Q: Is there any way to bind a different click handler to a QPushButton in PyQt5? I have a QPushButton:

```python
btn = QPushButton("Click me")
btn.clicked.connect(lambda: print("one"))
```

Later in my program, I want to rebind its click handler. I tried to achieve this by calling connect again:

```python
btn.clicked.connect(lambda: print("two"))
```

I expected the console to print only "two", but it actually printed both "one" and "two". In other words, I bound two click handlers to the button. How can I rebind the click handler?

A: Signals and slots in Qt are an implementation of the observer (pub-sub) pattern: many objects can subscribe to the same signal, and they can subscribe many times. They can also unsubscribe, with the disconnect function.

```python
from PyQt5 import QtWidgets, QtCore

if __name__ == "__main__":
    app = QtWidgets.QApplication([])

    def handler1():
        print("one")

    def handler2():
        print("two")

    button = QtWidgets.QPushButton("test")
    button.clicked.connect(handler1)
    button.show()

    def change_handler():
        print("change_handler")
        button.clicked.disconnect(handler1)
        button.clicked.connect(handler2)

    QtCore.QTimer.singleShot(2000, change_handler)
    app.exec()
```

In the case of a lambda you can only disconnect all subscribers at once with disconnect() (without arguments), which is fine for a button.
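To illustrate that lambda case concretely, here is a minimal sketch (assuming PyQt5, as in the question): calling disconnect() with no arguments drops every connected slot, after which the new handler can be attached.

```python
from PyQt5 import QtWidgets

app = QtWidgets.QApplication([])
btn = QtWidgets.QPushButton("Click me")
btn.clicked.connect(lambda: print("one"))

# Rebinding: drop *all* connected slots, then connect the new one.
# disconnect() raises TypeError if nothing is connected.
btn.clicked.disconnect()
btn.clicked.connect(lambda: print("two"))

btn.show()
app.exec()
```
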
Q: Python class not recognizing a list. I am attempting to build one of my first classes ever, and after checking some documentation and other StackOverflow questions I cannot figure out why I am getting NameError: name 'executed_trades' is not defined in the code listed below:

```python
class Position:
    def __init__(self):
        self.executed_trades = []

    def add_position(self, execution):
        if execution not in executed_trades:
            executed_trades.append(execution)
```

Does it not belong under __init__()? Is there something different about declaration in classes I am missing? It feels like a relatively simple error but I cannot seem to figure it out.

A: You are missing self in the add_position method when you refer to executed_trades:

```python
class Position:
    def __init__(self):
        self.executed_trades = []

    def add_position(self, execution):
        if execution not in self.executed_trades:
            self.executed_trades.append(execution)
```

Q: Decrypting a message with cryptography.fernet does not work. I just tried my hand at encrypting and decrypting data. I first generated a key, then encrypted data with it and saved it to an XML file. Now this data is read back and should be decrypted again, but I get the error "cryptography.fernet.InvalidToken".

```python
import xml.etree.cElementTree as ET
from cryptography.fernet import Fernet
from pathlib import Path

def load_key():
    """Load the previously generated key"""
    return open("../login/secret.key", "rb").read()

def generate_key():
    """Generate a key and save it into a file"""
    key = Fernet.generate_key()
    with open("../login/secret.key", "wb") as key_file:
        key_file.write(key)

def decrypt_message(encrypted_message):
    """Decrypt an encrypted message"""
    key = load_key()
    f = Fernet(key)
    message = encrypted_message.encode('utf-8')
    decrypted_message = f.decrypt(message)
    return decrypted_message

def decryptMessage(StringToDecrypt):
    decryptedMessage = decrypt_message(StringToDecrypt)
    return decryptedMessage

def loginToRoster(chrome):
    credentials = readXML()
    user = decryptMessage(credentials[0])
    pw = decryptMessage(credentials[1])
    userName = chrome.find_element_by_id('UserName')
    userName.send_keys(user)
    password = chrome.find_element_by_id('Password')
    password.send_keys(pw)
```

In the tuple "credentials" there are two encrypted strings. Please help -- I have already tried changing the formats, with no luck.

Edit -- error message:

```
Traceback (most recent call last):
  File "C:/Users/r/Documents/GitHub/ServiceEvaluationRK/source/main.py", line 27, in <module>
    login.loginToRoster(chrome)
  File "C:\Users\r\Documents\GitHub\ServiceEvaluationRK\source\login.py", line 106, in loginToRoster
    user = decryptMessage(credentials[0])
  File "C:\Users\r\Documents\GitHub\ServiceEvaluationRK\source\login.py", line 49, in decryptMessage
    decryptedMessage = decrypt_message(StringToDecrypt)
  File "C:\Users\r\Documents\GitHub\ServiceEvaluationRK\source\login.py", line 43, in decrypt_message
    decrypted_message = f.decrypt(message)
  File "C:\Users\r\Documents\GitHub\ServiceEvaluationRK\venv\lib\site-packages\cryptography\fernet.py", line 75, in decrypt
    timestamp, data = Fernet._get_unverified_token_data(token)
  File "C:\Users\r\Documents\GitHub\ServiceEvaluationRK\venv\lib\site-packages\cryptography\fernet.py", line 107, in _get_unverified_token_data
    raise InvalidToken
cryptography.fernet.InvalidToken
```

A: I found an answer to my problem: I used ASCII instead of utf-8, and I added .decode('ASCII') to both variables 'user' and 'pw' in the loginToRoster function. Now the encryption and decryption work fine. The loginToRoster function now looks like:

```python
def loginToRoster(chrome):
    credentials = readXML()
    user = decryptMessage(credentials[0]).decode('ASCII')
    pw = decryptMessage(credentials[1]).decode('ASCII')
    userName = chrome.find_element_by_id('UserName')
    userName.send_keys(user)
    password = chrome.find_element_by_id('Password')
    password.send_keys(pw)
```

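For reference, a minimal round trip with the same library: the token returned by encrypt() must reach decrypt() byte-for-byte (any alteration in storage raises InvalidToken), and decrypt() returns bytes that still need decoding.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()
f = Fernet(key)

token = f.encrypt("secret".encode("ascii"))   # token is bytes
# Store and load the token without altering it, then:
plaintext = f.decrypt(token).decode("ascii")  # decrypt() returns bytes
print(plaintext)  # secret
```
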
Q: I tried to write a guard for the case of an error in entering the input, but after that the program does not receive new input and continues looping. After I type 5 it keeps looping and never gets to the if statement.

```python
def facility():
    global user
    while user != 1 and user != 2 and user != 3 and user != 4:
        user = input("please choose between this four number. \n[1/2/3/4]\n")
    if user == 1:
        y = ("PBl Classroom")
    elif user == 2:
        y = ("meeting room")
    elif user == 3:
        y = ("Workstation Computer Lab,ITMS")
    elif user == 4:
        y = ("swimming pool")
    print("you have choose", y)

user = int(input("please choose your facility..\n "))
```

A: You use int(input(...)) on your first call, but input(...) in the function. Thus the values read inside the loop are strings, not integers, and your comparisons will always fail. Here is a fix with minor improvements:

```python
def facility():
    user = int(input("please choose your facility..\n "))
    while user not in (1, 2, 3, 4):
        user = int(input("please choose between this four number. \n[1/2/3/4]\n"))
    if user == 1:
        y = "PBl Classroom"
    elif user == 2:
        y = "meeting room"
    elif user == 3:
        y = "Workstation Computer Lab,ITMS"
    elif user == 4:
        y = "swimming pool"
    print("you have chosen", y)

facility()
```

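Note that the fix above still crashes with a ValueError if the user types something non-numeric. A small defensive sketch of the prompt, building on the same idea:

```python
def ask_choice():
    # Keep prompting until the input parses as one of 1-4.
    while True:
        raw = input("please choose between this four number. \n[1/2/3/4]\n")
        try:
            user = int(raw)
        except ValueError:
            continue  # not a number at all; ask again
        if user in (1, 2, 3, 4):
            return user
```
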
Q: Importing a CSV into a MySQL database (Django web app). I'm developing a web app in Django, and for its database I need to import a CSV file into a particular MySQL database. I searched around a bit and found many pages listing how to do this, but I'm a bit confused. Most pages say to do this:

```sql
LOAD DATA INFILE '<file>' INTO TABLE <tablename>
FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';
```

But I'm confused how Django would interpret this, since we haven't mentioned any column names here. I'm new to Django and even newer to databases, so I don't really know how this would work out.

A: It looks like you are in the database admin (i.e. PostgreSQL/MySQL); others above have given a good explanation for that. But if you want to import data into Django itself -- Python has its own csv implementation: import csv. If you're new to Django, I recommend installing something like the Django CSV Importer: http://django-csv-importer.readthedocs.org/en/latest/index.html (you install the add-on into your Python library). The author, unfortunately, has a typo in the docs: you have to do from csvImporter.model import CsvDbModel, not from csv_importer.model import CsvDbModel. In your models.py file, create something like:

```python
class MyCSVModel(CsvDbModel):
    pass

    class Meta:
        dbModel = Model_You_Want_To_Reference
        delimiter = ","
        has_header = True
```

Then go into your Python shell and do the following command:

```python
my_csv = MyCsvModel.import_data(data=open("my_csv_file_name.csv"))
```

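A plain-Django alternative, without the add-on, is to read the CSV with the standard library and create rows through the ORM. This is only a sketch: the app name, model name, and the assumption that the CSV header matches the model's field names are all hypothetical.

```python
import csv
from myapp.models import MyModel  # hypothetical app and model

with open("data.csv", newline="") as f:
    reader = csv.DictReader(f)  # uses the CSV header row as column names
    # Assumes each header matches a MyModel field name.
    MyModel.objects.bulk_create(MyModel(**row) for row in reader)
```
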
Q: Is there a way in Robot Framework to log an if-keyword only if it is true? I know there are no switch statements in RF, so I have 50 if-keywords (used because no switch exists). My log file is very long because all 50 if statements are logged, even those that are not true. Is there a way to log only the statements that are true? Here is how my code is written (there are 50 keywords like these):

```
# Access Apply ImportExportParams
\    Run Keyword If    '${Type}' == 'ImportExportParams' and '${DealId}' != 'None' and '${ScenarioId}' != 'None'    Call_API_ImportExportParams    ${DealId}    ${ScenarioId}    ${ProductId}
# Access bulk apply cheapest quote
\    Run Keyword If    '${Type}' == 'BulkApplyCheapest' and '${DealId}' != 'None' and '${ScenarioId}' != 'None'    Call_API_BulkApplyCheapest    ${DealId}    ${ScenarioId}    ${ProductId}
# SiteSelection
\    Run Keyword If    '${Type}' == 'SiteSelection' and '${DealId}' != 'None' and '${ScenarioId}' != 'None'    Call_API_SiteSelection    ${ProductId}    ${DealId}    ${ScenarioId}    ${Name}
# SiteSelectionFile
\    Run Keyword If    '${Type}' == 'SiteSelectionFile' and '${DealId}' != 'None' and '${ScenarioId}' != 'None'    Call_API_SiteSelectionFile    ${ProductId}    ${DealId}    ${ScenarioId}
\    Run Keyword If    '${Type}' == 'SiteSelectionFile2' and '${DealId}' != 'None' and '${ScenarioId}' != 'None'    Call_API_SiteSelectionFile2    ${ProductId}    ${DealId}    ${ScenarioId}
# SiteSelectionMultiple
\    Run Keyword If    '${Type}' == 'SiteSelectionMultiple' and '${DealId}' != 'None' and '${ScenarioId}' != 'None'    Call_API_SiteSelectionMultiple    ${ProductId}    ${DealId}    ${ScenarioId}
```

Thanks for your help :)

A: Maybe you are looking for the --removekeywords and --flattenkeywords command line options; for more details have a look at "Removing and flattening keywords" in the user guide. Your code suggests that all these conditions run under a FOR loop (the \ is the older FOR loop syntax), so running

```
robot --removekeywords FOR testsuitefilename.robot
```

removes all passed iterations from FOR loops except the last one (output screenshot omitted). In most cases the passed steps are not needed, and I think this suffices for your requirement.

Another possibility is approaching the problem the other way, instead of trying to keep the false-condition keywords out of the logs. For example: '${DealId}' != 'None' and '${ScenarioId}' != 'None' is common to every condition, so check it once. Then, instead of testing every type, check whether the ${Type} value exists in a collection (list or dictionary) and use the value itself to build the keyword name to call. This reduces the code to:

```
*** Test Cases ***
Example
    Run Keyword If    '${DealId}' != 'None' and '${ScenarioId}' != 'None'    Execute the type of Keyword    SiteSelection

*** Keywords ***
Execute the type of Keyword
    [Arguments]    ${type}
    ${Type_list}=    Create List    ImportExportParams    BulkApplyCheapest    SiteSelection    SiteSelectionFile
    ...    SiteSelectionFile2    SiteSelectionMultiple
    ${Status}    ${index}    Run Keyword And Ignore Error    Get Index From List    ${Type_list}    ${type}
    ${Keyword_type}    Get From List    ${Type_list}    ${index}
    Log    ${Status} ${index}
    Run Keyword    Call_API_${Keyword_type}
```

Q: Tensorflow: customise an LSTM cell with subtractive gating. I want to use the subtractive gating explained in this paper. I'm using Tensorflow (on CPU), and currently the code is:

```python
import tensorflow.contrib.rnn as RNNCell

tgt_cell = RNNCell.LSTMCell(num_units=flags.hidden_size, state_is_tuple=True)
tgt_dropout_cell = RNNCell.DropoutWrapper(tgt_cell, output_keep_prob=self.keep_prob)
tgt_stacked_cell = RNNCell.MultiRNNCell([tgt_dropout_cell] * self.opt.num_layers, state_is_tuple=True)
```

According to the paper, the changes are as follows (the LSTM equations and the modified update were given as images in the original question and are not reproduced here): the gating should be subtractive rather than multiplicative. When I click on "LSTMCell" in my code, it opens rnn_cells.py and I'm not sure which part should be changed. Can someone please help me make the changes?

A: That's fairly advanced. Look at how RNNCell.LSTMCell is implemented and write your own cell with the changes you want. If you look here https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/rnn/python/ops/rnn_cell.py, the operations for the cells are defined in call, starting around line 220 -- find the ops you need there.
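To make "write your own cell" more concrete, here is a minimal TF 1.x sketch of a custom RNNCell skeleton. The paper's exact equations were lost with the images, so the subtractive update rules below are placeholders -- substitute the real ones from the paper.

```python
import tensorflow as tf

class SubtractiveGateCell(tf.nn.rnn_cell.RNNCell):
    """Sketch of an LSTM-like cell with subtractively combined gates.
    The update equations marked ASSUMED are placeholders, not the paper's."""

    def __init__(self, num_units):
        super(SubtractiveGateCell, self).__init__()
        self._num_units = num_units

    @property
    def state_size(self):
        return tf.nn.rnn_cell.LSTMStateTuple(self._num_units, self._num_units)

    @property
    def output_size(self):
        return self._num_units

    def call(self, inputs, state):
        c, h = state
        # One dense layer producing all four pre-activations, as in LSTMCell.
        concat = tf.layers.dense(tf.concat([inputs, h], 1), 4 * self._num_units)
        i, g, f, o = tf.split(concat, 4, axis=1)
        # ASSUMED subtractive cell update -- replace with the paper's equation.
        new_c = c - tf.sigmoid(f) + tf.sigmoid(i) * tf.tanh(g)
        # ASSUMED subtractive output gating -- replace with the paper's equation.
        new_h = tf.tanh(new_c) - tf.sigmoid(o)
        return new_h, tf.nn.rnn_cell.LSTMStateTuple(new_c, new_h)
```

A cell built this way can be dropped into DropoutWrapper/MultiRNNCell exactly like the LSTMCell in the question.
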
Q: "'list' object cannot be interpreted as an integer" in RandomForest code. I have used the following code from a machine learning book:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
import mglearn

X, y = make_moons(n_samples=100, noise=0.25, random_state=3)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)

# fit random forest
forest = RandomForestClassifier(n_estimators=5, random_state=2)
forest.fit(X_train, y_train)

# draw random forest
fix, axes = plt.subplots(2, 3, figsize=(20, 10))
for i, (ax, tree) in enumerate(list(zip(axes.ravel())), forest.estimators_):
    ax.set_title("tree{}".format(i))
    mglearn.plots.plot_tree_partition(X_train, y_train, tree, ax=ax)

mglearn.plots.plot_2d_separator(forest, X_train, fill=True, ax=axes[-1, -1], alpha=.4)
axes[-1, -1].set_title("Random Forest")
mglearn.discrete_scatter(X_train[:, 0], X_train[:, 1], y_train)
```

It raises this error: TypeError: 'list' object cannot be interpreted as an integer. I understood that in Python 3 you need to wrap zip in a list call; the book originally had for i, (ax, tree) in enumerate(zip(axes.ravel(), forest.estimators_)): and I added the list call, but it still shows this error. Can you help me clarify what is wrong?

A: In

```python
enumerate(list(zip(axes.ravel())), forest.estimators_)
```

forest.estimators_ is outside your list(zip()) call and is treated as the second argument of enumerate, which, per the docs, is the start index. Since forest.estimators_ is a list, this fails because an integer is required. What you meant to write is:

```python
enumerate(list(zip(axes.ravel(), forest.estimators_)))
```

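Dropping the fix back into the plotting loop from the question (the list() wrapper isn't actually needed -- enumerate happily consumes the zip iterator):

```python
for i, (ax, tree) in enumerate(zip(axes.ravel(), forest.estimators_)):
    ax.set_title("tree {}".format(i))
    mglearn.plots.plot_tree_partition(X_train, y_train, tree, ax=ax)
```
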
Q: Get the name attribute value for a formset input with the appropriate prefix and form index. I am manually displaying modelformset_factory values in a Django template using the snippet below. One of the inputs uses a select, and I'm populating the options from another context value passed by the view after making external API calls, so there is no model relationship between that data and the form I'm displaying.

View:

```python
my_form = modelformset_factory(
    MyModel,
    MyModelForm,
    fields=("col1", "col2"),
    extra=0,
    min_num=1,
    can_delete=True,
)
```

Template:

```
{{ my_form.management_form }}
{% for form in my_form %}
    <label for="{{ form.col1.id_for_label }}">{{ form.col1.label }}</label>
    {{ form.col1 }}
    <label for="{{ form.col2.id_for_label }}">{{ form.col2.label }}</label>
    <select id="{{ form.col2.id_for_label }}" name="{{ form.col2.name }}">
        <option disabled selected value> ---- </option>
        {% for ctx in other_ctx %}
            <option value="{{ ctx.1 }}">{{ ctx.2 }}</option>
        {% endfor %}
    </select>
{% endfor %}
```

The other_ctx populating the select options is a List[Tuple]. I am trying to get the name value for the col2 input using {{ form.col2.name }}, but only col2 is returned instead of form-0-col2. I could prepend the form-0- value myself, but I'm wondering: can I get it automatically? I'm assuming the formset should be aware of the form names with the appropriate index coming from the view. Also, is there a way to include the select options in the initial formset sent to the template so that I can simply use {{ form.col2 }}? I just saw that I could use form-{{ forloop.counter0 }}-{{ form.col2.name }} as an alternative as well, if getting it automatically does not work.

A: I think what you are after is form.col2.html_name -- from the docs: this is the name that will be used in the widget's HTML name attribute, and it takes the form prefix into account.
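Applied to the template in the question, only the name attribute of the select needs to change:

```
<select id="{{ form.col2.id_for_label }}" name="{{ form.col2.html_name }}">
```
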
Q: Identify values in column A not in column B or column C using Python. Python newbie looking for help: a dataset has 3 numerical columns, A, B, and C. How do I find the values that exist only in A, but not in B and C?

A: Your question needs more details, but you can adapt the code below:

```python
>>> A = [1, 2, 3]
>>> B = [1, 3, 4]
>>> C = [1, 4, 5]
>>> set(A).difference(set(B).union(C))
{2}
```

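Since the question mentions a dataset with columns rather than plain lists, here is a pandas variant of the same idea, assuming a DataFrame with columns A, B, and C:

```python
import pandas as pd

df = pd.DataFrame({"A": [1, 2, 3], "B": [1, 3, 4], "C": [1, 4, 5]})

# Values of A that appear in neither B nor C
result = df.loc[~df["A"].isin(df["B"]) & ~df["A"].isin(df["C"]), "A"]
print(result.tolist())  # [2]
```
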
Q: BranchPythonOperator not running when a past task was skipped. This is how my Airflow DAG looks (diagram omitted). There is a branch task which checks a condition and then either runs task B directly, skipping task A, or runs task A and then task B. When task A is skipped, in the next (future) run of the DAG the branch task never runs (execution stops at the main task), although the default trigger rule is 'none_failed' and no task in the DAG has failed -- only skipped.

```python
default_args = {
    'owner': 'airflow',
    'depends_on_past': True,
    'wait_for_downstream': True,
    'email_on_failure': True,
    'email_on_retry': False,
    'retries': 1,
    'retry_delay': timedelta(minutes=2),
    'trigger_rule': 'none_failed'
}

dag = DAG(
    dag_id='main_task',
    default_args=default_args,
    schedule_interval='0 2 * * *',
    start_date=datetime(2021, 6, 2),
    max_active_runs=8,
)

def check_condition():
    if conditionA == conditionB:
        return ['task_A', 'task_B']
    else:
        return 'task_B'

branch_task = BranchPythonOperator(
    task_id='branching',
    python_callable=check_condition,
    dag=dag,
    depends_on_past=False,
)
```

Using Airflow 1.10.12. Could someone explain why the branch task never runs after task A was skipped in the previous run?

A: The reason this happens isn't related to trigger rules. It happens because default_args in the DAG constructor contains wait_for_downstream=True. So when you write:

```python
branch_task = BranchPythonOperator(
    task_id='branching',
    python_callable=check_condition,
    dag=dag,
    depends_on_past=False,
)
```

what actually happens is that depends_on_past is set back to True by the constructor of BaseOperator. Since wait_for_downstream=True makes a task instance also wait for all task instances immediately downstream of the previous task instance to succeed, the BranchPythonOperator never starts running. This is a problem, because a branch operator usually has direct downstream tasks in Skipped status. You can fix it with:

```python
branch_task = BranchPythonOperator(
    task_id='branching',
    python_callable=check_condition,
    dag=dag,
    depends_on_past=False,
    wait_for_downstream=False,
)
```

Note that this is an issue only in Airflow < 2.0.0, where wait_for_downstream accepts only the Success status (see the 1.10 operator description). In Airflow >= 2.0.0 the issue won't occur, as the behavior was changed in a PR so that wait_for_downstream accepts both Successful and Skipped tasks (see the 2.0 operator description).
Q: User authentication for Spotify in Python using Spotipy on AWS. I am building a web app that requires a Spotify user to log in with their credentials in order to access their playlists. I'm using the Spotipy Python wrapper for Spotify's Web API and generating an access token with:

```python
token = util.prompt_for_user_token(username, scope, client_id, client_secret, redirect_uri)
```

The code runs without any issues on my local machine, but when I deploy the web app on AWS it does not proceed to the redirect URI and allow the user to log in. I have tried transferring the ".cache-username" file via SCP to my AWS instance and got it to work in a limited fashion. Is there a solution to this issue? I'm fairly new to AWS, so I don't have much idea where to look. Any help would be greatly appreciated. Thanks in advance!

A: The quick way:

1. Run the script locally so the user can sign in once.
2. In the local project folder, you will find a file .cache-{userid}.
3. Copy this file to your project folder on AWS.
4. It should work.

The database way: there is currently an open feature request on GitHub suggesting tokens be stored in a DB; feel free to subscribe to the issue or to contribute: https://github.com/plamere/spotipy/issues/51. It's also possible to write a bit of code to persist new tokens into a DB and then read from it. That's what I'm doing as part of an AWS Lambda using DynamoDB; it's not very nice but it works perfectly: https://github.com/resident-archive/resident-archive/blob/a869b73f1f64538343be1604d43693b6165cc58a/functions/to-spotify/main.py#L129..L157

The API way: this is probably the best way, as it allows multiple users to sign in simultaneously. However, it is a bit more complex and requires hosting a server that's reachable by URL. This example uses Flask, but one could adapt it to Django, for example: https://github.com/plamere/spotipy/blob/master/examples/app.py
Q: Python "not in dict" condition performance. Does anybody know which is better to use in terms of speed and resources? Links to trusted sources would be much appreciated.

```python
if key not in dictionary.keys():
```

or

```python
if not dictionary.get(key):
```

A: Firstly, you'd do

```python
if key not in dictionary:
```

since dicts are iterated over by keys. Secondly, the two statements are not equivalent -- the second condition would be true if the corresponding value is falsy (0, "", [] etc.), not only if the key doesn't exist. Lastly, the first method is definitely faster and more pythonic. Function/method calls are expensive. If you're unsure, timeit.
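A quick way to check the speed claim yourself, following the timeit suggestion (absolute numbers depend on your machine; the relative ordering is what matters):

```python
import timeit

setup = "d = {i: str(i) for i in range(1000)}"
print(timeit.timeit("-1 not in d", setup=setup))    # membership test
print(timeit.timeit("not d.get(-1)", setup=setup))  # method call, typically slower
```
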
Q: Apply a function to two lists of lists and store the result in a DataFrame. To simplify my problem, say I have two lists of lists and a function as shown below:

```python
OP = [[1,2,3], [6,2,7,4], [4,1], [8,2,6,3,1], [6,2,3,1,5], [3,1], [3,2,5,4]]
AP = [[2,4], [2,3,1]]

def f(listA, listB):
    return len(listA + listB)  # my real f returns a number as well
```

I want to get f(OP[i], AP[j]) for each i, j, so my idea is to create a pandas.DataFrame which looks like this:

```
        AP[0]            AP[1]
OP[0]   f(AP[0],OP[0])   f(AP[1],OP[0])
OP[1]   f(AP[0],OP[1])   f(AP[1],OP[1])
OP[2]   f(AP[0],OP[2])   f(AP[1],OP[2])
OP[3]   f(AP[0],OP[3])   f(AP[1],OP[3])
OP[4]   f(AP[0],OP[4])   f(AP[1],OP[4])
OP[5]   f(AP[0],OP[5])   f(AP[1],OP[5])
OP[6]   f(AP[0],OP[6])   f(AP[1],OP[6])
```

My real data has around 80,000 lists in OP and 20 lists in AP, and the function f is somewhat time-consuming, so computational cost matters. My idea was to construct a pandas.Series of length len(AP) for each element of OP and then append the Series to the final DataFrame; for example, for OP[0], first create a Series holding f(OP[0], AP[i]) for each i. I am stuck constructing the Series: I tried pandas.Series.apply() and map(), but neither of them worked since my function f needs two parameters. I'm also open to any other suggestions to get f(OP[i], AP[j]) for each i, j. Thanks.

A: You could do this with a nested list comprehension, followed by pandas.DataFrame.from_records:

```python
import pandas as pd

records = [tuple(f(A, O) for A in AP) for O in OP]
pd.DataFrame.from_records(records)
```

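To get row and column labels matching the OP[i]/AP[j] layout from the question, the index and columns of from_records can be set explicitly. A small self-contained sketch:

```python
import pandas as pd

OP = [[1, 2, 3], [6, 2, 7, 4]]
AP = [[2, 4], [2, 3, 1]]

def f(listA, listB):
    return len(listA + listB)

records = [tuple(f(A, O) for A in AP) for O in OP]
df = pd.DataFrame.from_records(
    records,
    index=["OP[{}]".format(i) for i in range(len(OP))],
    columns=["AP[{}]".format(j) for j in range(len(AP))],
)
print(df)
```
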
Q: Share functions across Colaboratory files. I'm sharing a Colaboratory file with my colleagues and we are having fun with it. But it's getting bigger and bigger, so we want to offload some of the functions to another Colaboratory file. How can we load one Colaboratory file into another?

A: There's no way to do this right now, unfortunately: you'll need to move the code into a .py file that you load (say, by cloning from GitHub).
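A minimal sketch of the clone-and-import approach inside a Colab cell -- the repository URL and module name here are hypothetical placeholders:

```python
# In a Colab cell -- hypothetical repo and module names
!git clone https://github.com/your-user/shared-utils.git

import sys
sys.path.append("/content/shared-utils")  # make the cloned folder importable

import helpers  # assumed module name inside the repo
```
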
Q: Cannot return a float value of -1.00. I am doing an assignment for a first-year computer science paper at university. In one of the questions, if the gender is incorrect the function is supposed to return a value of -1, but the testing column says the expected value is -1.00, and I cannot seem to return '-1.00'; it always returns -1.0 (with one zero). I used format to render the value with two decimal places (so it appears with two zeros), but converting it back to a float always returns -1.0:

```python
return float('{:.2f}'.format(-1))
```

A: This isn't as clear as it could be. Does your instructor or testing software expect a string '-1.00'? If so, just return that. Is a float type expected? Then return -1.0; the number of digits shown does not affect the value.
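A short demonstration of the distinction: formatting produces a string with a chosen number of digits, while the underlying float value is unchanged.

```python
value = -1.0
print(value)                     # -1.0  -- the float's default repr
print("{:.2f}".format(value))    # -1.00 -- a string, not a new float
print(float("{:.2f}".format(value)))  # -1.0 -- converting back loses the formatting
```
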
Q: nosetests' default encoding is ascii while the main program's is utf-8. All my files start with `# -*- coding: utf-8 -*-`. My virtualenv is set to Python 3.5 (virtualenv -p python3 venv). My app hierarchy looks like this:

```
app/app/[file].py
        __init__.py
   /tests/test_[file].py
          __init__.py
   main.py
```

python --version is 3.5 (venv is active). If I run python main.py and use sys.getdefaultencoding() and print("é"), everything is fine; I get utf-8 and é. Under /tests, if I run nosetests I get errors related to unicode, which is expected, since sys.getdefaultencoding() there gives me ascii. which pip, which nosetests and which python all point to my venv. Why would nose default to ascii when everything else does not?

pip freeze:

```
appdirs==1.4.2
beautifulsoup4==4.5.3
nose==1.3.7
packaging==16.8
pkg-resources==0.0.0
pyparsing==2.1.10
requests==2.13.0
six==1.10.0
```

Edit: an example of a nose error would be TypeError: descriptor 'strip' requires a 'str' object but received a 'unicode'. I get why the error happens; my misunderstanding is why only nose does it. I'm on Ubuntu 16.04.

A: The Python 2.7 nose installed globally on my system was at fault. Outside the venv, I ran pip uninstall nose. Then I activated my virtualenv, which uses Python 3.5; from inside it, the nosetests command could only resolve to the venv's nose. It worked! It seems nosetests was prioritizing the "global" nose over the venv-specific one. I still don't know why it was this way.
Q: Undefined is not an object (TensorFlow image recognition). When trying to integrate a pretrained TensorFlow model with Expo (React Native), the following error occurs within these lines:

```javascript
async classify(photo) {
  try {
    const tfImageRecognition = new TfImageRecognition({
      model: require('./assets/output_graph.pb'),
      labels: require('./assets/output_labels.txt')
    });

    const results = await tfImageRecognition.recognize({
      image: photo,
      inputName: "input",    // Optional, defaults to "input"
      inputSize: 224,        // Optional, defaults to 224
      outputName: "output",  // Optional, defaults to "output"
      maxResults: 3,         // Optional, defaults to 3
      threshold: 0.1,        // Optional, defaults to 0.1
    });

    results.forEach(result =>
      console.log(
        result.id,         // Id of the result
        result.name,       // Name of the result
        result.confidence  // Confidence value between 0 - 1
      )
    );

    await tfImageRecognition.close(); // Necessary in order to release objects on native side
  } catch (e) {
    console.log(e);
  }
}
```

which generates the following error:

```
[23:30:09] undefined is not an object (evaluating 'RNImageRecognition.initImageRecognizer')
- node_modules\react-native-tensorflow\index.js:121:35 in TfImageRecognition
```

I have been trying to find the reason this is not working but cannot find a definite solution. The relative paths linking to the assets are correct and the extensions are present in app.json. Furthermore, the model was trained using the TensorFlow API, which should make it compatible with the React Native implementation. I am using Expo SDK version 28.0.0 and react-native-tensorflow version ^0.1.8.

A: I had the same problem; in my case I forgot to link the library:

```
$ react-native link react-native-tensorflow
```

Q: Appending two texts while keeping the line structure. Sorry in advance if my question is not smart enough, but I am new to Python. I have two string files, file A and file B. File A (the master file) is something like this:

```
{
sdfsf
sdfsdf
sdfsd
sdfdf
}
```

File B is similar. I want to append file A to file B (and to other files later), but when I try to append it with "with open" everything ends up on one line. I want to manipulate it line by line (to add or remove lines), so I need it to be a list; I split it into a list of lines, but then, when I append it to the other file, the line structure is lost and the text ends up on one line. So I have tried this, and again it doesn't work:

```python
import os

file_A = open('C:\\Users\\admin\\Desktop\\...\\Sofa.txt').readlines()
file_B = open('C:\\Users\\admin\\Desktop\\.... ....\\....\\...\\view_1.txt', 'a')

for line in File_A:
    write.line
file.close()
```

A: To append the contents of file A to file B, you can just treat it as a single string:

```python
with open('C:\\Users\\admin\\Desktop\\...\\Sofa.txt') as file_a:
    contents_a = file_a.read()

with open('C:\\Users\\admin\\Desktop\\.... ....\\....\\...\\view_1.txt', 'a') as file_b:
    file_b.write(contents_a)
```

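Since the question also asks about editing line by line before appending, here is a small sketch of that variant (the file names are placeholders): readlines() keeps each line's trailing newline, so writelines() preserves the line structure.

```python
# Read file A line by line, filter/edit, then append to file B
with open("sofa.txt") as f:          # hypothetical path
    lines = f.readlines()            # each item keeps its trailing '\n'

lines = [line for line in lines if line.strip()]  # e.g. drop blank lines

with open("view_1.txt", "a") as f:   # hypothetical path
    f.writelines(lines)
```
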
Q: Making my Cython code more efficient. I've written a Python program which I am trying to cythonize. Is there any suggestion how to make the for-loop more efficient, as it is taking 99% of the time? This is the for-loop:

```python
for i in range(l):
    b1[i] = np.nanargmin(locator[i,:])   # closest point
    locator[i, b1[i]] = NAN              # do not consider closest point
    b2[i] = np.nanargmin(locator[i,:])   # 2nd closest point
    Adjacents[i,0] = np.array((Existed_Pips[b1[i]]), dtype=np.double)
    Adjacents[i,1] = np.array((Existed_Pips[b2[i]]), dtype=np.double)
```

This is the rest of the code:

```python
import numpy as np
cimport numpy as np
from libc.math cimport NAN  #, isnan

def PIPs(np.ndarray[np.double_t, ndim=1, mode='c'] ys, unsigned int nofPIPs, unsigned int typeofdist):
    cdef:
        unsigned int currentstate, j, i
        np.ndarray[np.double_t, ndim=1, mode="c"] D
        np.ndarray[np.int64_t, ndim=1, mode="c"] Existed_Pips
        np.ndarray[np.int_t, ndim=1, mode="c"] xs
        np.ndarray[np.double_t, ndim=2] Adjacents, locator, Adjy, Adjx, Raw_Fire_PIPs, Raw_Fem_PIPs
        np.ndarray[np.int_t, ndim=2, mode="c"] PIP_points, b1, b2

    cdef unsigned int l = len(ys)
    xs = np.arange(0, l, dtype=np.int)          # column vector with xs
    PIP_points = np.zeros((l,1), dtype=np.int)  # binary indexation
    PIP_points[0] = 1    # ones indicate the PIP points; the first two PIPs are the first and last observation
    PIP_points[-1] = 1
    Adjacents = np.zeros((l,2), dtype=np.double)
    currentstate = 2     # initial PIPs

    while currentstate <= nofPIPs:
        Existed_Pips = np.flatnonzero(PIP_points)
        currentstate = len(Existed_Pips)
        locator = np.full((l, currentstate), NAN, dtype=np.double)
        for j in range(currentstate):
            locator[:,j] = np.absolute(xs - Existed_Pips[j])

        b1 = np.zeros((l,1), dtype=np.int)
        b2 = np.zeros((l,1), dtype=np.int)
        for i in range(l):
            b1[i] = np.nanargmin(locator[i,:])   # closest point
            locator[i, b1[i]] = NAN              # do not consider closest point
            b2[i] = np.nanargmin(locator[i,:])   # 2nd closest point
            Adjacents[i,0] = np.array((Existed_Pips[b1[i]]), dtype=np.double)
            Adjacents[i,1] = np.array((Existed_Pips[b2[i]]), dtype=np.double)

        # calculate distance
        Adjx = Adjacents
        Adjy = np.array([ys[np.array(Adjacents[:,0], dtype=np.int)],
                         ys[np.array(Adjacents[:,1], dtype=np.int)]]).transpose()
        Adjx[Existed_Pips,:] = NAN  # existing PIPs are not candidates for new PIPs
        Adjy[Existed_Pips,:] = NAN

        if typeofdist == 1:  # Euclidean distance
            ED = np.power(np.power((Adjx[:,1]-xs),2) + np.power((Adjy[:,1]-ys),2),(0.5)) \
               + np.power(np.power((Adjx[:,0]-xs),2) + np.power((Adjy[:,0]-ys),2),(0.5))
            EDmax = np.nanargmax(ED)
            PIP_points[EDmax] = 1
        currentstate = currentstate + 1

    return np.array([Existed_Pips, ys[Existed_Pips]]).transpose()
```

A: A couple of suggestions. Take the calls to np.nanargmin out of the loop (use the axis parameter to operate on the whole array at once); this reduces the number of Python function calls you have to make:

```python
b1 = np.nanargmin(locator, axis=1)
locator[np.arange(locator.shape[0]), b1] = np.nan
b2 = np.nanargmin(locator, axis=1)
```

Your assignment to Adjacents is odd -- you seem to be creating a length-1 array for the right-hand side first. Instead just do:

```python
Adjacents[i,0] = Existed_Pips[b1[i]]
# ...
```

However, in this case you can also take both lines outside the loop, eliminating the entire loop:

```python
Adjacents = np.vstack((Existed_Pips[b1], Existed_Pips[b2])).T
```

All of this relies on numpy, rather than Cython, for the speed-up, but it probably beats your version.
Q: Referencing query results with Python in Maya. I've been working on a script in Maya that lets me work with cameras without going into the Attribute Editor all the time. Currently I have a menu with a menu item, and on that menu item the checkBox flag is active. When the check box is toggled, it runs a command that prints the state of the check box. What I would like is an if statement that toggles the dof attribute on a camera by reading the state of that check box. I know how to work with if statements and how to find the correct camera, but I don't know how to query the result. Part of the script is below; the if statement (line four) is where I am having issues. Thank you for your help!

```python
# window functions go here
def dofToggle(self):
    print(cmds.menuItem("dof", q=1, cb=1))  # query the result
    if (cmds.menuItem("dof") == 1):
        cmds.setAttr(camera1.dof=True)

# window settings go here
if (cmds.window("Camera Tools", exists=True)):
    cmds.deleteUI("Camera Tools")

cmds.window(title="Camera Tools", nestedDockingEnabled=True, rtf=True, sizeable=False,
            menuBar=True, menuBarResize=True, menuBarVisible=True)
cmds.menu(label="dof")
cmds.menuItem("dof", label="on/off", checkBox=True, command=dofToggle)
```

A: To get the DOF of the camera, use this command:

```python
import maya.cmds as cmds
print(cmds.camera('cameraShape1', q=True, dof=True))
```

To disable the DOF of the camera, use this command:

```python
cmds.camera('cameraShape1', e=True, dof=False)
```

So your if statement should look like this:

```python
if cmds.camera('cameraShape1', q=True, dof=True) == 1:
    cmds.camera('cameraShape1', e=True, dof=False)
```

Q: Executing a command outside a conda env. I'm activating a conda environment at the beginning of a script, but inside it I want to execute one command outside the conda environment using os.system(), within a loop. Example:

```python
# conda env active here ...
for n in range(5):
    # some code here, within the conda environment
    # only the following command should be executed outside the current conda environment
    os.system('some command ...')
    # some code here, within the same conda environment
# conda env continues ...
```

Is this possible?

A: Commands run with os.system will inherit the environment variables, and hence run in the activated conda env:

```
$ which python
/usr/bin/python
$ python -c "import os; os.system('which python')"
/usr/bin/python
$ conda activate
(base) $ which python
/Users/user/miniconda3/bin/python
(base) $ python -c "import os; os.system('which python')"
/Users/user/miniconda3/bin/python
```

and there aren't any options to manipulate the environment variables of the child process without actually manipulating the current environment, which you likely don't want to do. Instead, you want the subprocess module, which provides more control over how the subprocess is run. As a simple example, let's strip $PATH of any entries containing "conda" and rerun with this reduced $PATH:

```python
import os
import subprocess

path_cur = os.environ['PATH']
# remove '*conda*' entries
path_new = ':'.join(p for p in path_cur.split(':') if 'conda' not in p)

subprocess.run(['which', 'python'], env={'PATH': path_cur})
# /Users/user/miniconda3/bin/python
# CompletedProcess(args=['which', 'python'], returncode=0)

subprocess.run(['which', 'python'], env={'PATH': path_new})
# /usr/bin/python
# CompletedProcess(args=['which', 'python'], returncode=0)
```

Q: (Originally partly in Thai.) I have a problem reading a CSV file and uploading it with Flask. I have just started learning Flask and Python. When I upload a CSV file, I want the data in the file to be shown on a web page (generated with HTML). Right now my web page shows the raw DataFrame text:

```
    Timestamp                              ...  เลือกข้อที่ถูกที่สุด
0   2561/12/25 2:30:50 หลังเที่ยง GMT+7    ...  NaN
1   2561/12/25 2:31:40 หลังเที่ยง GMT+7    ...  NaN
2   2561/12/25 2:32:01 หลังเที่ยง GMT+7    ...  NaN
3   2561/12/25 2:32:15 หลังเที่ยง GMT+7    ...  NaN
4   2561/12/25 2:33:18 หลังเที่ยง GMT+7    ...  NaN
5   2561/12/25 2:39:02 หลังเที่ยง GMT+7    ...  ตัวเลือก 1
6   2561/12/25 2:40:19 หลังเที่ยง GMT+7    ...  NaN
7   NaN                                    ...  NaN
8   NaN                                    ...  NaN
9   ,ขอโทษค่ะ,ตามนั้ค่ะ                      ...  NaN
10  NaN                                    ...  NaN
11  NaN                                    ...  NaN
12  NaN                                    ...  NaN

[13 rows x 16 columns]
```

but I want a proper HTML table (screenshot omitted). Thank you for your help.

A: Try pandas.DataFrame.to_html. For example:

```python
>>> print(yourdataframe.to_html())
```

Remember that Python and HTML are different structures; you have to set up the HTML table properly. The output looks like:

```html
<table border="1" class="dataframe">
  <thead>
    <tr style="text-align: right;">
      <th></th>
      <th>foo1</th>
      <th>foo2</th>
      <th>foo3</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <th>0</th>
      <td>-0.623329</td>
      <td>0.086472</td>
      <td>0.506933</td>
    </tr>
    <tr>
      <th>1</th>
      <td>0.988126</td>
      <td>0.172142</td>
      <td>0.903697</td>
    </tr>
  </tbody>
</table>
```

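Tying that into the upload flow the question describes, here is a minimal Flask sketch; the route and the form field name "file" are assumptions, not from the original post.

```python
from flask import Flask, request
import pandas as pd

app = Flask(__name__)

@app.route("/upload", methods=["POST"])
def upload():
    # Assumes the upload form's file input is named "file".
    # request.files["file"] is file-like, so pandas can read it directly.
    df = pd.read_csv(request.files["file"])
    return df.to_html()  # render the DataFrame as an HTML table
```
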
Q: python pandas: replace a string value of one column inside another string column, marking it with a special token. There is a dataframe like the following:

```
id  num  text
1   1.2  price is 1.2
1   2.3  price is 1.2 or 2.3
2   3    The total value is $3 and $130
3   5    The apple value is 5dollar and $150
```

I want to replace the num in the text with the token 'UNK', so the new dataframe becomes:

```
id  num  text
1   1.2  price is UNK
1   2.3  price is 1.2 or UNK
2   3    The total value is UNK and 130
3   5    The apple value is UNK dollar and $150
```

My current code is:

```python
df_dev['text'].str.replace(df_dev['num'], 'UNK')
```

and there is an error: TypeError: 'Series' objects are mutable, thus they cannot be hashed.

A: Let us use regex and replace:

```python
df.text.replace(regex=r'(?i)' + df.num.astype(str), value="UNK")
# 0    price is UNK
# 1    price is 1.2 or UNK
# 2    The total value is UNK
# Name: text, dtype: object
# df.text = df.text.replace(regex=r'(?i)' + df.num.astype(str), value="UNK")
```

Update -- to only match the number as a whole word, pad with spaces:

```python
(df.text + ' ').replace(regex=r'(?i) ' + df.num.astype(str) + ' ', value=" UNK ")
# 0    price is UNK
# 1    price is 1.2 or UNK
# 2    The total value is UNK and 130
# Name: text, dtype: object
```

Q: Concatenate a collection of list-of-DataFrames results from pd.read_html into one DataFrame. I have DF[number] = pd.read_html(url.text). I want to concatenate or join the DataFrame lists -- there are hundreds, e.g. DF[400] -- into a single pandas DataFrame. The dataframes come back as lists, so it's a collection of lists of DataFrames. They look like:

```
     Vessel                            Built  GT     DWT     Size (m)   Unnamed: 5
0  x XIN HUA Bulk Carrier              2012   44543  82269   229 x 32
1  b FRANCESCO CORRADO Bulk Carrier    2008   40154  77061   225 x 32
2  5 NAN XIN 17 Bulk Carrier           2001   40570  75220   225 x 32
3  p DIAMOND INDAH Bulk Carrier        2002   43321  77830   229 x 37
4  NaN PRIME LILY Bulk Carrier         2012   44485  81507   229 x 32
5  s EVGENIA Bulk Carrier              2011   92183  176000  292 x 45
```

I tried:

```python
df[number] = pd.read_html(url.text)
for number in range(494):
    df = pd.concat(df[number])
```

but that doesn't work, and methods like

```python
df1 = pd.concat(df[1])
df2 = pd.concat(df[2])
df3 = pd.concat(df[3])
dfx = pd.concat([df1, df2, df3], ignore_index=True)
```

are not what I want either, since there are hundreds of these list entries. I want one pandas DataFrame that joins all of the per-page lists. To be clear: the df container of the lists is a dict, while each df[1] is a list.

A: You can use a list comprehension:

```python
pd.concat([dfs[i] for i in range(len(dfs))])
```

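One caveat: since each pd.read_html call returns a *list* of DataFrames, the dict's values are lists, and pd.concat needs actual DataFrames. A sketch that flattens the dict of lists first (assuming df is the dict described in the question):

```python
import pandas as pd

# df maps page number -> list of DataFrames returned by pd.read_html
all_tables = [table for tables in df.values() for table in tables]
combined = pd.concat(all_tables, ignore_index=True)
```
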
Q: I am not getting output for the map widget in Jupyter Notebook. I am working in Jupyter Notebook and installed the ArcGIS API. When I call the map from that API, the map widget is not showing. All the features of the ArcGIS API work quite well except its map widget. Following is the code:

```python
from arcgis.gis import GIS

myGIS = GIS()
myGIS.map()
```

The code above shows only the following line of text instead of a world map:

```
MapView(layout=Layout(height='400px', width='100%'))
```

A: Using Chrome was my answer. Everything works fine as long as Chrome is the browser. Cheers.
Q: Python class: global/local variable name not defined. I have two sets of code for the same purpose: in one I use a class (the first piece of code below) to organize my code, and in the other I just define functions. In the class version I get NameError: global name '...' is not defined.

```python
from Tkinter import *
import ttk
import csv

USER_LOGIN = "user_login.csv"

class Login:
    def __init__(self, master):
        frame = Frame(master)
        frame.pack()

        lment1 = StringVar()
        lment2 = StringVar()

        self.usernameLabel = Label(frame, text="Username:")
        self.usernameLabel.grid(row=0, sticky=E)
        self.passwordLabel = Label(frame, text="Password:")
        self.passwordLabel.grid(row=1, sticky=E)
        self.usernameEntry = Entry(frame, textvariable=lment1)
        self.usernameEntry.grid(row=0, column=1)
        self.passwordEntry = Entry(frame, textvariable=lment2)
        self.passwordEntry.grid(row=1, column=1)
        self.loginButton = ttk.Button(frame, text="Login", command=self.login_try)
        self.loginButton.grid(row=2)
        self.cancelButton = ttk.Button(frame, text="Cancel", command=frame.quit)
        self.cancelButton.grid(row=2, column=1)

    def login_try(self):
        ltext1 = lment1.get()
        ltext2 = lment2.get()
        if in_csv(USER_LOGIN, [ltext1, ltext2]):
            login_success()
        else:
            login_failed()

def in_csv(fname, row, **kwargs):
    with open(fname) as inf:
        incsv = csv.reader(inf, **kwargs)
        return any(r == row for r in incsv)

def login_success():
    print 'Login successful'
    tkMessageBox.showwarning(title="Login successful", message="Welcome back")

def login_failed():
    print 'Failed to login'
    tkMessageBox.showwarning(title="Failed login", message="You have entered an invalid Username or Password")

root = Tk()
root.geometry("200x70")
root.title("title")
app = Login(root)
root.mainloop()
```

That is the class-based piece of code. Here is the function-based piece:

```python
# **** Import modules ****
import csv
from Tkinter import *
import ttk
import tkMessageBox

# **** Declare Classes ****
lGUI = Tk()
lment1 = StringVar()
lment2 = StringVar()
USER_LOGIN = "user_login.csv"

def in_csv(fname, row, **kwargs):
    with open(fname) as inf:
        incsv = csv.reader(inf, **kwargs)
        return any(r == row for r in incsv)

def login_try():
    ltext1 = lment1.get()
    ltext2 = lment2.get()
    if in_csv(USER_LOGIN, [ltext1, ltext2]):
        login_success()
    else:
        login_failed()

def login_success():
    print 'Login successful'
    tkMessageBox.showwarning(title="Login successful", message="Welcome back")

def login_failed():
    print 'Failed to login'
    tkMessageBox.showwarning(title="Failed login", message="You have entered an invalid Username or Password")

lGUI.geometry('200x100+500+300')
lGUI.title('PVH')
lButton = Button(lGUI, text="Login", command=login_try)
lButton.grid(row=3)

label_1 = Label(lGUI, text="Username")
label_2 = Label(lGUI, text="Password")
entry_1 = Entry(lGUI, textvariable=lment1)
entry_2 = Entry(lGUI, textvariable=lment2)
label_1.grid(row=0)
label_2.grid(row=1)
entry_1.grid(row=0, column=1)
entry_2.grid(row=1, column=1)

lGUI.mainloop()
```

And that is the piece of code that works. I get the error:

```
Exception in Tkinter callback
Traceback (most recent call last):
  File "C:\Python27\lib\lib-tk\Tkinter.py", line 1486, in __call__
    return self.func(*args)
  File "C:/Users/User/Desktop/PVH_work/PVH_program/blu.py", line 33, in login_try
    ltext1 = lment1.get()
NameError: global name 'lment1' is not defined
```

Any help would be appreciated :D

A: In your first piece of code, you define the variable lment1 in the __init__ method, making it local to that single method. When you then try to access the same variable in login_try, Python doesn't know what it is. If you wish to access the variable from anywhere in the class, you should define it at instance level by setting it on self:

```python
def __init__(self, master):
    [...]
    self.lment1 = StringVar()
    [...]
```

That way, you can access it later with:

```python
def login_try(self):
    [...]
    ltext1 = self.lment1.get()
    [...]
```

The reason it works in your second code sample is that there you defined it outside of any class, making it globally available.
Q: Regex to select and replace spaces inside double brackets. I'm writing a script to tidy up MediaWiki files prior to conversion to Confluence markup. In this scenario I need to fix page links, which in MediaWiki look like [[this is a page]]; the problem is that the actual page link would be this_is_a_page, and the Universal Wiki Converter isn't smart enough to realise this when it converts to Confluence markup, so you end up with broken links. I've been trying to write a regex as part of my Python script (I've already stripped out HTML and some other tags like <gallery> etc.); the following regex selects all the links in question:

```python
'\[\[(.*?)\]\]'
```

I just can't find a programmatic way to select only the spaces inside the [[ ]] so I can substitute underscores for them. I've attempted using matches with no success.

A: Try re.sub with a lambda expression:

```python
>>> import re
>>> test = '[[this is a page]] bla bla [[this is another page]]'
>>> re.sub(r'\[\[.+?\]\]', lambda x: x.group().replace(" ", "_"), test)
'[[this_is_a_page]] bla bla [[this_is_another_page]]'
```

Q: Bold, underlining, and iterations with python-docx. I am writing a program to take data from an ASCII file, place the data in the appropriate places in a Word document, and make only particular words bold and underlined. I am new to Python, but I have extensive experience in Matlab programming. My code is:

```python
# IMPORT ASCII DATA AND MAKE IT USEABLE
# Alternatively Pandas - gives better table display results
import pandas as pd

data = pd.read_csv('203792_M-51_Niles_control_SD_ACSF.txt', sep=",", header=None)
# print data
# data[1][3] gives value at particular data points within matrix
i = len(data[1])
print 'Number of Points imported =', i

# IMPORT WORD DOCUMENT
import docx                    # Opens Python Word document tool
from docx import Document      # Invokes Document command from docx

document = Document('test_iteration.docx')  # Imports Word document to modify
t = len(document.paragraphs)   # gives the number of lines in document
print 'Total Number of lines =', t

# for paragraph in document.paragraphs:
#     print(para.text)         # Prints the text in the entire document

font = document.styles['Normal'].font
font.name = 'Arial'
from docx.shared import Pt
font.size = Pt(8)
# font.bold = True
# font.underline = True

for paragraph in document.paragraphs:
    if 'NORTHING:' in paragraph.text:
        paragraph.text = 'NORTHING: \t', str(data[1][0])
        print paragraph.text
    elif 'EASTING:' in paragraph.text:
        paragraph.text = 'EASTING: \t', str(data[2][0])
        print paragraph.text
    elif 'ELEV:' in paragraph.text:
        paragraph.text = 'ELEV: \t', str(data[3][0])
        print paragraph.text
    elif 'CSF:' in paragraph.text:
        paragraph.text = 'CSF: \t', str(data[8][0])
        print paragraph.text
    elif 'STD. DEV.:' in paragraph.text:
        paragraph.text = 'STD. DEV.: ', 'N: ', str(data[5][0]), '\t E: ', str(data[6][0]), '\t EL: ', str(data[7][0])
        print paragraph.text

# for paragraph in document.paragraphs:
#     print(paragraph.text)    # Prints the text in the entire document

# document.save('test1_save.docx')  # Saves as Word document after modification
```

My question is how to make only the "NORTHING:" bold and underlined in:

```python
paragraph.text = 'NORTHING: \t', str(data[1][0])
print paragraph.text
```

I wrote a pseudo find-and-replace that works great if all the values being replaced are exactly the same. However, I need to replace the values in the second paragraph with the values from the second array of the ASCII file, the third paragraph with the values from the third array, etc. (I have to use find-and-replace because the formatting of the document is too advanced for me to replicate in a program, unless there is a program that can read the Word file and write it back as Python script -- reverse engineer it.) I am still just learning, so the code may seem crude to you. I am just trying to automate this boring process of copying and pasting.

A: Untested, but assuming python-docx is similar to python-pptx (it should be -- it's maintained by the same developer, and a cursory review of the documentation suggests that the way it interfaces with the PPT/DOC files is the same, uses the same methods, etc.). In order to manipulate substrings of paragraphs or words, you need to use the run object: https://python-docx.readthedocs.io/en/latest/api/text.html#run-objects. In practice, this looks something like:

```python
for paragraph in document.paragraphs:
    if 'NORTHING:' in paragraph.text:
        paragraph.clear()
        run = paragraph.add_run()
        run.text = 'NORTHING: \t'
        run.font.bold = True
        run.font.underline = True
        run = paragraph.add_run()
        run.text = str(data[1][0])
```

Conceptually, you create a run instance for each part of the paragraph/text that you need to manipulate. So first we create a run with the bolded, underlined font, then we add another run (which I think will not be bold/underlined, but if it is, just set those to False). Note: it's preferable to put all of your import statements at the top of a module. This can be optimized a bit by using a mapping object like a dictionary, associating the matching strings ("NORTHING:") as keys with the remainder of the paragraph text as values. ALSO UNTESTED:

```python
import pandas as pd
from docx import Document
from docx.shared import Pt

data = pd.read_csv('203792_M-51_Niles_control_SD_ACSF.txt', sep=",", header=None)
i = len(data[1])
print 'Number of Points imported =', i

document = Document('test_iteration.docx')  # Imports Word document to modify
t = len(document.paragraphs)                # gives the number of lines in document
print 'Total Number of lines =', t

font = document.styles['Normal'].font
font.name = 'Arial'
font.size = Pt(8)

# This maps the matching strings to the data array values
data_dict = {
    'NORTHING:': data[1][0],
    'EASTING:': data[2][0],
    'ELEV:': data[3][0],
    'CSF:': data[8][0],
    'STD. DEV.:': 'N: {0}\t E: {1}\t EL: {2}'.format(data[5][0], data[6][0], data[7][0]),
}

for paragraph in document.paragraphs:
    for k, v in data_dict.items():
        if k in paragraph.text:
            paragraph.clear()
            run = paragraph.add_run()
            run.text = k + '\t'
            run.font.bold = True
            run.font.underline = True
            run = paragraph.add_run()
            run.text = '{0}'.format(v)
```

Q: Django -- view returning no value? I have the following basic views.py to test out doing queries based on the user:

```python
def Vendor_Matrix(request):
    username = request.session.get('username', '')
    queryset = User.objects.filter(username=username).values_list(
        'user_permissions', 'username', 'first_name')
    return JSONResponse(queryset)
```

I'm logged in (using Mezzanine) to my site. I then have that view referenced in the following urls.py:

```python
from django.conf.urls import patterns, url
from api import views

urlpatterns = patterns('',
    url(r'^your-data/vendor-matrix/$', 'api.views.Vendor_Matrix'),
)
```

When I go to the URL it comes up with a blank page -- specifically this: []. I can only imagine it's not registering the logged-in user? I've simplified my views.py even further -- it's definitely not registering the username that is logged in; this still returns nothing:

```python
def Vendor_Matrix(request):
    username = request.session.get('username', '')
    return HttpResponse(username)
```

A: That's not where Django keeps the logged-in user -- use request.user instead of the session:

```python
import operator

return JSONResponse(
    operator.attrgetter('user_permissions', 'username', 'first_name')(request.user))
```

Tensorflow Inception Android I am trying to build the [TensorFlow Android Camera Demo][1].As i understand the error something is wrong with build-tools/23.0.1 removed it and reinstalled it but to no effect. what is wrong or any thoughts on how to find out what the problem is?used:ndk: android-ndk-r12btensorflow: master branch ( tried 0.8 and 0.9 as well ) i tried to use buildtoolversion 24.0.0 and got a different error (included below) WORKSPACE file:# Uncomment and update the paths in these entries to build the Android demo.android_sdk_repository( name = "androidsdk", api_level = 23, build_tools_version = "23.0.1", # Replace with path to Android SDK on your system path = "/home/boss/Android/Sdk",)android_ndk_repository( name="androidndk", path="/home/boss/Downloads/android-ndk-r12b", api_level=21)Error: buildtool 23.0.1ERROR: /home/boss/Downloads/tensorflow-master/tensorflow/examples/android/BUILD:47:1: Processing Android resources for //tensorflow/examples/android:tensorflow_demo failed: namespace-sandbox failed: error executing command (cd /home/boss/.cache/bazel/_bazel_boss/f65f721012b7fd201233c0708275aaf3/execroot/tensorflow-master && \ exec env - \ /home/boss/.cache/bazel/_bazel_boss/f65f721012b7fd201233c0708275aaf3/execroot/tensorflow-master/_bin/namespace-sandbox @/home/boss/.cache/bazel/_bazel_boss/f65f721012b7fd201233c0708275aaf3/execroot/tensorflow-master/bazel-sandbox/565ee075-9d3c-4af1-adce-59fc5a2f3c06-0.params -- bazel-out/host/bin/external/bazel_tools/tools/android/resources_processor --buildToolsVersion 23.0.1 --aapt bazel-out/host/bin/external/androidsdk/aapt_binary --annotationJar external/androidsdk/tools/support/annotations.jar --androidJar external/androidsdk/platforms/android-23/android.jar --primaryData tensorflow/examples/android/res:tensorflow/examples/android/assets:tensorflow/examples/android/AndroidManifest.xml --rOutput bazel-out/local-fastbuild/bin/tensorflow/examples/android/tensorflow_demo_symbols/R.txt --srcJarOutput bazel-out/local-fastbuild/bin/tensorflow/examples/android/tensorflow_demo.srcjar --proguardOutput bazel-out/local-fastbuild/bin/tensorflow/examples/android/proguard/tensorflow_demo/_tensorflow_demo_proguard.cfg --manifestOutput bazel-out/local-fastbuild/bin/tensorflow/examples/android/tensorflow_demo_processed_manifest/AndroidManifest.xml --resourcesOutput bazel-out/local-fastbuild/bin/tensorflow/examples/android/tensorflow_demo_files/resource_files.zip --packagePath bazel-out/local-fastbuild/bin/tensorflow/examples/android/tensorflow_demo.ap_ --debug --packageForR org.tensorflow.demo).Error: bazel-out/host/bin/external/androidsdk/aapt_binary.runfiles/androidsdk/build-tools/23.0.1/aapt: error while loading shared libraries: libz.so.1: cannot open shared object file: No such file or directoryError: bazel-out/host/bin/external/androidsdk/aapt_binary.runfiles/androidsdk/build-tools/23.0.1/aapt: error while loading shared libraries: libz.so.1: cannot open shared object file: No such file or directoryError: bazel-out/host/bin/external/androidsdk/aapt_binary.runfiles/androidsdk/build-tools/23.0.1/aapt: error while loading shared libraries: libz.so.1: cannot open shared object file: No such file or directoryError: bazel-out/host/bin/external/androidsdk/aapt_binary.runfiles/androidsdk/build-tools/23.0.1/aapt: error while loading shared libraries: libz.so.1: cannot open shared object file: No such file or directoryError: bazel-out/host/bin/external/androidsdk/aapt_binary.runfiles/androidsdk/build-tools/23.0.1/aapt: error while loading shared libraries: 
libz.so.1: cannot open shared object file: No such file or directoryError: bazel-out/host/bin/external/androidsdk/aapt_binary.runfiles/androidsdk/build-tools/23.0.1/aapt: error while loading shared libraries: libz.so.1: cannot open shared object file: No such file or directoryError: bazel-out/host/bin/external/androidsdk/aapt_binary.runfiles/androidsdk/build-tools/23.0.1/aapt: error while loading shared libraries: libz.so.1: cannot open shared object file: No such file or directoryError: bazel-out/host/bin/external/androidsdk/aapt_binary.runfiles/androidsdk/build-tools/23.0.1/aapt: error while loading shared libraries: libz.so.1: cannot open shared object file: No such file or directoryError: bazel-out/host/bin/external/androidsdk/aapt_binary.runfiles/androidsdk/build-tools/23.0.1/aapt: error while loading shared libraries: libz.so.1: cannot open shared object file: No such file or directoryJul 17, 2016 11:51:48 PM com.google.devtools.build.android.AndroidResourceProcessingAction mainSEVERE: Error during merging resourcesError: Failed to run command: bazel-out/host/bin/external/androidsdk/aapt_binary s -i /tmp/android_resources_tmp1770729823994372609/tmp-deduplicated/tensorflow/examples/android/res/drawable-xxhdpi/ic_launcher.png -o /tmp/android_resources_tmp1770729823994372609/merged_resources/drawable-xxhdpi-v4/ic_launcher.pngError Code: 127Output: bazel-out/host/bin/external/androidsdk/aapt_binary.runfiles/androidsdk/build-tools/23.0.1/aapt: error while loading shared libraries: libz.so.1: cannot open shared object file: No such file or directory at com.android.ide.common.res2.MergeWriter.end(MergeWriter.java:54) at com.android.ide.common.res2.MergedResourceWriter.end(MergedResourceWriter.java:113) at com.android.ide.common.res2.DataMerger.mergeData(DataMerger.java:291) at com.android.ide.common.res2.ResourceMerger.mergeData(ResourceMerger.java:48) at com.google.devtools.build.android.AndroidResourceProcessor.mergeData(AndroidResourceProcessor.java:724) at com.google.devtools.build.android.AndroidResourceProcessingAction.main(AndroidResourceProcessingAction.java:254)Caused by: com.android.ide.common.internal.LoggedErrorException: Failed to run command: bazel-out/host/bin/external/androidsdk/aapt_binary s -i /tmp/android_resources_tmp1770729823994372609/tmp-deduplicated/tensorflow/examples/android/res/drawable-xxhdpi/ic_launcher.png -o /tmp/android_resources_tmp1770729823994372609/merged_resources/drawable-xxhdpi-v4/ic_launcher.pngError Code: 127Output: bazel-out/host/bin/external/androidsdk/aapt_binary.runfiles/androidsdk/build-tools/23.0.1/aapt: error while loading shared libraries: libz.so.1: cannot open shared object file: No such file or directory at com.android.ide.common.internal.CommandLineRunner.runCmdLine(CommandLineRunner.java:123) at com.android.ide.common.internal.CommandLineRunner.runCmdLine(CommandLineRunner.java:96) at com.android.ide.common.internal.AaptCruncher.crunchPng(AaptCruncher.java:58) at com.android.ide.common.res2.MergedResourceWriter$1.call(MergedResourceWriter.java:188) at com.android.ide.common.res2.MergedResourceWriter$1.call(MergedResourceWriter.java:139) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at 
java.lang.Thread.run(Thread.java:745)Exception in thread "main" Error: Failed to run command: bazel-out/host/bin/external/androidsdk/aapt_binary s -i /tmp/android_resources_tmp1770729823994372609/tmp-deduplicated/tensorflow/examples/android/res/drawable-xxhdpi/ic_launcher.png -o /tmp/android_resources_tmp1770729823994372609/merged_resources/drawable-xxhdpi-v4/ic_launcher.pngError Code: 127Output: bazel-out/host/bin/external/androidsdk/aapt_binary.runfiles/androidsdk/build-tools/23.0.1/aapt: error while loading shared libraries: libz.so.1: cannot open shared object file: No such file or directory at com.android.ide.common.res2.MergeWriter.end(MergeWriter.java:54) at com.android.ide.common.res2.MergedResourceWriter.end(MergedResourceWriter.java:113) at com.android.ide.common.res2.DataMerger.mergeData(DataMerger.java:291) at com.android.ide.common.res2.ResourceMerger.mergeData(ResourceMerger.java:48) at com.google.devtools.build.android.AndroidResourceProcessor.mergeData(AndroidResourceProcessor.java:724) at com.google.devtools.build.android.AndroidResourceProcessingAction.main(AndroidResourceProcessingAction.java:254)Caused by: com.android.ide.common.internal.LoggedErrorException: Failed to run command: bazel-out/host/bin/external/androidsdk/aapt_binary s -i /tmp/android_resources_tmp1770729823994372609/tmp-deduplicated/tensorflow/examples/android/res/drawable-xxhdpi/ic_launcher.png -o /tmp/android_resources_tmp1770729823994372609/merged_resources/drawable-xxhdpi-v4/ic_launcher.pngError Code: 127Output: bazel-out/host/bin/external/androidsdk/aapt_binary.runfiles/androidsdk/build-tools/23.0.1/aapt: error while loading shared libraries: libz.so.1: cannot open shared object file: No such file or directory at com.android.ide.common.internal.CommandLineRunner.runCmdLine(CommandLineRunner.java:123) at com.android.ide.common.internal.CommandLineRunner.runCmdLine(CommandLineRunner.java:96) at com.android.ide.common.internal.AaptCruncher.crunchPng(AaptCruncher.java:58) at com.android.ide.common.res2.MergedResourceWriter$1.call(MergedResourceWriter.java:188) at com.android.ide.common.res2.MergedResourceWriter$1.call(MergedResourceWriter.java:139) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745)Target //tensorflow/examples/android:tensorflow_demo failed to builderror: buildtool 24.0.0ERROR: /home/boss/.cache/bazel/_bazel_boss/f65f721012b7fd201233c0708275aaf3/external/gif_archive/BUILD:14:1: C++ compilation of rule '@gif_archive//:gif' failed: namespace-sandbox failed: error executing command (cd /home/boss/.cache/bazel/_bazel_boss/f65f721012b7fd201233c0708275aaf3/execroot/tensorflow-master && \ exec env - \ PATH=/home/boss/anaconda2/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin \ /home/boss/.cache/bazel/_bazel_boss/f65f721012b7fd201233c0708275aaf3/execroot/tensorflow-master/_bin/namespace-sandbox @/home/boss/.cache/bazel/_bazel_boss/f65f721012b7fd201233c0708275aaf3/execroot/tensorflow-master/bazel-sandbox/937cd00e-9340-4e7e-b3fe-a3006d83a7e6-2.params -- /usr/bin/gcc -U_FORTIFY_SOURCE '-D_FORTIFY_SOURCE=1' -fstack-protector -Wall -Wl,-z,-relro,-z,now -B/usr/bin -B/usr/bin -Wunused-but-set-parameter 
-Wno-free-nonheap-object -fno-omit-frame-pointer -g0 -O2 -DNDEBUG -ffunction-sections -fdata-sections -g0 -DHAVE_CONFIG_H -iquote external/gif_archive -iquote bazel-out/host/genfiles/external/gif_archive -iquote external/bazel_tools -iquote bazel-out/host/genfiles/external/bazel_tools -isystem external/gif_archive/giflib-5.1.4/lib -isystem bazel-out/host/genfiles/external/gif_archive/giflib-5.1.4/lib -isystem external/bazel_tools/tools/cpp/gcc3 -fno-canonical-system-headers -Wno-builtin-macro-redefined '-D__DATE__="redacted"' '-D__TIMESTAMP__="redacted"' '-D__TIME__="redacted"' -MD -MF bazel-out/host/bin/external/gif_archive/_objs/gif/external/gif_archive/giflib-5.1.4/lib/quantize.d -c external/gif_archive/giflib-5.1.4/lib/quantize.c -o bazel-out/host/bin/external/gif_archive/_objs/gif/external/gif_archive/giflib-5.1.4/lib/quantize.o).external/gif_archive/giflib-5.1.4/lib/quantize.c:17:29: fatal error: gif_lib_private.h: No such file or directorycompilation terminated.Target //tensorflow/examples/android:tensorflow_demo failed to build | Actual problem: 64 bit machine's 32 bit compatibilitySolution found: this post |
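For this class of error (aapt: error while loading shared libraries: libz.so.1) the usual fix on a 64-bit Debian/Ubuntu host is installing the 32-bit libraries that the SDK's 32-bit build tools link against, for example sudo apt-get install lib32z1 lib32stdc++6. The exact package names are an assumption about the distribution; on other distros, install the equivalent 32-bit zlib package.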
Need help writing a python BMI calc I am new to python and am currently learning to use functions properly.h = 1.75w = 70.5bmi = float(w / h ** 2)if bmi < 18.5: print('过轻')elif 18.5 <= bmi < 25: print('正常')elif 25 <= bmi < 28: print('过重')elif 28 <= bmi < 32: print('肥胖')else bmi >= 32: print('严重肥胖')Every time I run this program I get this error:File "/Users/frank/Coding/bmimyself.py", line 17 else bmi >= 32: ^SyntaxError: invalid syntaxI would appreciate any assistance with the coding errors I must have made | An else clause never takes a condition; that last branch must either be another elif, or a bare else (reaching it already implies bmi >= 32):
elif bmi >= 32:
    print('严重肥胖')
or simply:
else:
    print('严重肥胖')
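A minimal corrected version of the whole script, for reference; since each elif already implies the previous tests failed, the redundant lower bounds can also be dropped:

h = 1.75
w = 70.5
bmi = w / h ** 2          # already a float; the float() cast is unnecessary

if bmi < 18.5:
    print('过轻')          # underweight
elif bmi < 25:
    print('正常')          # normal
elif bmi < 28:
    print('过重')          # overweight
elif bmi < 32:
    print('肥胖')          # obese
else:
    print('严重肥胖')      # severely obese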
Python Turtle fill the triangle with color? I am currently using the turtle.goto coords from a text file. I have the triangle drawn and everything, but I don't know how to fill it. | You are ending the fill after every new coordinate. You need to call t.begin_fill() before your for loop and t.end_fill() after the last coordinate; otherwise you are just filling in your single line with each iteration.
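A minimal runnable sketch of this; the triangle coordinates here are made-up placeholders for whatever your text file contains:

import turtle

t = turtle.Turtle()
points = [(0, 0), (100, 0), (50, 100)]    # assumed sample coordinates

t.fillcolor("red")
t.penup()
t.goto(points[0])
t.pendown()
t.begin_fill()                            # start the fill before drawing
for point in points[1:] + [points[0]]:    # close the outline back at the start
    t.goto(point)
t.end_fill()                              # fill once the outline is closed
turtle.done()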
constrain a series or array to a range of values I have a series of values that I want to have constrained to be within +1 and -1.s = pd.Series(np.random.randn(10000))I know I can use apply, but is there a simple vectorized approach?s_ = s.apply(lambda x: min(max(x, -1), 1))s_.head()0 -0.2561171 0.8797972 1.0000003 -0.7113974 -0.400339dtype: float64 | Use clip:s = s.clip(-1,1)Example Input:s = pd.Series([-1.2, -0.5, 1, 1.1])0 -1.21 -0.52 1.03 1.1Example Output:0 -1.01 -0.52 1.03 1.0 |
PyQt5 equivalent of QtWebKitWidgets.QWebView.page.mainFrame() for QtWebEngineWidgets.QWebEngineView()? I am very new to PyQt and started to play around with the following code (which originally comes from this blog post):# Create an applicationapp = QApplication([])# And a windowwin = QWidget()win.setWindowTitle('QWebView Interactive Demo')# And give it a layoutlayout = QVBoxLayout()win.setLayout(layout)# Create and fill a QWebViewview = QWebView()view.setHtml(''' <html> <head> <title>A Demo Page</title> <script language="javascript"> // Completes the full-name control and // shows the submit button function completeAndReturnName() { var fname = document.getElementById('fname').value; var lname = document.getElementById('lname').value; var full = fname + ' ' + lname; document.getElementById('fullname').value = full; document.getElementById('submit-btn').style.display = 'block'; return full; } </script> </head> <body> <form> <label for="fname">First name:</label> <input type="text" name="fname" id="fname"></input> <br /> <label for="lname">Last name:</label> <input type="text" name="lname" id="lname"></input> <br /> <label for="fullname">Full name:</label> <input disabled type="text" name="fullname" id="fullname"></input> <br /> <input style="display: none;" type="submit" id="submit-btn"></input> </form> </body> </html>''')# A button to call our JavaScriptbutton = QPushButton('Set Full Name')# Interact with the HTML page by calling the completeAndReturnName# function; print its return value to the consoledef complete_name(): frame = view.page().mainFrame() print frame.evaluateJavaScript('completeAndReturnName();')# Connect 'complete_name' to the button's 'clicked' signalbutton.clicked.connect(complete_name)# Add the QWebView and button to the layoutlayout.addWidget(view)layout.addWidget(button)# Show the window and run the appwin.show()app.exec_()I made some slight changes in order to try to make it run with the latest pyqt5 version, but I don't understand how I should changeframe = view.page().mainFrame()in order to make the script run without errors. 
Here is the code I have so far:from PyQt5 import QtWidgets, QtGui, QtCorefrom PyQt5 import QtWebEngineWidgets# Create an applicationapp = QtWidgets.QApplication([])# And a window5win = QtWidgets.QWidget()win.setWindowTitle('QWebView Interactive Demo')# And give it a layoutlayout = QtWidgets.QVBoxLayout()win.setLayout(layout)# Create and fill a QWebView#view = QtWebKitWidgets.QWebView() # depecated?view = QtWebEngineWidgets.QWebEngineView()view.setHtml(''' <html> <head> <title>A Demo Page</title> <script language="javascript"> // Completes the full-name control and // shows the submit button function completeAndReturnName() { var fname = document.getElementById('fname').value; var lname = document.getElementById('lname').value; var full = fname + ' ' + lname; document.getElementById('fullname').value = full; document.getElementById('submit-btn').style.display = 'block'; return full; } </script> </head> <body> <form> <label for="fname">First name:</label> <input type="text" name="fname" id="fname"></input> <br /> <label for="lname">Last name:</label> <input type="text" name="lname" id="lname"></input> <br /> <label for="fullname">Full name:</label> <input disabled type="text" name="fullname" id="fullname"></input> <br /> <input style="display: none;" type="submit" id="submit-btn"></input> </form> </body> </html>''')# A button to call our JavaScriptbutton = QtWidgets.QPushButton('Set Full Name')# Interact with the HTML page by calling the completeAndReturnName# function; print its return value to the consoledef complete_name(): frame = view.page().mainFrame() # THIS raises an error. I'm stuck here. print(frame.evaluateJavaScript('completeAndReturnName();'))# Connect 'complete_name' to the button's 'clicked' signalbutton.clicked.connect(complete_name)# Add the QWebView and button to the layoutlayout.addWidget(view)layout.addWidget(button)# Show the window and run the appwin.show()app.exec_()I have seen this post, which I thought might help, but unfortunately I'm still stuck with it. Does anyone know how to make this work with the latest PyQt5 version? Help would be very much appreciated. | There is nothing equivalent to the Qt WebKit QWebFrame class in Qt Web Engine. Frames are just considered part of the content, so there are no dedicated APIs for dealing with them - there is just a single QWebEnginePage, which provides access to the whole web document.There is also no evaluateJavaScript method. Instead, there is an asynchronous runJavaScript method, which needs a callback to receive the result. So your code should be re-written like this:def js_callback(result): print(result)def complete_name(): view.page().runJavaScript('completeAndReturnName();', js_callback) |
Combining multiple columns in a DataFrame I have a DataFrame with 40 columns (columns 0 through 39) and I want to group them four at a time: import numpy as npimport pandas as pddf = pd.DataFrame(np.random.binomial(1, 0.2, (100, 40)))new_df["0-3"] = df[0] + df[1] + df[2] + df[3]new_df["4-7"] = df[4] + df[5] + df[6] + df[7]...new_df["36-39"] = df[36] + df[37] + df[38] + df[39]Can I do this in a single statement (or in a better way than summing them separately)? The column names in the new DataFrame are not important. | You could select out the columns and sum on the row axis, like this.df['0-3'] = df.loc[:, 0:3].sum(axis=1)A couple things to note:Summing like this will ignore missing data while df[0] + df[1] ... propagates it. Pass skipna=False if you want that behavior.Not necessarily any performance benefit, may actually be a little slower. |
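A sketch extending this to all ten groups at once; note the column labels are integers here, and .loc slicing is inclusive on both ends:

new_df = pd.DataFrame(index=df.index)
for i in range(0, 40, 4):
    new_df["{}-{}".format(i, i + 3)] = df.loc[:, i:i + 3].sum(axis=1)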
I'm designing a flow rate based traffic controller on raspberry pi, it runs in an infinite loop I'm designing a flow rate based traffic controller on raspberry pi using buttons as traffic simulators. the problem i'm facing is that the maximum value gets selected at first and there can be no increments possible to that value and the code runs in an infinite loop.For ex if i press the button at road 1 on the circuit; it will take it as the maximum value as the count of other three roads viz. road two, road three, road 4 are zero and the loop continues with only the traffic lights at road 1 going green and red and the counts of the button presses at the other three streets are not considered at all.Please help me with the logic as i'm a newbie with python.Here's my code.#!/usr/bin/pythonimport osimport timeimport RPi.GPIO as GPIOGPIO.setmode(GPIO.BCM)count = 0count2 = 0count3 = 0count4 = 0GPIO.setwarnings(False)GPIO.setup(20, GPIO.IN)GPIO.setup(21, GPIO.IN)GPIO.setup(19, GPIO.IN)GPIO.setup(25, GPIO.IN)#red1GPIO.setup(17,GPIO.OUT)#yellow1GPIO.setup(27,GPIO.OUT)#green1GPIO.setup(22,GPIO.OUT)#RT1GPIO.setup(12,GPIO.OUT)#red2GPIO.setup(14,GPIO.OUT)#yellow2GPIO.setup(15,GPIO.OUT)#green2GPIO.setup(18,GPIO.OUT)#RT2GPIO.setup(16,GPIO.OUT)#red3GPIO.setup(10,GPIO.OUT)#yellow3GPIO.setup(9,GPIO.OUT)#green3GPIO.setup(11,GPIO.OUT)#RT3GPIO.setup(24,GPIO.OUT)#red4GPIO.setup(2,GPIO.OUT)#yellow4GPIO.setup(3,GPIO.OUT)#green4GPIO.setup(13,GPIO.OUT)#RT4GPIO.setup(23,GPIO.OUT)while True: if (GPIO.input(20) == False): count=count+1 time.sleep(1) print(count) if (GPIO.input(21) == False): count2=count2+1 time.sleep(1) print(count2) if (GPIO.input(19) == False): count3=count3+1 time.sleep(1) print(count3) if (GPIO.input(25) == False): count4=count4+1 time.sleep(1) print(count4) if count > count2 : if count > count3: if count > count4: print ("Traffic on road 1 highest") #go go go...RT1+Str1 GPIO.output(17,False) GPIO.output(27,False) GPIO.output(22,True) GPIO.output(12,True) GPIO.output(14,True) GPIO.output(15,False) GPIO.output(18,False) GPIO.output(16,False) GPIO.output(10,True) GPIO.output(9,False) GPIO.output(11,False) GPIO.output(24,False) GPIO.output(2,True) GPIO.output(3,False) GPIO.output(13,False) GPIO.output(23,False) time.sleep(10) #RT1 blinks GPIO.output(12,False) time.sleep(1) GPIO.output(12,True) time.sleep(1) GPIO.output(12,False) time.sleep(1) GPIO.output(12,True) time.sleep(1) GPIO.output(12,False) time.sleep(1) GPIO.output(27,True) #yellow time.sleep(3) GPIO.output(27,False) GPIO.output(17,True) time.sleep(1) #red elif count2 > count: if count2 > count3: if count2 > count4: print ("Traffic on road 2 highest") GPIO.output(11,False) GPIO.output(10,True) GPIO.output(18,True) GPIO.output(16,True) GPIO.output(14,False) GPIO.output(24,False) GPIO.output(17,True) GPIO.output(27,False) GPIO.output(22,False) GPIO.output(12,False) GPIO.output(15,False) GPIO.output(9,False) GPIO.output(11,False) GPIO.output(24,False) GPIO.output(2,True) GPIO.output(3,False) GPIO.output(13,False) GPIO.output(23,False) time.sleep(10) #RT2 blinks GPIO.output(16,False) time.sleep(1) GPIO.output(16,True) time.sleep(1) GPIO.output(16,False) time.sleep(1) GPIO.output(16,True) time.sleep(1) GPIO.output(16,False) time.sleep(1) GPIO.output(15,True) time.sleep(3) GPIO.output(15,False) #yellow time.sleep(1) GPIO.output(14,True) #red time.sleep(1) elif count3 > count: if count3 > count2: if count3 > count4: print ("Traffic on road 3 highest") GPIO.output(11,False) GPIO.output(10,False) GPIO.output(18,False) GPIO.output(16,False) 
GPIO.output(14,True) GPIO.output(24,False) GPIO.output(17,True) GPIO.output(27,False) GPIO.output(22,False) GPIO.output(12,False) GPIO.output(15,False) GPIO.output(9,False) GPIO.output(11,True) GPIO.output(24,True) GPIO.output(2,True) GPIO.output(3,False) GPIO.output(13,False) GPIO.output(23,False) time.sleep(10) #RT3 blinks GPIO.output(24,False) time.sleep(1) GPIO.output(24,True) time.sleep(1) GPIO.output(24,False) time.sleep(1) GPIO.output(24,True) time.sleep(1) GPIO.output(24,False) time.sleep(1) GPIO.output(9,True) time.sleep(3) GPIO.output(9,False) #yellow time.sleep(1) GPIO.output(10,True) #red time.sleep(1) elif count4 > count: if count4 > count2: if count4 > count3: print ("Traffic on road 4 highest") GPIO.output(11,False) GPIO.output(10,True) GPIO.output(18,False) GPIO.output(16,False) GPIO.output(14,True) GPIO.output(24,False) GPIO.output(17,True) GPIO.output(27,False) GPIO.output(22,False) GPIO.output(12,False) GPIO.output(15,False) GPIO.output(9,False) GPIO.output(11,False) GPIO.output(24,False) GPIO.output(2,False) GPIO.output(3,False) GPIO.output(13,True) GPIO.output(23,True) time.sleep(10) #RT2 blinks GPIO.output(23,False) time.sleep(1) GPIO.output(23,True) time.sleep(1) GPIO.output(23,False) time.sleep(1) GPIO.output(23,True) time.sleep(1) GPIO.output(23,False) time.sleep(1) GPIO.output(3,True) time.sleep(3) GPIO.output(3,False) #yellow time.sleep(1) GPIO.output(2,True) #red time.sleep(1) | I was getting confused by all the layered if statements. You can use the keyword "and" in between them to combine all the ifs into one. Still, I didn't see any fault that would cause your problem. Also, if you want to change variables that are declared outside a loop, you will need to use a global statement. Here's some corrected code:#import statementscount1 = 0count2 = 0count3 = 0count4 = 0#declare gpio pinswhile True: global count1 global count2 global count3 global count4 #logicHowever, it is advised to not use global variables in your code. So, if you find another solution, use it instead. |
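A sketch of the combined comparisons (one hedge: global only matters when you assign to these names inside a function; at module level, as in the question, it is unnecessary):

while True:
    # ... poll the buttons and update count, count2, count3, count4 here ...
    if count > count2 and count > count3 and count > count4:
        print("Traffic on road 1 highest")
        # drive road 1 lights
    elif count2 > count and count2 > count3 and count2 > count4:
        print("Traffic on road 2 highest")
        # drive road 2 lights
    # ... likewise for roads 3 and 4 ...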
extracting chars from string using regex and pythonic way I have a string like this: "32H74312"I want to extract some parts and put them in different variables. first_part = 32 # always 2 digitssecond_part = H # always 1 charthird_part = 743 # always 3 digits fourth_part = 12 # always 2 digitsIs there some way to do this in a pythonic way? | There's no reason to use a regex for such a simple task, since the parts have fixed widths.The pythonic way is plain slicing:
string = "32H74312"
part1 = string[:2]
part2 = string[2:3]
part3 = string[3:6]
part4 = string[6:]
TimeoutException using selenium with python I'm getting TimeoutException when using this code to get the fill in the CardNum textbox with a numberCardNUM = WebDriverWait(browser, 10).until(EC.presence_of_element_located((By.XPATH, '//*[@id="number"]'))) CardNUM.send_keys(cardNum)Xpath is taken directly from right clicking and inspecting the textbox and copying the XPATH for the block <input autocomplete="cc-number" id="number" name="number" type="tel" aria-describedby="error-for-number" data-current-field="number" class="input-placeholder-color--lvl-30" placeholder="Card number" style="color: rgb(151, 151, 151); font-family: &quot;Helvetica Neue&quot;; padding: 0.94em 0.8em; transition: padding 0.2s ease-out;">Is there something else I need to do to be able to fill in the box, for example is the text box hidden and is there some manipulation that I would need to do beforehand to be able to find the text box? | Most likely the element is inside an IFRAME, especially since it seems to be a credit card number. The payment portion of payment pages are typically in an IFRAME for security. Try switching to the IFRAME first then your code should work. |
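A hedged sketch of the switch; the tag-name locator assumes the payment iframe is the first or only iframe on the page, so inspect the page for its real id or name:

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Wait for the iframe, switch into it, then locate the field as before
WebDriverWait(browser, 10).until(
    EC.frame_to_be_available_and_switch_to_it((By.TAG_NAME, "iframe")))
CardNUM = WebDriverWait(browser, 10).until(
    EC.presence_of_element_located((By.XPATH, '//*[@id="number"]')))
CardNUM.send_keys(cardNum)
browser.switch_to.default_content()   # switch back out afterwards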
How to apply linear transform on a 3D feature vector in Tensorflow? Imagine there is a tensor with the following dimensions (32, 20, 3) where batch_size = 32, num_steps = 20 and features = 3. The features are taken from a .csv file that has the following format:
feat1, feat2, feat3
200, 100, 0
5.5, 200, 0.5
23.2, 1, 9.3
Each row is transformed into a 3-dim vector (numpy array): [200, 100, 0], [5.5, 200, 0.5], [23.2, 1, 9.3].We want to use these features in a recurrent neural network, but directly feeding them into the rnn won't do; we'd like to process these feature vectors first by applying a linear transformation to each 3-dim vector inside the batch sample and reshape the input tensor into (32, 20, 100). This is easily done in Torch, for example via: nn.MapTable():add(nn.Linear(3, 100)) which is applied on the input batch tensor of size 20 x 32 x 3 (num_steps and batch_size are switched in Torch). We split it into 20 arrays each 32x3 in size
1 : DoubleTensor - size: 32x3
2 : DoubleTensor - size: 32x3
3 : DoubleTensor - size: 32x3
...
and use nn.Linear(3, 100) to transform them into 32x100 vectors. We then pack them up back into a 20 x 32 x 100 tensor. How can we implement the same operation in Tensorflow? | You could reshape into [batchsize*num_steps, features], apply a TensorFlow dense (linear) layer with 100 outputs, and then reshape back; note the restored tensor then has 100 features, not 3:
reshaped_tensor = tf.reshape(your_input, [batchsize * num_steps, features])
linear_out = tf.layers.dense(inputs=reshaped_tensor, units=100)
reshaped_back = tf.reshape(linear_out, [batchsize, num_steps, 100])
matching similar elements in between two lists I'm new to python so apologies if it's a silly question.I have two lists L1=['marvel','audi','mercedez','honda'] and L2=['marvel comics','bmw','mercedez benz','audi'].I want to extract the elements of L2 that match elements of L1. So what I have done is:for i in L1: for j in L2: if j in i: print (j) output is ['audi']But I also want it to return elements on any word match, like mercedez in mercedez benz and marvel in marvel comics, so the final output would be:j=['audi','mercedez benz','marvel comics'] | I think what you really want here is the elements of L2 that contain any element of L1. So simply replace if j in i with if i in j:for i in L1: for j in L2: if i in j: print (j)This outputs:
marvel comics
audi
mercedez benz
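The same matching can be written as one comprehension; any() also avoids printing an L2 entry more than once if several L1 words occur in it:

matches = [j for j in L2 if any(i in j for i in L1)]
print(matches)   # ['marvel comics', 'mercedez benz', 'audi']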
Generate a list from list of dicts if value does not exist in another list I'm trying to filter a list of dicts using the elements of a list:a=[{"item_id": "ITEM2090", "seller_id":1009954}, {"item_id": "ITEM2050", "seller_id":1009920}, {"item_id": "ITEM2032", "seller_id":1009960}, {"item_id": "ITEM2080", "seller_id":1009954}]b=["ITEM2032","ITEM2060","ITEM2070","ITEM2090"]expected result (the two dicts from list a whose item_id values do not exist in list b):c=[{"item_id": "ITEM2050", "seller_id":1009920}, {"item_id": "ITEM2080", "seller_id":1009954}]I've tried:c=[x["item_id"] for x in a if x["item_id"] not in b]My problem is that it returns a list of the item_id values, not a list of dicts as I would like. | Keep the whole dict in the comprehension instead of just its item_id:c = [item for item in a if item["item_id"] not in b]It is better to make b a set when dealing with a large number of items.
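Concretely, with b converted to a set first, each membership test is O(1) instead of a scan of the whole list:

b_set = set(b)
c = [item for item in a if item["item_id"] not in b_set]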
Trying to predict with a loaded classification model with .h5 on Tensorflow, returning IndexError: list index out of range I created a classification model with both saved_model format and .h5 format. I am trying to load the model so I can deploy it withnew_model = tf.keras.models.load_model('my_model.h5')Then I predictprint(new_model.predict('/content/images/image.jpg'))Then it returns> IndexError Traceback (most recent call last)<ipython-input-26-749bd8c0774b> in <module>() 1 new_model = tf.keras.models.load_model('my_model.h5')----> 2 print(new_model.predict('/content/images/image.jpg'))>5 frames>/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/tensor_shape.py in __getitem__(self, key) 887 else: 888 if self._v2_behavior:--> 889 return self._dims[key].value 890 else: 891 return self._dims[key]>IndexError: list index out of rangeI've tried other similar solutions but they don't work. Do I need to retrain the model? What do I do so I can predict on one image in a clean environment? | for model.predict to produce proper predictions it is necessary that the input be of the same nature as the inputs that the model was trained on. For example in training you read in an image from the training set. Then typically you will rescale the pixel values, usually in the range from 0 to +1 or in some cases -1 to +1. Then you typically resize the images so all training images are of the same size. Now when you want to input an image to be predicted you should follow the same process. Read in the image, rescale it and resize it as you did for the training images. |
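Concretely, the traceback comes from handing predict() a file path string; it must receive an array with a batch dimension. A sketch of the loading step (the 224x224 target size and 1/255 rescaling are assumptions; use whatever your training pipeline used):

import numpy as np
from tensorflow.keras.preprocessing import image

img = image.load_img('/content/images/image.jpg', target_size=(224, 224))
x = image.img_to_array(img) / 255.0   # rescale like the training images
x = np.expand_dims(x, axis=0)         # add the batch dimension
print(new_model.predict(x))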
Setting the minimum value of a pandas column using clip I want to set the minimum value of a column of pandas dataframe using clip method. Below is my codeimport pandas as pddata = pd.DataFrame({'date' : pd.to_datetime(['2010-12-31', '2012-12-31', '2012-12-31']), 'val' : [1,2, 5]})data.clip(lower=pd.Series({'val': 4}), axis=1)Above code is giving error. Could you please help on how to eliminate the error? | You can try Series.clip or set the date column as index then DataFrame.clip.data['val'] = data['val'].clip(4)# ordata = (data.set_index('date') .clip(4) .reset_index())print(data) date val0 2010-12-31 41 2012-12-31 42 2012-12-31 5 |
python plot large dimension data I have a 1800*100000000 matrix, and I want to plot it in python using the code below:import matplotlib.pyplot as pltplt.spy(m)plt.show()The result is disappointing: it looks like a line, because the row count is tiny compared to the column count:How can I do it correctly? | spy() accepts a number of keyword arguments; aspect in particular is interesting...
In [1]: import numpy as np
In [2]: import matplotlib.pyplot as plt
In [3]: a = np.random.random((25,250))>0.6
In [4]: %matplotlib
Using matplotlib backend: Qt4Agg
In [5]: plt.spy(a)
Out[5]: <matplotlib.image.AxesImage at 0x7f9ad1a790b8>
In [6]: plt.spy(a, aspect='auto')
Out[6]: <matplotlib.image.AxesImage at 0x7f9ad1139d30>
Set schema in pyspark dataframe read.csv with null elements I have a data set (example) that when imported with df = spark.read.csv(filename, header=True, inferSchema=True)df.show()will assign the column with 'NA' as a stringType(), where I would like it to be IntegerType() (or ByteType()).I then tried to set schema = StructType([ StructField("col_01", IntegerType()), StructField("col_02", DateType()), StructField("col_03", IntegerType())])df = spark.read.csv(filename, header=True, schema=schema)df.show()The output shows the entire row with 'col_03' = null to be null.However col_01 and col_02 return appropriate data if they are called withdf.select(['col_01','col_02']).show()I can find a way around this by post casting the data type of col_3df = spark.read.csv(filename, header=True, inferSchema=True)df = df.withColumn('col_3',df['col_3'].cast(IntegerType()))df.show(), but I think it is not ideal and would be much better if I can assign the data type for each column directly when setting the schema.Would anyone be able to point out what I am doing incorrectly? Or is casting the data types after importing the only solution? Any comment regarding the performance of the two approaches (if we can make assigning the schema work) is also welcome. | You can set a new null value in spark's csv loader using nullValue. For a csv file looking like this:
col_01,col_02,col_03
111,2007-11-18,3
112,2002-12-03,4
113,2007-02-14,5
114,2003-04-16,NA
115,2011-08-24,2
116,2003-05-03,3
117,2001-06-11,4
118,2004-05-06,NA
119,2012-03-25,5
120,2006-10-13,4
and forcing the schema:
from pyspark.sql.types import StructType, StructField, IntegerType, DateType
schema = StructType([
    StructField("col_01", IntegerType()),
    StructField("col_02", DateType()),
    StructField("col_03", IntegerType())
])
You'll get:
df = spark.read.csv(filename, header=True, nullValue='NA', schema=schema)
df.show()
df.printSchema()

+------+----------+------+
|col_01|    col_02|col_03|
+------+----------+------+
|   111|2007-11-18|     3|
|   112|2002-12-03|     4|
|   113|2007-02-14|     5|
|   114|2003-04-16|  null|
|   115|2011-08-24|     2|
|   116|2003-05-03|     3|
|   117|2001-06-11|     4|
|   118|2004-05-06|  null|
|   119|2012-03-25|     5|
|   120|2006-10-13|     4|
+------+----------+------+

root
 |-- col_01: integer (nullable = true)
 |-- col_02: date (nullable = true)
 |-- col_03: integer (nullable = true)
ValueError: could not convert string to float in simple code # -*- coding: cp1250 -*-print ('euklides alpha 1.0')a = raw_input('podaj liczbę A : ')b = raw_input('podaj liczbę B : ')a = float('a')b = float('b')if 'a'=='b': print 'a'elif 'a' > 'b': while 'a' > 'b': print('a'-'b') if 'a'=='b': break if 'a' > 'b': continueelif 'b' > 'a': while 'b' > 'a': print('b'-'a') if 'b'=='a': break if 'b' > 'a': continueSo, this is code I made a few hours ago. Now I get a ValueError: could not convert string to float: a, and I have no idea why. Can you explain it to me? I'm a beginner. | The float function can take a string, but it must contain a possibly signed decimal or floating point number. You want to convert the variable a to a float, not the literal character 'a'. You don't need the quotes around your variable names at all; when you write 'b' in quotes you are making a one-character string, not referring to the variable. So:
a = float(a)
b = float(b)
On another note, once you reach one of those while statements nothing will get you out of there: print(a-b) only prints the difference, it never updates a or b.
# -*- coding: cp1250 -*-
print ('euklides alpha 1.0')
a = raw_input('podaj liczbę A : ')
b = raw_input('podaj liczbę B : ')
a = float(a)   # convert the variables, not the strings 'a' and 'b'
b = float(b)
if a==b:
    print a
elif a > b:
    while a > b:   # nothing will get you out of this loop: a never changes
        print(a-b)
        if a == b:
            break
        if a > b:   # no need for this if, the while loop does that check for you
            continue
elif b > a:
    while b > a:   # nothing will get you out of this loop: b never changes
        print(b-a)
        if b==a:
            break
        if b > a:   # no need for this if, the while loop does that check for you
            continue
Can the print function be used reliably in GCE apps? I have a GCE app consisting of a single Python script that has some long running functions (most of these are querying databases and sending the results somewhere). It seems that when the script hangs on one of these longer running tasks that nothing is printed to Stackdriver Logging, even print() statements that come before the place the script is hanging. This seems like bug in Compute Engine or Stackdriver and makes debugging scripts very difficult (e.g. I can't see where the last successful print statement occurred).I'd prefer this bug to just be fixed instead of having to add the logging module as it seems there's a good amount of overhead to set that up. | Per this answer from unix.stackexchange.com, when a process's output is redirected to something other than a terminal, the output may be temporarily stored in a buffer by the operating system. Buffering output increases efficiency by reducing the number of system calls and IO operations. Buffered output can be flushed manually from within a python script or application.In python3, set the flush flag on the print function.print('foo', flush=True)In python2, flush sys.stdout after printing.print 'foo'; sys.stdout.flush() |
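If you would rather not add flush=True to every call, two process-wide alternatives: run the script with python -u (or set PYTHONUNBUFFERED=1 in the instance environment) to disable output buffering, or rebind print once at the top of the script in Python 3:

import functools
print = functools.partial(print, flush=True)   # every print() now flushes immediately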
SQLAlchemy: session.query.one() and session.add() in one transaction I want to add a row with VALUE1 to table TABLE1 only if TABLE2 has a row with VALUE2.I can do something like that:session.query(TABLE2) .filter(TABLE2.FIELD2 == VALUE2) .update({TABLE2.FIELD2: VALUE2}) # without change. only for checksession.add(TABLE1(FIELD1=VALUE1))session.commit()But I think it is strange that I use update without any update.I want to use one instead of update but it doesn't support transactions.UPDATED: this solution is wrong...This simple solution is wrong also:my_flag = session.query(TABLE2).filter(TABLE2.FIELD2 == VALUE2).first()# database can be updated here!if my_flag: session.add(TABLE1(FIELD1=VALUE1)) session.commit() | Provided you have a unique key on TABLE1.FIELD1, you could first query TABLE2 and then try to insert into TABLE1. If VALUE1 already exists in TABLE1, an IntegrityError is raised and you can roll the transaction back:
from sqlalchemy.sql import exists
from sqlalchemy.exc import IntegrityError

value_exists = session.query(exists().where(TABLE2.FIELD2 == VALUE2)).scalar()
if not value_exists:
    return
try:
    session.add(TABLE1(FIELD1=VALUE1))
    session.commit()
except IntegrityError as e:
    session.rollback()
    raise e
Using .txt file as a Dictionary I have a .txt file formatted like a dictionary is, for example:{'planet': 'earth', "country": "uk"}Just that, that's all. I would want to add more to this later. At the moment, I can save more keys to it and have it saved but...How can I import this .txt file and use it as a dictionary? | You can use ast.literal_evalimport astwith open('myfile.txt') as f: mydict = ast.literal_eval(f.read())Some extra reading on using eval vs ast.literal_eval. |
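To "add more later" while keeping the file readable by literal_eval, a sketch: write the dict back with repr(), which produces exactly the literal syntax shown above:

import ast

with open('myfile.txt') as f:
    mydict = ast.literal_eval(f.read())

mydict['city'] = 'london'              # hypothetical new entry

with open('myfile.txt', 'w') as f:
    f.write(repr(mydict))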
Error during backward migration of DeleteModel in Django I have two models with one-to-one relationship in Django 1.11 with PostgreSQL. These two models are defined in models.py as follows:class Book(models.Model): info = JSONField(default={})class Author(models.Model): book = models.OneToOneField(Book, on_delete=models.CASCADE)The auto-created migration file regarding these models is like:class Migration(migrations.Migration): dependencies = [ ('manager', '0018_some_migration_dependency'), ] operations = [ migrations.CreateModel( name='Book', fields=[ ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')), ('info', JSONField(default={})), ], ), migrations.AddField( model_name='author', name='book', field=models.OneToOneField(on_delete=django.db.models.deletion.CASCADE, to='manager.Book'), ), ]These implementations have worked successfully. In addition to this migration, we also had some other additional migrations related to other tasks of our project.Due to our design changes we made today, we decided to move all of the Book info data into our cloud storage. In order to do that, I have implemented a custom migration code as follows:def push_info_to_cloud(apps, schema_editor): Author = apps.get_model('manager', 'Author') for author in Author.objects.all(): if author.book.info is not None and author.book.info != "": # push author.book.info to cloud storage author.book.info = {} author.book.save()def pull_info_from_cloud(apps, schema_editor): Author = apps.get_model('manager', 'Author') Book = apps.get_model('manager', 'Book') for author in Author.objects.all(): # pull author.book.info back from cloud storage book = Book.objects.create(info=info) author.book = book author.save()class Migration(migrations.Migration): dependencies = [ ('manager', '0024_some_migration_dependency'), ] operations = [ migrations.RunPython(push_info_to_cloud, pull_info_from_cloud) ]As the code tells itself, this migrations push each non-null book info data to our cloud storage and replace that with an empty dict in the database. I have tested this migration back and forth and make sure that both the forward and backward migration work successfully.Then, to get rid of the redundant Book table and book column in Author table, I deleted the Book model and the OneToOneField book field in the Author model and run manage.py makemigrations, which resulted in the following auto-generated migration code:class Migration(migrations.Migration): dependencies = [ ('manager', '0025_some_migration_dependency'), ] operations = [ migrations.RemoveField( model_name='user', name='book', ), migrations.DeleteModel( name='Book', ), ]Running manage.py migrate did worked. In the end, the Book table and the book column of the Author table are deleted.Now, the problem is; when I want to migrate back to 0024_some_migration_dependency, I get the following error during the execution of the latest migration file: Unapplying manager.0026_auto_20190503_1702...Traceback (most recent call last): File "/home/cagrias/Workspace/Project/backend/venv/lib/python3.6/site-packages/django/db/backends/utils.py", line 64, in execute return self.cursor.execute(sql, params)psycopg2.IntegrityError: column "book_id" contains null valuesI have seen this answer. To try that, I have manually re-create Book model and the OneToOneField book field of the Author model, by using blank=True, null=True parameters this time. 
But after I apply the migrations above successfully, I get the same exception when migrating backwards. What might be the problem? | I managed to solve the problem by changing the order of the migrations. As mentioned in my question, I had applied that answer by adding blank=True, null=True parameters to both the info and book fields, but its migration file was created after the migration file that moves the book info to cloud storage. Once I swapped the order of these two migration files, the problem was solved.
How do I convert Python scripts files to images files representing the code with highlighting? In short, how do I get this:From this:def fiblike(ls, n): store = [] for i in range(n): a = ls.pop(0) ls.append(sum(ls)+a) store.append(a) return store With all the indentation guide and code highlighting.I have written hundreds of Python scripts and I need to convert all of them to images...I have seen this:import Imageimport ImageDrawimport ImageFontdef getSize(txt, font): testImg = Image.new('RGB', (1, 1)) testDraw = ImageDraw.Draw(testImg) return testDraw.textsize(txt, font)if __name__ == '__main__': fontname = "Arial.ttf" fontsize = 11 text = "[email protected]" colorText = "black" colorOutline = "red" colorBackground = "white" font = ImageFont.truetype(fontname, fontsize) width, height = getSize(text, font) img = Image.new('RGB', (width+4, height+4), colorBackground) d = ImageDraw.Draw(img) d.text((2, height/2), text, fill=colorText, font=font) d.rectangle((0, 0, width+3, height+3), outline=colorOutline) img.save("D:/image.png")from hereBut it does not do code highlighting and I want either a numpy or cv2 based solution.How can I do it? | CodeSnap is a very nice tool to do just that for VSCode. |
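If you would rather generate the images programmatically for hundreds of files, Pygments ships an ImageFormatter that renders highlighted code straight to PNG. A minimal sketch, assuming Pygments and Pillow are installed (pip install Pygments Pillow):

from pygments import highlight
from pygments.lexers import PythonLexer
from pygments.formatters import ImageFormatter

with open('script.py') as f:
    code = f.read()

# highlight() returns PNG bytes when given an ImageFormatter
png = highlight(code, PythonLexer(), ImageFormatter(line_numbers=True))
with open('script.png', 'wb') as f:
    f.write(png)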
in discord.py, how can I make my bot's commands usable only in a specific channel or a specific server? Anyone can invite my personal bot to their server, so I want the commands to work only in a specific channel or on a specific server, with @bot.event, not client. | If you use await bot.process_commands(message) you can try this:
@bot.event
async def on_message(message):
    if message.channel.id == yourchannelid or message.guild.id == yourguildid:
        await bot.process_commands(message)
A single combined check means the message is only processed once, and note that comparisons use ==, not =. You can add further checks in on_message so the bot doesn't reply to itself.
How to filter rows and words in lower case in pandas dataframe? Hi I would like to know how to select rows which contains lower cases in the following dataframe:ID Name Note1 Fin there IS A dog outside2 Mik NOTHING TO DECLARE3 Lau no houseWhat I would like to do is to filter rows where Note column contains at least one word in lower case:ID Name Note1 Fin there IS A dog outside3 Lau no houseand collect in a list all the words in lower case: my_list=['there','dog','outside','no','house']I have tried to filter rows is :df1=df['Note'].str.lower()For appending words in the list, I think I should first tokenise the string, then select all the terms in lower case. Am I right? | Use Series.str.contains for filter at least one lowercase character in boolean indexing:df1 = df[df['Note'].str.contains(r'[a-z]')]print (df1) ID Name Note0 1 Fin there IS A dog outside2 3 Lau no houseAnd then Series.str.extractall for extract lowercase words:my_list = df1['Note'].str.extractall(r'(\b[a-z]+\b)')[0].tolist()print (my_list)['there', 'dog', 'outside', 'no', 'house']Or use list comprehension with split sentences and filter by islower:my_list = [y for x in df1['Note'] for y in x.split() if y.islower()]print (my_list)['there', 'dog', 'outside', 'no', 'house'] |
Problem importing matrix in Python from Excel and maybe some problems with if elif statements I'm trying to run this code and have some problems to solve. At first I am entering "BOD" as the name of the output and "6" as the number of input parameters. import os import numpy as np import pandas as pd from pandas import ExcelWriter from numpy import * OutputName = input('please enter the name of the output (BOD,COD,TSS)'); InputNum = input('please enter the number of input parameters (6 or 12) = '); file_name = 'biowin_withMalfunction.xlsx' if OutputName == 'BOD': Output_num=1 if InputNum == 6: Data = pd.read_excel(open(r'C:\Users\Elisa\test_conv_Fatone\biowin_withMalfunction.xlsx', 'rb'), sheet_name='ANN full data for BOD_6Params') print (Data) elif InputNum ==12: Data = pd.read_excel(open(r'C:\Users\Elisa\test_conv_Fatone\biowin_withMalfunction.xlsx', 'rb'), sheet_name='ANN full data for BOD') elif OutputName == 'COD': Output_num=2 if InputNum == 6: Data = pd.read_excel(open(r'C:\Users\Elisa\test_conv_Fatone\biowin_withMalfunction.xlsx', 'rb'), sheet_name='ANN full data for COD_6ParamsD') elif InputNum ==12: Data = pd.read_excel(open(r'C:\Users\Elisa\test_conv_Fatone\biowin_withMalfunction.xlsx', 'rb'), sheet_name='ANN full data for COD') else: Output_num=3 if InputNum == 6: Data = pd.read_excel(open(r'C:\Users\Elisa\test_conv_Fatone\biowin_withMalfunction.xlsx', 'rb'), sheet_name='ANN full data for TSS_6Params') elif InputNum ==12: Data = pd.read_excel(file_name, sheet_name="ANN full data for TSS") index = Output_num -3; X = Data[0:end-2,0:end]the error is: Traceback (most recent call last): File "C:\Users\Elisa\test_conv\ANN_Converted.py", line 42, in <module> X = Data[0:end-2,0:end] NameError: name 'Data' is not definedIt seems that the variable Data is not created by pd.read_excel; in fact, if I try print(Data), it does not exist. Can anybody help me find the problem/problems?Can I share the input excel file? How? | Think about your conditions: what happens if every individual test is False?There is a path through your decision tree in which no file is opened. That is what is happening here, so Data doesn't exist, as you determined.In this case the problem is that input() returns a string, whereas you are testing against an integer. So either test for strings:if InputNum == "6"or cast InputNum to an int:InputNum = int(InputNum)before you do any testing.
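A sketch of the cast: convert once, right after reading, and the integer comparisons in every branch then work as written.

InputNum = int(input('please enter the number of input parameters (6 or 12) = '))
if InputNum == 6:
    ...   # read the 6-parameter sheet
elif InputNum == 12:
    ...   # read the 12-parameter sheet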
My scrollbar is not working with mouse's scroller I have reused the code.I am trying to scroll this frame and the scrollbar is working butI want it to be scrolled using the scroller of mouse.What should I do?I want it to be scrolled vertically only.from tkinter import *root = Tk()root['bg'] = 'wheat'frame_container=Frame(root, width = 1000)frame_container['bg'] = 'wheat'canvas_container=Canvas(frame_container, width = 1000)canvas_container['bg'] = 'wheat'frame2=Frame(canvas_container, width = 1000)frame2['bg'] = 'wheat'scrollbar_tk = Scrollbar(frame_container, orient="vertical",command=canvas_container.yview)#, yscrollcommand=scrollbar_tk.set # will be visible if the frame2 is to to big for the canvascanvas_container.create_window((0,0),window=frame2,anchor='nw')naan = IntVar()roti=IntVar()dal=IntVar()manchurian = IntVar()makhani=IntVar()masala_bhindi = IntVar()chole = IntVar()rajma = IntVar()shahi_panneer = IntVar()kadahi_paneer = IntVar()masala_gobhi = IntVar()allo_gobhi = IntVar()matar_paneer = IntVar()menu_roti = "Tava Roti 25 ₹/piece"menu_dal = "Dal 80 ₹/bowl"menu_makhani = "Dal Makhni 110 ₹/bowl"menu_naan = "Naan 50 ₹/piece"menu_manchurian = "Manchurian 110 ₹/plate" menu_shahi_panneer = "Shahi paneer 110₹/bowl"menu_kadahi_paneer = "Kadhai paneer 150/bowl"menu_masala_gobhi = "Masala gobhi 130₹/bowl"menu_allo_gobhi = "Aloo gobhi 120₹/bowl" menu_matar_paneer = "Matar paneer 135₹/bowl"menu_masala_bhindi = "Masala bhindi 110₹/bowl"menu_chole = "Chole 100₹/bowl" menu_rajma = "Rajama 150₹/bowl"menu_chaap = "Chaap 125₹/bowl"menu_aloo_parntha = "Aloo parantha 35₹/peice" menu_cheele = "Cheele 55₹/peice "listItems = [menu_roti,menu_dal,menu_makhani, menu_naan, menu_manchurian, menu_shahi_panneer, menu_kadahi_paneer, menu_masala_gobhi, menu_allo_gobhi, menu_matar_paneer, menu_masala_bhindi, menu_chole, menu_rajma, menu_chaap, menu_aloo_parntha, menu_cheele]Title = Label(frame2, text = " Food Items Prices Quantities", fg = 'red', bg = 'wheat', font= ("arial", 30))Title.grid()for item in listItems: label = Label(frame2,text=item, fg = 'yellow', bg = 'wheat', font=("arial", 30)) label.grid(column=0, row=listItems.index(item)+1)q_roti = Entry(frame2, font=("arial",20), textvariable = roti, fg="Black", width=10)q_roti.grid(column = 1, row = 1)q_dal = Entry(frame2, font=("arial",20), textvariable = dal, fg="black", width=10)q_dal.grid(column = 1, row = 2)q_makhani = Entry(frame2, font=("arial",20), textvariable = makhani, fg="black", width=10)q_makhani.grid(column = 1, row = 3)q_naan = Entry(frame2, font=("arial",20), textvariable = naan, fg="black", width=10)q_naan.grid(column = 1, row = 4)q_manchurian = Entry(frame2,font=("arial",20), textvariable = manchurian, fg="black", width=10)q_manchurian.grid(column = 1, row = 5)q_shahi_panneer = Entry(frame2, font=("arial",20), textvariable = shahi_panneer, fg="black", width=10)q_shahi_panneer.grid(column = 1, row = 6)q_kadahi_panneer = Entry(frame2, font=("arial",20), textvariable = kadahi_paneer, fg="black", width=10)q_kadahi_panneer.grid(column = 1, row = 7)q_masala_gobhi = Entry(frame2, font=("arial",20), textvariable = masala_gobhi, fg="black", width=10)q_masala_gobhi.grid(column = 1, row = 8)q_allo_gobhi = Entry(frame2, font=("arial",20), textvariable = allo_gobhi, fg="black", width=10)q_allo_gobhi.grid(column = 1, row = 9)q_matar_panneer = Entry(frame2, font=("arial",20), textvariable = matar_paneer, fg="black", width=10)q_matar_panneer.grid(column = 1, row = 10)q_masala_bhindi = Entry(frame2, font=("arial",20), textvariable = masala_bhindi, 
fg="black", width=10)q_masala_bhindi.grid(column = 1, row = 11)q_cholle = Entry(frame2,font=("arial",20), textvariable = chole, fg="black", width=10)q_cholle.grid(column = 1, row = 12)q_rajma = Entry(frame2,font=("arial",20), textvariable = rajma, fg="black", width=10)q_rajma.grid(column = 1, row = 13)frame2.update() # update frame2 height so it's no longer 0 ( height is 0 when it has just been created )canvas_container.configure(yscrollcommand=scrollbar_tk.set, scrollregion="0 0 0 %s" % frame2.winfo_height()) # the scrollregion mustbe the size of the frame inside it, #in this case "x=0 y=0 width=0 height=frame2height" #width 0 because we only scroll verticaly so don't mind about the width.canvas_container.grid(column = 1, row = 0)scrollbar_tk.grid(column = 0, row = 0, sticky='ns')frame_container.grid()#.pack(expand=True, fill='both')root.mainloop()Sorry for this code. This is not much understandable but maybe it is sufficient for someone of my level. please someone give me some advices to improve my skills. | You can use <MouseWheel> virtual event to scroll the canvas and ultimately the frame.canvas_container.create_window((0,0),window=frame2,anchor='nw')def _on_mousewheel(event): canvas_container.yview_scroll(-1*int(event.delta/120), "units") canvas_container.bind_all("<MouseWheel>", _on_mousewheel) |
Authorization header of GET request in python/wsgi I'm in the process of creating a POST/GET API in Python 3. I'm running Apache2 connected to a WSGI script. I've managed to retrieve very simple GET requests succesfully. My code so far:def application(environ, start_response): status = '200 OK' output = b'Hello' print(environ) # print(environ['HTTP_AUTHORIZATION']) response_headers = [('Content-type', 'text/plain'),('Content-Length', str(len(output)))] start_response(status, response_headers) return [output]I use reqbin to test-send GET requests to my server. When you enter a token inside the Bearer token field, it is automatically added to the headers. I tested this with a server I have a bearer token for and validation completes succesfully, so I know reqbin is actually sending the token.However, I seem to be unable to acces the authorization header on my server. Apparently, it should be inside the environ object prefixed by HTTP_. But printing environ['HTTP_AUTHORIZATION'] yields a KeyError. I then tried printing the full environ object and retrieved it from the apache log:{ 'mod_wsgi.listener_port': '443', 'CONTEXT_DOCUMENT_ROOT': '/var/www/gosharing', 'SERVER_SOFTWARE': 'Apache/2.4.41 (Ubuntu)', 'SCRIPT_NAME': '', 'mod_wsgi.enable_sendfile': '0', 'mod_wsgi.handler_script': '', 'SERVER_SIGNATURE': '<address>Apache/2.4.41 (Ubuntu) Server at domain.ext Port 443</address>\\n', 'REQUEST_METHOD': 'GET', 'PATH_INFO': '/', 'SERVER_PROTOCOL': 'HTTP/1.1', 'QUERY_STRING': '', 'wsgi.errors': <mod_wsgi.Log object at 0x7f0b517c0c10>, 'HTTP_X_REAL_IP': '2a02:a44a:ea1e:1:9053:2c7a:daaa:16', 'HTTP_USER_AGENT': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36', 'SERVER_NAME': 'domain.ext', 'REMOTE_ADDR': '206.189.205.251', 'mod_wsgi.queue_start': '1644325870796726', 'mod_wsgi.request_handler': 'wsgi-script', 'apache.version': (2, 4, 41), 'mod_wsgi.version': (4, 6, 8), 'wsgi.url_scheme': 'https', 'PATH_TRANSLATED': '/var/www/gosharing/gosharing.wsgi/', 'SERVER_PORT': '443', 'mod_wsgi.total_requests': 0L, 'wsgi.multiprocess': False, 'SERVER_ADDR': '185.45.113.35', 'DOCUMENT_ROOT': '/var/www/gosharing', 'mod_wsgi.process_group': 'gosharing', 'mod_wsgi.thread_requests': 0L, 'mod_wsgi.daemon_connects': '1', 'mod_wsgi.request_id': 'sn1scyGWCVM', 'SCRIPT_FILENAME': '/var/www/gosharing/gosharing.wsgi', 'SERVER_ADMIN': 'webmaster@localhost', 'mod_wsgi.ignore_activity': '0', 'wsgi.input': <mod_wsgi.Input object at 0x7f0b48f01030>, 'HTTP_HOST': 'domain.ext', 'CONTEXT_PREFIX': '', 'wsgi.multithread': True, 'mod_wsgi.callable_object': 'application', 'mod_wsgi.daemon_restarts': '0', 'REQUEST_URI': '/', 'HTTP_ACCEPT': '*/*', 'mod_wsgi.path_info': '/', 'wsgi.file_wrapper': <type 'mod_wsgi.FileWrapper'>, 'wsgi.version': (1, 0), 'GATEWAY_INTERFACE': 'CGI/1.1', 'wsgi.run_once': False, 'mod_wsgi.script_name': '', 'REMOTE_PORT': '39762', 'mod_wsgi.listener_host': '', 'REQUEST_SCHEME': 'https', 'SSL_TLS_SNI': 'domain.ext', 'wsgi.input_terminated': True, 'mod_wsgi.script_start': '1644325870815229', 'mod_wsgi.application_group': '', 'mod_wsgi.script_reloading': '1', 'mod_wsgi.thread_id': 1, 'mod_wsgi.request_start': '1644325870796210', 'HTTP_ACCEPT_ENCODING': 'deflate, gzip', 'mod_wsgi.daemon_start': '1644325870800682'}In fact, I can add any header on reqbin and be able to see it in my apache log, except for the authorization header. Maybe it is in a more protected place? Please help me out here. | I figured it out. 
In your 000-default-le-ssl.conf or 000-default.conf file (depending on whether you use a secure connection or not) you're supposed to turn on authorization passing manually by writing WSGIPassAuthorization On inside your VirtualHost tag:<VirtualHost *:443> # or port 80 if you are using an insecure connection # [...] WSGIPassAuthorization On # [...]</VirtualHost> |
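With WSGIPassAuthorization On in place, the header appears in environ under the HTTP_ prefix; a sketch of reading a bearer token:

def application(environ, start_response):
    auth = environ.get('HTTP_AUTHORIZATION', '')          # e.g. "Bearer abc123"
    token = auth[len('Bearer '):] if auth.startswith('Bearer ') else None
    # ... validate token, then respond as before ...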
How to separate a string of repeating characters? All continuous groups of characters must be grouped together and put into a list. For example, if I have this string:1112221121I would want to split this into a list:['111', '222', '11', '2', '1']`Another example would be 0011100000Output: ['00', '111', '00000']This is what I've come up with:In [146]: t = '0011100000' ...: out = [] ...: prev = None ...: for c in t: ...: if c != prev: ...: prev = c ...: out.append('') ...: out[-1] += c ...: In [147]: outOut[147]: ['00', '111', '00000']Is there a simpler solution? I think I am overthinking this. | itertools.groupby does just that:>>> from itertools import groupby>>> [''.join(g) for _, g in groupby('1112221121')]['111', '222', '11', '2', '1'] |
How to execute script in Anaconda with different installed Python versions? I want to run a script in Anaconda using Python 2.7.I am using Windows 8 with Anaconda 3 and Python 3.6.5.I created another environment with python 2.7.15 and activated it in Anaconda Prompt like advised here: https://conda.io/docs/user-guide/tasks/manage-python.htmlHow can I run this:print "HellWorld!"I remember there was a way to run the script from the spyder console just adding the version to the command line but I cannot remember the syntax.What I did so far:I activated py27 by typing into Anaconda Prompt:conda activate py27I checked if it was correctly activated (yes, it was) by:python --version | Using conda and environments is easy, once you get to know how to manage environments. When creating an environment you may choose the python version to use and also what other libraries. Let's begin creating two different environments.jalazbe@DESKTOP:~$ conda create --name my-py27-env python=2.7jalazbe@DESKTOP:~$ conda create --name my-py36-env python=3.6It might prompt to you a message like: The following NEW packages will be INSTALLED: ca-certificates: 2018.03.07-0 certifi: 2018.8.13-py27_0 libedit: 3.1.20170329-h6b74fdf_2 libffi: 3.2.1-hd88cf55_4Proceed ([y]/n)?Just type Y and press EnterSo now you have two environments. One of them with python 2.7 and the other one with python 3.6 Before executing any script you need to select which environment to use. In this example I'll you the environment with python 2.7jalazbe@DESKTOP:~$ conda activate my-py27-envOnce you activate an environment you will see it at the left side of the prompt and between parenthesis like this (environment-name)(my-py27-env) jalazbe@DESKTOP:~$So now everything to execute will use the libraries in the environment. if you execute python -V(my-py27-env) jalazbe@DESKTOP:~$ python -VThe output will be:Python 2.7.15 :: Anaconda, Inc.Then you may change to another environment by first: exit the one you are on (called deactivate) and entering (activating) the other environment like this:(my-py27-env) jalazbe@DESKTOP:~$ conda deactivatejalazbe@DESKTOP:~$ jalazbe@DESKTOP:~$ conda activate my-py36-env(my-py36-env) jalazbe@DESKTOP:~$ At this point if you execute python -V you get(my-py36-env) jalazbe@DESKTOP:~$ python -VPython 3.6.5 :: Anaconda, Inc.To answer your question you need two environments with different libraries and python versions. When executing an script you have to choose which environment to use.For further usage of conda commands see conda cheet sheet or read documentation about conda |
How to stop Scrapy Selector wrap an xml with html? I do this:xmlstr="<root><first>info</first></root>"res = Selector(text=xmlstr).xpath('.').getall()print(res)The output is:['<html><body><root><first>info</first></root></body></html>']How can I stop Selector wrapping the xml with html and body? Thanks | scrapy.Selector assumes html, but takes a type argument to change that. type defines the selector type, it can be "html", "xml" or None (default). If type is None, the selector automatically chooses the best type based on response type (see below), or defaults to "html" in case it is used together with text.So, to make an xml selector, simply use Selector(text=xmlstr, type='xml') |
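For the example above, the XML selector then returns the document without the wrapper (output shown as expected from lxml's XML parsing):

res = Selector(text=xmlstr, type='xml').xpath('.').getall()
print(res)   # ['<root><first>info</first></root>']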
Assigning current 'User' as foreign key to nested serializers I am trying to assign current 'User' to two models using nested serializers.class UserAddressSerializer(serializers.ModelSerializer): class Meta: model = UserAddress fields = ('user', 'address_1', 'address_2', 'country', 'state_province', 'city', 'zip_code')class UserProfileSerializer(serializers.ModelSerializer): user_address = UserAddressSerializer() user = serializers.HiddenField(default=serializers.CurrentUserDefault()) class Meta: model = UserProfile fields = ('user', 'first_name', 'middle_name', 'last_name', 'title', 'display_name', 'time_zone', 'user_address', 'default_office') def create(self, validated_data): user = validated_data.pop('user') user_address_data = validated_data.pop('user_address') user_address_object = UserAddress.objects.create( user=user, **user_address_data) user_profile_object = UserProfile.objects.create( user=user, **validated_data) return userWhat I am getting is this output in Postman.{ "user_address": { "user": [ "This field is required." ] }}I want to know a way to pass 'User' to both of these model creation. | You need to remove user from fields of UserAddressSerializer:class UserAddressSerializer(serializers.ModelSerializer): class Meta: model = UserAddress fields = ('address_1', 'address_2', 'country', # <-- Here 'state_province', 'city', 'zip_code') |
Pandas series giving incorrect sum Why is this Pandas series giving sum = .99999999 whereas the answer should be 1? In my program, I need to assert that the sum is equal to 1, and the assertion fails even though the condition is correct.s = pd.Series([0.41,0.25,0.25,0.09])print("Pandas version = " + pd.__version__)print(s)print(type(s))print(type(s.values))print(s.values.sum())The output is:Pandas version = 0.23.40 0.411 0.252 0.253 0.09dtype: float64<class 'pandas.core.series.Series'><class 'numpy.ndarray'>0.9999999999999999 | Use np.isclose to determine whether two values are equal to within a tolerance. The 0.9999... result is an artifact of how floats are stored in the machine: most decimal fractions (0.41 and 0.09, for example) have no exact binary representation, so the sum accumulates rounding error.
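A sketch of the tolerant assertion, using either numpy or the stdlib:

import math
import numpy as np

total = s.values.sum()
assert np.isclose(total, 1.0)     # numpy
assert math.isclose(total, 1.0)   # stdlib, Python 3.5+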
print text inside parent div beautifulsoup I'm trying to fetch each product's name and price from https://www.daraz.pk/catalog/?q=risk but nothing shows up.containers = page_soup.find_all("div",{"class":"c2p6A5"})for container in containers: pname = container.findAll("div", {"class": "c29Vt5"}) name = pname[0].text price1 = container.findAll("span", {"class": "c29VZV"}) price = price1[0].text print(name) print(price) | If the page is dynamic, Selenium should take care of that:
from bs4 import BeautifulSoup
from selenium import webdriver

browser = webdriver.Chrome()
browser.get('https://www.daraz.pk/catalog/?q=risk')
r = browser.page_source
page_soup = BeautifulSoup(r, 'html.parser')

containers = page_soup.find_all("div", {"class": "c2p6A5"})
for container in containers:
    pname = container.findAll("div", {"class": "c29Vt5"})
    name = pname[0].text
    price1 = container.findAll("span", {"class": "c29VZV"})
    price = price1[0].text
    print(name)
    print(price)

browser.close()
output:
Risk Strategy Game
Rs. 5,900
Risk Classic Board Game
Rs. 945
RISK - The Game of Global Domination
Rs. 1,295
Risk Board Game
Rs. 1,950
Risk Board Game - Yellow
Rs. 3,184
Risk Board Game - Yellow
Rs. 1,814
Risk Board Game - Yellow
Rs. 2,086
Risk Board Game - The Game of Global Domination
Rs. 975
...
Save unittest results in text file I'm writing code that tests via unittest whether several elements exist on a certain homepage. After the test I want the results saved in a text file, but the results in the text file look like this:

...............................
------------------------------------------
Ran 12 tests in 22.562s

OK.

But I want the output to look like this:

test_test1 (HomepageTest.HomePageTest) ... ok
test_test2 (HomepageTest.HomePageTest) ... ok
test_test3 (HomepageTest.HomePageTest) ... ok
etc....
-------------------------------------------------
Ran 12 tests in ...s

OK

This is the code I use for saving the output into a text file:

class SaveTestResults(object):
    def save(self):
        self.f = open(log_file, 'w')
        runner = unittest.TextTestRunner(self.f)
        unittest.main(testRunner=runner, defaultTest='suite', verbosity=2)

def main():
    STR = SaveTestResults()
    STR.save()

if __name__ == '__main__':
    main()

What am I missing or doing wrong? | First, note that when you pass an already constructed runner instance, unittest.main() ignores its own verbosity argument (it only applies verbosity when it instantiates the runner class itself), so build the runner with the verbosity you want: unittest.TextTestRunner(self.f, verbosity=2).

Beyond that, if the output you wish to save corresponds to what is printed to the console, you have two main options.

1 - You're using Linux

Then just redirect the output to a file:

python script.py > output.txt

However, the output will not be printed to the console anymore. If you want to keep the console output, use the tee unix command:

python script.py | tee output.txt

2 - You're using Windows, or you don't want to redirect the whole output to a file

You can achieve more or less the same thing using Python exclusively. You need to set sys.stdout to the file object where you want the output to be written:

import sys
sys.stdout = open("output.txt", 'w')
run_tests()

This sets the output stream stdout to the given file for the whole script. I would suggest defining a decorator instead:

def redirect_to_file(func):
    def decorated(*args, **kwargs):
        actualStdout = sys.stdout
        sys.stdout = open("log.txt", 'a')
        result = func(*args, **kwargs)
        sys.stdout = actualStdout
        return result
    return decorated

Then just decorate the functions whose output you want written to a file:

@redirect_to_file
def run_test():
    ...

If you want a similar behaviour to tee, have a look at this post. The idea is to define a Tee class that holds the two desired streams:

class Tee:
    def __init__(self, stream1, stream2):
        self.stream1 = stream1
        self.stream2 = stream2

    def write(self, data):
        self.stream1.write(data)
        self.stream2.write(data)

    def flush(self):
        # needed so buffered test output reaches both streams promptly
        self.stream1.flush()
        self.stream2.flush()

    def close(self):
        self.stream1.close()
        self.stream2.close()

Then set sys.stdout to a Tee instance, one of whose streams is the actual stdout:

tee = Tee(sys.stdout, open("output.txt", 'w'))
sys.stdout = tee

Don't forget to close the tee instance at the end of your script; otherwise the data written to output.txt may not be saved:

tee.close()
Multiple metrics to specific inputs I have multiple losses and metrics, whether custom or imported from keras. Is there a way to specify which model outputs should be fed to which metric, instead of all of them being printed or calculated? | Yes, you can pass the losses/metrics as a dictionary that maps layer names to losses/metrics. A quote from the documentation:

loss: ... If the model has multiple outputs, you can use a different loss on each output by passing a dictionary or a list of losses. The loss value that will be minimized by the model will then be the sum of all individual losses.

and metrics:

... To specify different metrics for different outputs of a multi-output model, you could also pass a dictionary, such as metrics={'output_a': 'accuracy'}.

Example:

model.compile(
    optimizer='rmsprop',
    loss={'output_1': 'loss_1', 'output_2': 'loss_2'},
    loss_weights={'output_1': 1., 'output_2': 0.2},
    metrics={'output_1': 'metric_1', 'output_2': ['metric_2', 'metric_3']}
)

You can read more about multi-output models with Keras in: https://keras.io/getting-started/functional-api-guide/#multi-input-and-multi-output-models
Power function from math module seems to stop working in Python So i'm trying to write a program which finds a Pythagorean triplet, checks if all the numbers which make up the triplet add up to 1000, and if they do, multiplies the 3 numbers together and outputs the result. Here is my sample code:

import math

numbers = [1, 2, 3]
found = False
while not found:
    if (math.pow(numbers[0], 2) + math.pow(numbers[1], 2)) == (math.pow(numbers[2], 2)):  # checks to see if it's a Pythagorean triplet
        total = 0
        for x in numbers:  # adds the 3 numbers together
            total += x
        if total == 1000:  # if the total of the three numbers is 1000, multiply them all together
            product = 1
            for y in numbers:
                product *= y
            print(product)
            found = True  # print the product total and end the while loop
        else:
            numbers = [z + 1 for z in numbers]  # if the total isn't 1000, just add 1 to each of the three numbers
            print(numbers)
    else:
        numbers = [z + 1 for z in numbers]  # if the three numbers aren't a Pythagorean triplet, add 1 to each number

When the first triplet has been found the program seems to stop working. It doesn't seem to be able to identify any Pythagorean triplets anymore, so I guess this is due to the "pow" function not working correctly anymore? I am new to programming so would appreciate any advice on how to overcome this and also how I could improve efficiency as well! | Turns out, your math is incorrect. On each iteration, every number in the triplet is increased by 1. After a iterations, in order for it to be a Pythagorean triplet, the following must hold true:

(a + 1)**2 + (a + 2)**2 == (a + 3)**2

Here 1, 2 and 3 inside the parentheses are the initial contents of the list numbers. This simplifies to

2*a**2 + 6*a + 5 == a**2 + 6*a + 9

which is true only for a == 2. So the only Pythagorean triplet of the form (n, n+1, n+2) is (3, 4, 5); its sum is 12, not 1000, so your code steps past it and then loops forever, because a is always increasing and the condition can never hold again. pow is working fine; there is simply no second triplet of this shape to find.
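Since the sum is fixed at 1000, one of the three numbers is determined by the other two (c = 1000 - a - b), so a common fix is to search over two independent values instead of incrementing all three in lockstep. A minimal sketch of that approach:

for a in range(1, 1000):
    for b in range(a + 1, 1000 - a):
        c = 1000 - a - b
        if a * a + b * b == c * c:
            print(a * b * c)  # prints 31875000 for the triplet (200, 375, 425)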
Cannot parse address which contain ".html#/something" using bs4 in python3 My goal is to parse images from the second page. I am using bs4 and Python 3 for this. Please look at those two pages:

1) Only page with images for all 4 colors (I can parse this page);

2) And page which contains images only for 1 color (chrome color in this example). I need to parse this page.

Using a browser I can see that the second page is different from the first one. But using bs4 I get similar results for the first and second page, as Python doesn't recognize the ".html#/kolor-chrom" part of the second page's address.

First page address: "https://azzardo.com.pl/lampy-techniczne/2111-bross-1-tuba-lampa-techniczna-azzardo.html".

Second page address: "https://azzardo.com.pl/lampy-techniczne/2111-bross-1-tuba-lampa-techniczna-azzardo.html#/kolor-chrom".

Code to reproduce:

from bs4 import BeautifulSoup
import requests

adres1 = "https://azzardo.com.pl/lampy-techniczne/2111-bross-1-tuba-lampa-techniczna-azzardo.html"
adres2 = "https://azzardo.com.pl/lampy-techniczne/2111-bross-1-tuba-lampa-techniczna-azzardo.html#/kolor-chrom"

def parse_one_page(adres):
    """Parse one page and get all the img src from adres"""
    # Use headers so the script is not blocked
    headers = {'User-Agent': 'Mozilla/5.0'}
    # Get page
    page = requests.get(adres, headers=headers)  # read_timeout=5
    # Get all of the html code
    soup = BeautifulSoup(page.content, 'html.parser')
    # Find div
    divclear = soup.find_all("div", class_="clearfix")
    divclear = divclear[9]
    # Find img tag
    imgtag = [i.find_all("img") for i in divclear][0]
    # Find src
    src = [i["src"] for i in imgtag]
    # See how many images are here
    print(len(src))
    # return list with img src
    return src

print(parse_one_page(adres1))
print(parse_one_page(adres2))

After running this code you will see that the output from the two addresses is the same: 24 images from both. On the first page there are 24 images (that's correct), but on the second page there must be only 2 images, not 24 (incorrect)!

So I hope someone can explain how to parse the second page in Python 3 using bs4 correctly. | The "#/kolor-chrom" part of the URL is a fragment: browsers never send it to the server, so requests receives exactly the same HTML document for both addresses, and bs4 only parses what it is given. The per-colour view is built client-side by JavaScript after the page loads, which is why it is not possible to get it with bs4 alone. To get the colour-specific images you need either a tool that actually executes the JavaScript (e.g. Selenium driving a real browser) or the underlying AJAX endpoint the page calls for the chosen colour.
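A rough Selenium sketch of the idea (it assumes a matching chromedriver is installed; the selectors from the question may need adjusting once the rendered DOM is inspected):

from bs4 import BeautifulSoup
from selenium import webdriver

browser = webdriver.Chrome()
browser.get("https://azzardo.com.pl/lampy-techniczne/"
            "2111-bross-1-tuba-lampa-techniczna-azzardo.html#/kolor-chrom")
# page_source now reflects the JavaScript-built, chrome-colour view
soup = BeautifulSoup(browser.page_source, "html.parser")
browser.quit()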
Python: Scaling numbers column by column with pandas I have a Pandas data frame 'df' in which I'd like to perform some scalings column by column. In column 'a', I need the maximum number to be 1, the minimum number to be 0, and all others to be spread accordingly. In column 'b', however, I need the minimum number to be 1, the maximum number to be 0, and all others to be spread accordingly. Is there a Pandas function to perform these two operations? If not, numpy would certainly do.

    a    b
A  14  103
B  90  107
C  90  110
D  96  114
E  91  114 | This is how you can do it using sklearn and the preprocessing module. Sci-Kit Learn has many pre-processing functions for scaling and centering data.

In [0]: from sklearn.preprocessing import MinMaxScaler

In [1]: df = pd.DataFrame({'A': [14, 90, 90, 96, 91],
   ...:                    'B': [103, 107, 110, 114, 114]}).astype(float)

In [2]: df
Out[2]:
    A    B
0  14  103
1  90  107
2  90  110
3  96  114
4  91  114

In [3]: scaler = MinMaxScaler()

In [4]: df_scaled = pd.DataFrame(scaler.fit_transform(df), columns=df.columns)

In [5]: df_scaled
Out[5]:
          A         B
0  0.000000  0.000000
1  0.926829  0.363636
2  0.926829  0.636364
3  1.000000  1.000000
4  0.939024  1.000000
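Note that MinMaxScaler maps every column's minimum to 0 and maximum to 1. For column 'b', where the minimum should become 1 and the maximum 0, you can simply flip the scaled values afterwards; continuing from df_scaled above:

df_scaled['B'] = 1 - df_scaled['B']  # reverse the scale for column B only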
Reading variable number of columns in pandas I have a poorly formatted delimited file in which there are errors with the delimiter, so it sometimes appears that there is an inconsistent number of columns in different rows. When I run

pd.read_csv('patentHeader.txt', sep="|", header=0)

the process dies with this error:

CParserError: Error tokenizing data. C error: Expected 9 fields in line 1034558, saw 15

Is there a way to have pandas skip these lines and continue? Or, put differently, is there some way to make read_csv more flexible about how many columns it encounters? | Try this:

pd.read_csv('patentHeader.txt', sep="|", header=0, error_bad_lines=False)

error_bad_lines: if False, any lines with too many fields are dropped instead of raising an error, and a warning is reported for each skipped line once reading is done. (In recent pandas versions this flag has been replaced by on_bad_lines='skip'.)
Displaying ggplot2 graphs from R in Jupyter When I create a plot in Jupyter using the ggplot2 R package, I get a link to the chart that says "View PDF" instead of the chart being presented inline. I know that traditionally in IPython Notebook you were able to show the charts inline using the %matplotlib magic function. Does Jupyter have something similar for R and ggplot2? What do I need to do to show the graph inline versus as a link to a PDF? | You can show the graphs inline with this option:

options(jupyter.plot_mimetypes = 'image/png')

You can also produce pdf files as you would regularly in R, e.g.

pdf("test.pdf")
ggplot(data.frame(a=rnorm(100,1,10)), aes(a)) + geom_histogram()
dev.off()
Best practise to apply several rules on 1 string i'm getting a url as a string and need to apply several rules to it. The first rule is to remove anchors, then remove '../' notation, because urljoin joins urls incorrectly in some cases, and finally remove the trailing slash. For now i have this code:

def construct_url(parent_url, child_url):
    url = urljoin(parent_url, child_url)
    url = url.split('#')[0]
    url = url.replace('../', '')
    url = url.rstrip('/')
    return url

But i don't think this is the best practice. I think it can be done much simpler. Could you help me please? Thanks. | Unfortunately, there isn't much that could really make your function simpler here, since you're dealing with some pretty odd cases. But you can make it more robust by using Python's urlparse.urlsplit() to split the URL into well-defined components, do your processing, and put it back together with urlparse.urlunsplit():

from urlparse import urljoin
from urlparse import urlsplit
from urlparse import urlunsplit

def construct_url(parent_url, child_url):
    url = urljoin(parent_url, child_url)
    scheme, netloc, path, query, fragment = urlsplit(url)
    path = path.replace('../', '')
    path = path.rstrip('/')
    url = urlunsplit((scheme, netloc, path, query, ''))
    return url

parent_url = 'http://user:[email protected]'
child_url = '../../../chrome/#foo'
print construct_url(parent_url, child_url)

Output:

http://user:[email protected]/chrome

Using the tools from urlparse has the advantage that you know exactly what your processing operates on (path and fragment in your case), and it handles things like user credentials, query strings, parameters etc. for you.

Note: contrary to what I suggested in the comments, urljoin does in fact normalize URLs:

>>> from urlparse import urljoin
>>> urljoin('http://google.com/foo/bar', '../qux')
'http://google.com/qux'

But it does so by strictly following RFC 1808. From RFC 1808 Section 5.2: Abnormal Examples:

Within an object with a well-defined base URL of

Base: <URL:http://a/b/c/d;p?q#f>

[...] Parsers must be careful in handling the case where there are more relative path ".." segments than there are hierarchical levels in the base URL's path. Note that the ".." syntax cannot be used to change the <net_loc> of a URL.

../../../g    = <URL:http://a/../g>
../../../../g = <URL:http://a/../../g>

So urljoin does exactly the right thing by preserving those extraneous ../, and therefore you need to remove them by manual processing.
Trouble with making background of image transparent in python using pygame I have a rather confusing problem in running our game. I am trying to make a game using Python's pygame and I am using images downloaded from the internet. The problem is that some images have a white background and some have a colored background. I used Photoshop to get rid of the white background and re-saved the image. However, when I ran the simulation in Python, it gave me the original picture with the original background. This is slightly perplexing to me. Here's the part of the code using pygame that I used to implement the image:

self.image = pygame.image.load("jellyfishBad.png").convert()
self.image.set_colorkey(white)
self.rect = self.image.get_rect()

Thanks. | You need to use the .convert_alpha() method when loading the image for per-pixel transparency. So:

self.image = pygame.image.load("jellyfishBad.png").convert_alpha()
Attempting to display total amount_won for each user in database via For loop I'm trying to display the sum of amount_won for each user_name in the database. My database is:

Stakes table
id
player_id
stakes
amount_won
last_play_date

Player table
id
user_name
real_name
site_played

models.py

class Player(models.Model):
    user_name = models.CharField(max_length=200)
    real_name = models.CharField(max_length=200)
    SITE_CHOICES = (
        ('FTP', 'Full Tilt Poker'),
        ('Stars', 'Pokerstars'),
        ('UB', 'Ultimate Bet'),
    )
    site_played = models.CharField(max_length=5, choices=SITE_CHOICES)

    def __unicode__(self):
        return self.user_name

    def was_created_today(self):
        return self.pub_date.date() == datetime.date.today()

class Stakes(models.Model):
    player = models.ForeignKey(Player)
    stakes = models.CharField(max_length=200)
    amount_won = models.DecimalField(max_digits=12, decimal_places=2)
    last_play_date = models.DateTimeField('Date Last Updated')

    def __unicode__(self):
        return self.stakes

class PlayerForm(ModelForm):
    class Meta:
        model = Player

class StakesForm(ModelForm):
    class Meta:
        model = Stakes

Views.py

def index(request):
    latest_player_list = Player.objects.all().order_by('id')[:20]
    total_amount_won = Stakes.objects.filter(player__user_name='test_username').aggregate(Sum('amount_won'))
    return render_to_response('stakeme/index.html', {
        'latest_player_list': latest_player_list,
        'total_amount_won': total_amount_won
    })

and index.html

<h1> Players </h1>
{% if latest_player_list %}
<ul>
{% for player in latest_player_list %}
    <li><a href="/stakeme/{{ player.id }}/">{{ player.user_name }} </a><br>Total Won: {{ total_amount_won }}</li>
{% endfor %}
</ul>
<br>
{% else %}
<p>No players are available.</p>
{% endif %}
<h3><a href="/stakeme/new/">New Player</a></h3>

If I leave the views.py section as (player__user_name='test_username'), it displays Amount Won as follows: Total Won: {'amount_won__sum': Decimal('4225.00')}, using test_username's amount_won (4225.00) for EVERY user name. Ideally, I'd like it to display Amount Won for each user name in the for loop, shown as "Amount Won: 4225.00" only. I'm starting to understand this is way over my head, but I've read the docs regarding the differences between aggregate and annotate and I can't wrap my head around it. I'm thinking my DB is not set up correctly to use annotate for this, but I obviously could be wrong. | Check out: https://docs.djangoproject.com/en/dev/topics/db/aggregation/

players = Player.objects.annotate(total_amount_won=Sum('stakes__amount_won'))
players[0].total_amount_won  # This will return the 'total amount won' for the 0th player

So you could pass players to your template and loop over it.

EDIT

Your views.py would look like:

def index(request):
    players = Player.objects.annotate(total_amount_won=Sum('stakes__amount_won'))
    return render_to_response('stakeme/index.html', {'players': players,})

The template would look like:

<h1> Players </h1>
{% if players %}
<ul>
{% for player in players %}
<li> <a href="/stakeme/{{ player.id }}/">{{ player.user_name }} </a><br>Total Won: {{ player.total_amount_won }}</li>
{% endfor %}
</ul> <br />
{% else %}
<p>No players are available.</p>
{% endif %}
<h3><a href="/stakeme/new/">New Player</a></h3>
issue with for loop in python only gets the last item I'm a beginner in python; currently I'm trying to automate filling website fields using selenium. I'm trying to iterate over nested lists using a for loop but always get only the last element. Any suggestions why?

fields = [['a','b','c'], ['x','y','z']]
for i in range(len(fields)):
    driver.find_element_by_xpath("element").send_keys(fields[i][0], fields[i][1], fields[i][2])
    driver.find_element_by_xpath("element_save").click()
    # then loop and iterate through 2nd nested list

# OUTPUT = x,y,z

I expect to iterate starting with index 0 to the end of the list. | You don't need range(len(list_)) when iterating over indices only; a plain for loop will do. You can also unpack the inner list with *:

fields = [['a','b','c'], ['x','y','z']]
for i in range(len(fields)):
    driver.find_element_by_xpath("element").send_keys(*fields[i])

You could also iterate through the values of fields itself:

fields = [['a','b','c'], ['x','y','z']]
for field in fields:
    driver.find_element_by_xpath("element").send_keys(*field)
How to render my Sudoku generator results to html table using Django? I am pretty new to Django and currently making a Sudoku web app. I wrote a python program to generate the Sudoku games, here is an example of the result/matrix looks like when i run the code (Sudoku Generator.py). [[3, 8, 2, 7, 5, 6, 1, 4, 9],[1, 4, 5, 2, 3, 9, 6, 7, 8],[6, 7, 9, 1, 4, 8, 2, 3, 5],[2, 1, 3, 4, 6, 5, 8, 9, 7],[4, 5, 6, 8, 9, 7, 3, 1, 2],[7, 9, 8, 3, 1, 2, 4, 5, 6],[5, 2, 1, 6, 7, 3, 9, 8, 4],[8, 3, 7, 9, 2, 4, 5, 6, 1],[9, 6, 4, 5, 8, 1, 7, 2, 3]] My question is, how can I render all these generated numbers to my html file? here is the html codes i've created under the templates: {% extends 'base.html' %}{% block sudoku %}<style>table { border-collapse: collapse; font-family: Calibri, sans-serif; }colgroup, tbody { border: solid medium; }td { border: solid thin; height: 1.4em; width: 1.4em; text-align: center; padding: 0; }</style> <table> <caption>Sudoku of the day</caption> <colgroup><col><col><col> <colgroup><col><col><col> <colgroup><col><col><col> <tbody> <tr> <td> <td> <td> <td> <td> <td> <td> <td> <td> <tr> <td> <td> <td> <td> <td> <td> <td> <td> <td> <tr> <td> <td> <td> <td> <td> <td> <td> <td> <td> <tbody> <tr> <td> <td> <td> <td> <td> <td> <td> <td> <td> <tr> <td> <td> <td> <td> <td> <td> <td> <td> <td> <tr> <td> <td> <td> <td> <td> <td> <td> <td> <td> <tbody> <tr> <td> <td> <td> <td> <td> <td> <td> <td> <td> <tr> <td> <td> <td> <td> <td> <td> <td> <td> <td> <tr> <td> <td> <td> <td> <td> <td> <td> <td> <td> </table>{% endblock %}Basically what I wanted is to get each number populated to each tag accordingly; also, whenever clicks "next game" button, the board will refresh and generate another bunch of numbers to form a new game. Attached is the screen shot of my Django work project directory so far: mysite directoryNow I totally got stuck, not sure if what i've done so far is correct and don't know what to do next... Anyone can help?? | Asuming you use the variable sudoku_numbers to store your array of numbers, then in the template you can use something like this:<table> {% for row in sudoku_numbers %} <tr> {% for col in row %} <td>{{ col }}</td> {% endfor %} </tr> {% endfor %}</table> |
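As for getting the generated numbers into the template in the first place, the view just needs to put the matrix into the context. A rough sketch, where generate_sudoku stands in for whatever function your Sudoku Generator.py exposes:

from django.shortcuts import render

def sudoku(request):
    sudoku_numbers = generate_sudoku()  # your 9x9 list of lists
    return render(request, 'sudoku.html', {'sudoku_numbers': sudoku_numbers})

A "next game" button can then simply reload this view, since every request generates a fresh board.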
How to convert input stdin to list data structure in python I have stdin data in this format:

100
85 92
292 42
88 33
500
350 36
800 45
0

I want something like this:

[[100, [85, 92], [292, 42], [88, 33]], [500, [350, 36], [800, 45], [0]]] | Something like the following (I have tested) should do it:

import sys

lst = []
sublst = []
for line in sys.stdin:
    lineLst = [int(x) for x in line.split()]
    if len(lineLst) == 1:
        # a line with a single number starts a new group
        if sublst:
            lst.append(sublst)
        sublst = lineLst
    else:
        sublst.append(lineLst)
if sublst[0] == 0:
    lst.append(sublst)
How do I save to a specific directory using openpyxl? I am trying to save an Excel workbook I created using openpyxl to a specific directory that the user inputs via a Tkinter "browse" button. I have the workbook saving at the inputted "save spot", but I am getting an error saying that it is a directory. Within the function that produces the workbook, I have:

wb.save(save_spot)

The "save spot" is generated via a function:

def set_save_destination():
    global save_spot
    save_spot = filedialog.askdirectory()
    save_spot = str(save_spot)

The user gets to select the directory via the following Tkinter GUI code, within my GUI class:

monthly_browse = ttk.Button(self, text='Select Save Destination', command=set_save_destination)

The error message that I receive is an "IsADirectoryError", but I am unsure what the issue is, as the docs say you can pass the directory directly to the save method. I am new to programming and completely self-taught, so any help would be great! Thank you! | You need to provide the full path to the desired file, not just the folder. Please see the example below:

from openpyxl import Workbook

wb = Workbook()
ws1 = wb.active
ws1.title = "1st Hour"
wb.save('/home/user/Desktop/FileName.xlsx')

So you might additionally append a filename to the save_spot variable:

save_spot = str(save_spot) + '/filename.xlsx'
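To stay portable across operating systems, the same idea can be written with os.path.join (the filename here is just a placeholder):

import os

wb.save(os.path.join(save_spot, 'filename.xlsx'))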
Defining `fac` with generators. And: why no stack overflow with generators? Is there a way we can define the following code (a classic example for recursion) via generators in Python? I am using Python 3.

def fac(n):
    if n == 0:
        return 1
    else:
        return n * fac(n-1)

I tried this, with no success:

In [1]: def fib(n):
   ...:     if n == 0:
   ...:         yield 1
   ...:     else:
   ...:         n * yield (n-1)
  File "<ipython-input-1-bb0068f2d061>", line 5
    n * yield (n-1)
              ^
SyntaxError: invalid syntax

Classic recursion in Python leads to stack overflow

This classic example leads to a stack overflow on my machine for an input of n=3000. In the Lisp dialect "Scheme" I'd use tail recursion and avoid stack overflow. That's not possible in Python, which is why generators come in handy there. But I wonder:

Why no stack overflow with generators?

Why is there no stack overflow with generators in Python? How do they work internally? Doing some research always leads me to examples showing how generators are used in Python, but not much about the inner workings.

Update 1: yield from my_function(...)

As I tried to explain in the comments section, maybe my example above was a poor choice for making a point. My actual question was targeted at the inner workings of generators used recursively in yield from statements in Python 3. Below is an (incomplete) example code that I use to process JSON files generated by Firefox bookmark backups. At several points I use yield from process_json(...) to recursively call the function again via generators. Exactly in this example, how is stack overflow avoided? Or is it?

# (snip)

FOLDERS_AND_BOOKMARKS = {}
FOLDERS_DATES = {}

def process_json(json_input, folder_path=""):
    global FOLDERS_AND_BOOKMARKS
    # Process the json with a generator
    # (to avoid recursion use generators)
    # https://stackoverflow.com/a/39016088/5115219

    # Is node a dict?
    if isinstance(json_input, dict):
        # we have a dict
        guid = json_input['guid']
        title = json_input['title']
        idx = json_input['index']
        date_added = to_datetime_applescript(json_input['dateAdded'])
        last_modified = to_datetime_applescript(json_input['lastModified'])

        # do we have a container or a bookmark?
        #
        # is there a "uri" in the dict?
        # if not, we have a container
        if "uri" in json_input.keys():
            uri = json_input['uri']
            # return URL with folder or container (= prev_title)
            # bookmark = [guid, title, idx, uri, date_added, last_modified]
            bookmark = {'title': title,
                        'uri': uri,
                        'date_added': date_added,
                        'last_modified': last_modified}
            FOLDERS_AND_BOOKMARKS[folder_path].append(bookmark)
            yield bookmark
        elif "children" in json_input.keys():
            # So we have a container (aka folder).
            #
            # Create a new folder
            if title != "":  # we are not at the root
                folder_path = f"{folder_path}/{title}"
                if folder_path in FOLDERS_AND_BOOKMARKS:
                    pass
                else:
                    FOLDERS_AND_BOOKMARKS[folder_path] = []
                    FOLDERS_DATES[folder_path] = {'date_added': date_added,
                                                  'last_modified': last_modified}

            # run process_json on list of children
            # json_input['children'] : list of dicts
            yield from process_json(json_input['children'], folder_path)

    # Or is node a list of dicts?
    elif isinstance(json_input, list):
        # Process children of container.
        dict_list = json_input
        for d in dict_list:
            yield from process_json(d, folder_path)

Update 2: yield vs yield from

OK, I get it. Thanks to all the comments. So generators via yield create iterators. That has nothing to do with recursion, so no stack overflow here. But generators via yield from my_function(...) are indeed recursive calls of my function, albeit delayed, and only evaluated if demanded. This second example can indeed cause a stack overflow. | OK, after your comments I have completely rewritten my answer.

How does recursion work and why do we get a stack overflow?

Recursion is often an elegant way to solve a problem. In most programming languages, every time you call a function, all the information and state needed for the function is put on the stack, a so-called "stack frame". The stack is a special per-thread memory region, limited in size. Recursive functions implicitly use these stack frames to store state/intermediate results. E.g., the factorial function is n * (n-1) * ((n-1)-1) ... 1, and all these "n-1" values are stored on the stack. An iterative solution has to store these intermediate results explicitly in a variable (which often sits in a single stack frame).

How do generators avoid stack overflow?

Simply: they are not recursive. They are implemented like iterator objects. They store the current state of the computation and return a new result every time you request it (implicitly or with next()). If it looks recursive, that's just syntactic sugar. "yield" is not like return: it yields the current value and then "pauses" the computation. That's all wrapped up in one object and not in a gazillion stack frames. This will give you a series from 1 to n!:

def fac(n):
    if n <= 0:
        yield 1
    else:
        v = 1
        for i in range(1, n+1):
            v = v * i
            yield v

There is no recursion; the intermediate results are stored in v, which most likely lives in one object (on the heap, probably).

What about yield from?

OK, that's interesting, since it was only added in Python 3.3. yield from can be used to delegate to another generator. You gave an example like:

def process_json(json_input, folder_path=""):
    # Some code
    yield from process_json(json_input['children'], folder_path)

This looks recursive, but instead it's a combination of two generator objects. You have your "inner" generator (which only uses the space of one object), and with yield from you say "I'd like to forward all the values from that generator to my caller". So it doesn't generate one stack frame per generator result; instead it creates one object per generator used. In this example, you are creating one generator object per child JSON object. That would probably be the same number of stack frames needed if you did it recursively. You won't see a stack overflow though, because objects are allocated on the heap, where you have a very different size limit, depending on your operating system and settings. On my laptop, running Ubuntu Linux, ulimit -s gives me 8 MB for the default stack size, while my process memory size is unlimited (although I have only 8GB of physical memory).

Look at this documentation page on generators: https://wiki.python.org/moin/Generators

And this QA: Understanding generators in Python

Some nice examples, also for yield from: https://www.python-course.eu/python3_generators.php

TL;DR: Generators are objects, they don't use recursion. Not even yield from, which just delegates to another generator object. Recursion is only practical when the number of calls is bounded and small, or your compiler supports tail call optimization.
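To see the delegation at work in a tiny, self-contained case (an illustrative sketch, not taken from the question): each level of yield from lazily creates one more generator object, and values are forwarded outward.

def countdown(n):
    yield n
    if n > 0:
        yield from countdown(n - 1)  # delegate to a fresh generator object

print(list(countdown(3)))  # [3, 2, 1, 0]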
Python Request JSON I would like to check each JSON content type against my expected type. I receive JSON in my python code like this:

a = request.json['a']
b = request.json['b']

When I check the type of a and b, it always returns unicode. I check it like this:

type(a)  # or
type(b)  # (always returns: type 'unicode')

How do I check if request.json['a'] is str, if request.json['a'] is always unicode? | I suspect you are on Python 2.x and not Python 3 (because in Python 3 both type('a') and type(u'a') are str, not unicode). So in Python 2, what you should know is that str and unicode are both subclasses of basestring, so instead of testing with

if isinstance(x, (str, unicode)):  # equiv. to type(x) is str or type(x) is unicode
    # do something

you can do (Python 2.x):

if isinstance(x, basestring):
    # do something

In Python 3 you don't have to distinguish between str and unicode; just use

if isinstance(x, str):
    # do something
Python does not sort sql query result

results = conn.execute(SEARCH_SQL, dict(fingerprint="{"+fp_str+"}")).fetchall()
print sorted(results)

I retrieve some data from the database using SQLAlchemy. results looks like this:

[(0.515625, u'str1'), (0.625, u'str2'), (0.901042, u'str3')]

However the sort function does not do what I want with the list returned from the sql query. How can I sort the result list? | You have a list of tuples. How would you like to sort them? For example, if you want to sort them according to the first element:

sorted(results, key=lambda t: t[0])

or in reverse order:

sorted(results, key=lambda t: t[0], reverse=True)
list of class objects (birds). each bird has a color. how do I most efficiently get a set of all colors? I have a list of class objects, say birds. Each bird has a color. I want to easily get a set of bird colors from this list of birds. What is the quickest, most efficient way to do this? | That would probably be:

set(bird.color for bird in birds)
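Equivalently, a set comprehension reads a little more directly (colors is just a name for the result):

colors = {bird.color for bird in birds}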
Create multiple Dataframe from XML based on Specific Value I am trying to parse an XML and save the results in a Pandas data-frame. I have succeeded in saving the details in one specific data-frame. However, now I am trying to save the results in multiple data-frames based on one specific class value.

import pandas as pd
import xml.etree.cElementTree as ET
import os
from collections import defaultdict, OrderedDict

tree = ET.parse('PowerChange_76.xml')
root = tree.getroot()
df_list = []
for i, child in enumerate(root):
    for subchildren in child.findall('{raml20.xsd}header'):
        for subchildren in child.findall('{raml20.xsd}managedObject'):
            match_found = 0
            xml_class_name = subchildren.get('class')
            xml_dist_name = subchildren.get('distName')
            print(xml_class_name)
            df_dict = OrderedDict()
            for subchild in subchildren:
                header = subchild.attrib.get('name')
                df_dict['Class'] = xml_class_name
                df_dict['CellDN'] = xml_dist_name
                df_dict[header] = subchild.text
            df_list.append(df_dict)
df_cm = pd.DataFrame(df_list)

Expected result is the creation of multiple data-frames based on the number of 'class' values.

Current Output: XML File | This is being answered with the method below:

def ExtractMOParam(xmlfile2):
    tree2 = etree.parse(xmlfile2)
    root2 = tree2.getroot()
    df_list2 = []
    for i, child in enumerate(root2):
        for subchildren in (child.findall('{raml21.xsd}header') or child.findall('{raml20.xsd}header')):
            for subchildren in (child.findall('{raml21.xsd}managedObject') or child.findall('{raml20.xsd}managedObject')):
                xml_class_name2 = subchildren.get('class')
                xml_dist_name2 = subchildren.get('distName')
                if (xml_class_name2 in GetMOClass) and (xml_dist_name2 in GetCellDN):
                    for subchild in subchildren:
                        df_dict2 = OrderedDict()
                        header2 = subchild.attrib.get('name')
                        df_dict2['MOClass'] = xml_class_name2
                        df_dict2['CellDN'] = xml_dist_name2
                        df_dict2['Parameter'] = header2
                        df_dict2['CurrentValue'] = subchild.text
                        df_list2.append(df_dict2)
    return df_list2

ExtractDump = pd.DataFrame(ExtractMOParam(inputdfile))
d = dict(tuple(ExtractDump.groupby('MOClass')))
for key in d:
    d[key] = d[key].reset_index().groupby(['CellDN', 'MOClass', 'Parameter'])['CurrentValue'].aggregate('first').unstack()
    d[key].reset_index(level=0, inplace=True)
    d[key].reset_index(level=0, inplace=True)

writer = pd.ExcelWriter('ExtractedDump.xlsx', engine='xlsxwriter')
for tab_name, dframe in d.items():
    dframe.to_excel(writer, sheet_name=tab_name, index=False)
writer.save()

Hope this will help others as well.
If my function returns list index, what should it return if position does not exist I wrote a function that returns the index of an item in a list if that item exists, otherwise it returns False:

def student_exists(ID):
    for student in students:
        if student.id == ID:
            return students.index(student)
    return False

But then I realised that this can be an issue later, since elsewhere in my code I do something like this:

if student_exists(ID) == False

which will be true if the function returned index 0. In other words, what can I use instead of False to represent the item not existing in the list? | You can return None if the item does not exist. When you return None, you avoid the index-0 problem. Note that when testing for None you should use: if x is None. The is operator should be used for checking None; otherwise you can experience unexpected behavior (see "is" operator behaves unexpectedly with integers).
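With None as the sentinel, the call site then reads like this (a small sketch using the function from the question):

idx = student_exists(ID)
if idx is not None:
    print(students[idx])
else:
    print("no such student")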
Scikit Learn Categorical data with random forests I am trying to work with the titanic survival challenge in kaggle https://www.kaggle.com/c/titanic. I am not experienced in R, so I am using Python and scikit-learn for the random forest classifier. I see many people using scikit-learn converting their categoricals of many levels into dummy variables. I don't understand the point of doing this; why can't we just map the levels onto numeric values and be done with it? I also saw someone do the following: there was a categorical feature Pclass with three levels; he created 3 dummy variables for it and dropped the variable which had the least survival rate. I can't understand this either; I thought decision trees didn't care about correlated features. | If you just map levels to numeric values, Python will treat your values as numeric; that is, numerically 1 < 2 and so on, even if your levels were initially unordered. Think about the "distance" problem: the distance between 1 and 2 is 1, between 1 and 3 is 2. But what were the original distances between your categorical values? For example, what are the distances between "banana", "peach" and "apple"? Do you suppose that they are all equal? About dummy variables: if you have 3 classes and create 3 dummy variables, they are not just correlated, they are linearly dependent (each one is fully determined by the other two). This is never good, which is why one of the dummies is usually dropped.
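In pandas this is one line; a small sketch assuming a DataFrame df with the Pclass column (drop_first removes one level and with it the exact linear dependence):

import pandas as pd

dummies = pd.get_dummies(df['Pclass'], prefix='Pclass', drop_first=True)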
Python - Enforce specific method signature for subclasses? I would like to create a class which defines a particular interface, and then require all subclasses to conform to this interface. For example, I would like to define a class

class Interface:
    def __init__(self, arg1):
        pass

    def foo(self, bar):
        pass

and then be assured that if I am holding any element a which has type A, a subclass of Interface, then I can call a.foo(2) and it will work. It looked like this question almost addressed the problem, but in that case it is up to the subclass to explicitly change its metaclass. Ideally what I'm looking for is something similar to Traits and Impls from Rust, where I can specify a particular Trait and a list of methods that trait needs to define, and then I can be assured that any object with that Trait has those methods defined. Is there any way to do this in Python? | So, first, just to state the obvious: Python has a built-in mechanism to test for the existence of methods and attributes in derived classes; it just does not check their signature.

Second, a nice package to look at is zope.interface. Despite the zope namespace, it is a completely stand-alone package that allows really neat ways of having objects that can expose multiple interfaces, but just when needed, freeing up the namespaces afterwards. It does involve some learning until one gets used to it, but it can be quite powerful and provides very nice patterns for large projects. It was devised for Python 2, when Python had a lot fewer features than nowadays, and I think it does not perform automatic interface checking (one has to manually call a method to find out whether a class is compliant), but automating this call would be easy nonetheless.

Third, the linked accepted answer at How to enforce method signature for child classes? almost works, and could be good enough with just one change. The problem with that example is that it hardcodes a call to type to create the new class, and does not pass type.__new__ information about the metaclass itself. Replace the line:

return type(name, baseClasses, d)

with:

return super().__new__(cls, name, baseClasses, d)

Then make the base class (the one defining your required methods) use the metaclass; it will be inherited normally by any subclasses (just use Python 3 syntax for specifying metaclasses). Sorry, that example is Python 2; it requires a change on another line as well, so I'd better repost it:

import inspect
from types import FunctionType

# from https://stackoverflow.com/a/23257774/108205
class SignatureCheckerMeta(type):
    def __new__(mcls, name, baseClasses, d):
        # For each method in d, check to see if any base class already
        # defined a method with that name. If so, make sure the
        # signatures are the same.
        for methodName in d:
            f = d[methodName]
            for baseClass in baseClasses:
                try:
                    fBase = getattr(baseClass, methodName)
                    if not inspect.getargspec(f) == inspect.getargspec(fBase):
                        raise BadSignatureException(str(methodName))
                except AttributeError:
                    # This method was not defined in this base class,
                    # so just go to the next base class.
                    continue
        return super().__new__(mcls, name, baseClasses, d)

(BadSignatureException is assumed to be defined elsewhere.) On reviewing that, I see that there is no mechanism in it to enforce that a method is actually implemented. I.e. if a method with the same name exists in the derived class, its signature is enforced, but if it does not exist at all in the derived class, the code above won't find out about it (and the method on the superclass will be called, which might be a desired behavior).

The answer:

Fourth, although that will work, it can be a bit rough, since any method that overrides another method in any superclass will have to conform to its signature, and even compatible signatures would break. Maybe it would be nicer to build upon the existing ABCMeta and @abstractmethod mechanisms, as those already handle all the corner cases. Note, however, that this example is based on the code above and checks signatures at class creation time, while the abstract-class mechanism in Python checks them when the class is instantiated. Leaving it untouched lets you work with a large class hierarchy, which might keep some abstract methods in intermediate classes, with only the final, concrete classes implementing all methods. Just use this instead of ABCMeta as the metaclass for your interface classes, and mark the methods you want checked as @abstractmethod as usual.

import inspect
from abc import ABCMeta, abstractmethod

class M(ABCMeta):
    def __init__(cls, name, bases, attrs):
        errors = []
        for base_cls in bases:
            for meth_name in getattr(base_cls, "__abstractmethods__", ()):
                orig_argspec = inspect.getfullargspec(getattr(base_cls, meth_name))
                target_argspec = inspect.getfullargspec(getattr(cls, meth_name))
                if orig_argspec != target_argspec:
                    errors.append(
                        f"Abstract method {meth_name!r} not implemented with "
                        f"correct signature in {cls.__name__!r}. Expected {orig_argspec}.")
        if errors:
            raise TypeError("\n".join(errors))
        super().__init__(name, bases, attrs)
Not all parameters were used in the SQL statement when using python and mysql hi, I am working with python and mysql on this project. I initialize the database and try to create the table record, but it seems I cannot load data into the table. Can anyone here help me out with this?

import mysql.connector

mydb = mysql.connector.connect(
    host="localhost", user="root", password="asd619248636", database="mydatabase")
mycursor = mydb.cursor()
mycursor.execute("CREATE TABLE record (temperature FLOAT(20), humidity FLOAT(20))")
sql = "INSERT INTO record (temperature,humidity) VALUES (%d, %d)"
val = (2.3, 4.5)
mycursor.execute(sql, val)
mydb.commit()
print(mycursor.rowcount, "record inserted.")

and the error shows "Not all parameters were used in the SQL statement":

mysql.connector.errors.ProgrammingError: Not all parameters were used in the SQL statement | Changing the following should fix your problem:

sql = "INSERT INTO record (temperature,humidity) VALUES (%s, %s)"
val = ("2.3", "4.5")  # You can also use (2.3, 4.5)
mycursor.execute(sql, val)

The MySQL connector expects the %s placeholder for every parameter, whatever its Python type, and converts the values to the appropriate database type itself. Your code throws the error because %d and %f are not recognized as placeholders, so the two parameters you passed are never consumed by the statement.
syntax_error:update for dictionary How can I fix this?

# E.g. word_count("I am that I am") gets back a dictionary like:
# {'i': 2, 'am': 2, 'that': 1}
# Lowercase the string to make it easier.
# Using .split() on the sentence will give you a list of words.
# In a for loop of that list, you'll have a word that you can
# check for inclusion in the dict (with "if word in dict"-style syntax).
# Or add it to the dict with something like word_dict[word] = 1.

def word_count(string):
    word_list = string.split()
    word_dict = {}
    for word in word_list:
        if word in word_dict:
            word_dict.update(word:word_dict(word)+1)
        else:
            word_dict[word] = 1
    return word_dict

Disclaimer: A total newbie in Python | To update a key in a dictionary, just assign to the key using [...] subscription syntax:

word_dict[word] = word_dict[word] + 1

or even

word_dict[word] += 1

Your attempt is not valid syntax, for two reasons:

word_dict.update() is a method call; everything inside the (...) call syntax must be a valid expression, and key: value is not a stand-alone expression. It is only valid within a {key: value} dictionary display. word_dict.update() takes either a dictionary object or a sequence of (key, value) pairs.

word_dict(word) would try to call the dictionary rather than retrieve the value for the key word.

Using word_dict.update() to update just one key is a little overkill, because it requires creating another dictionary or sequence. Either one of the following would work:

word_dict.update({word: word_dict[word] + 1})

or

word_dict.update([(word, word_dict[word] + 1)])

Note that the Python standard library comes with a better solution for counting words: the collections.Counter() class:

from collections import Counter

def word_count(string):
    return Counter(string.split())

A Counter() is a subclass of dict.
Python opencv not receiving camera feed I've been trying to use the SimpleCV (www.simplecv.org) module to run image recognition and manipulation. Unfortunately, my incoming video feed has been quite finicky, and I'm not sure what I did wrong. Just using some basic sample code:

import cv

window = cv.NamedWindow("camera", 1)
capture = cv.CreateCameraCapture(0)
width = int(cv.GetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_WIDTH))
height = int(cv.GetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_HEIGHT))

while 1:
    img = cv.QueryFrame(capture)
    cv.ShowImage("camera", img)
    k = cv.WaitKey(1)
    if(k == 102):
        cv.destroyWindow("camera")
        break

This works perfectly when I plug in my Logitech Webcam 500. However, when I attempt to use my Vimicro Altair camera, I get a grey screen, and when saving to file, the file is empty. I also attempted to use SimpleCV code, based off their cookbook, along the lines of:

mycam = Camera()
img = mycam.getImage()

which was equally unsuccessful; however, instead of returning no data, it simply returned an image that was completely black. I'm at quite a loss as to what is causing this. I tried the exact same setup on my laptop, which failed to even get an image from the Logitech cam. I'm running Windows 7 64-bit with Python 2.7 and SimpleCV 1.1. Thanks | I'm one of the SimpleCV developers. It appears you are trying to use the standard python openCV wrapper. What I recommend doing is just running the example here:

https://github.com/sightmachine/SimpleCV/blob/develop/SimpleCV/examples/display/simplecam.py

Or here is the code as well:

import time, webbrowser
from SimpleCV import *

# create JPEG streamers
js = JpegStreamer(8080)
cam = Camera()
cam.getImage().save(js)
webbrowser.open("http://localhost:8080", 2)

while (1):
    i = cam.getImage()
    i.save(js)
    time.sleep(0.01)  # yield to the webserver
Imputer on some Dataframe columns in Python I am learning how to use Imputer in Python. This is my code:

df = pd.DataFrame([["XXL", 8, "black", "class 1", 22],
                   ["L", np.nan, "gray", "class 2", 20],
                   ["XL", 10, "blue", "class 2", 19],
                   ["M", np.nan, "orange", "class 1", 17],
                   ["M", 11, "green", "class 3", np.nan],
                   ["M", 7, "red", "class 1", 22]])
df.columns = ["size", "price", "color", "class", "boh"]

from sklearn.preprocessing import Imputer
imp = Imputer(missing_values="NaN", strategy="mean")
imp.fit(df["price"])
df["price"] = imp.transform(df["price"])

However this raises the following error:

ValueError: Length of values does not match length of index

What's wrong with my code? Thanks for helping | This is because Imputer is meant to work on 2-D inputs such as DataFrames, not 1-D Series. A possible solution is:

imp = Imputer(missing_values="NaN", strategy="mean")
imp.fit(df[["price"]])
df["price"] = imp.transform(df[["price"]]).ravel()

# Or even
imp = Imputer(missing_values="NaN", strategy="mean")
df["price"] = imp.fit_transform(df[["price"]]).ravel()
Why is it dataframe.head() in python and head(dataframe) in R? Why is python like this in general? Beginner here. Shouldn't the required variables be passed as arguments to the function? Why is it variable.function() in python? | It's simple:

foo.bar() does the same thing as foo.__class__.bar(foo)

so it is a function, and the argument is passed to it, but the function is stored attached to the object via its class (type), so to say. The foo.bar() notation is just shorthand for the above. The advantage is that different functions of the same name can be attached to many objects, with the one that runs depending on the object's type. So the caller of foo.bar() is calling whatever function is attached to the object by the name "bar". This is called polymorphism and can be used for all sorts of things, such as generic programming. Such functions are called methods. The style is called object orientation, although object orientation as well as generic programming can also be achieved using more familiar-looking function (method) call notation (e.g. multimethods in Common Lisp and Julia, or type classes in Haskell).
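A tiny sketch of the equivalence (the class and names are made up for illustration):

class Dog:
    def bark(self):
        return "woof"

d = Dog()
print(d.bark())     # 'woof'
print(Dog.bark(d))  # 'woof' -- the same call, with the receiver passed explicitly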
Adding default file directory to FileDialog in Traits I am using the FileDialog class within TraitsUI, which works pretty well, except that for the life of me I have not been able to figure out how to pass a default directory for the dialogue to use. Ideally, the dialogue box would open at a point in the local file system other than the top of the tree. Any insight or direction very gratefully appreciated from a newbie. The base code is pretty generic/standard, as follows:

demo_id = 'traitsui.demo.standard_editors.file_dialog.file_info'

class FileDialog(HasTraits):
    # The name of the selected file:
    file_name = File

    # The button used to display the file dialog:
    open = Button('Open...')

    #-- Traits View Definitions ------------------------------------------------
    view = View(
        HGroup(
            Item('open', show_label=False),
            '_',
            Item('file_name', style='readonly', springy=True)
        ),
        width=0.5
    )

    #-- Traits Event Handlers --------------------------------------------------
    def _open_changed(self):
        """ Handles the user clicking the 'Open...' button. """
        file_name = open_file(extensions=FileInfo(), id=demo_id)
        if file_name != '':
            self.file_name = file_name | I suggest not using the TraitsUI FileDialog. I think you'll do better with pyface.api.FileDialog (toolkit-specific; for the API, see https://github.com/enthought/pyface/blob/master/pyface/i_file_dialog.py).
Why is PySide's exception handling extending this object's lifetime? tl;dr -- In a PySide application, an object whose method throws an exception will remain alive even when all other references have been deleted. Why? And what, if anything, should one do about this?

In the course of building a simple CRUDish app using a Model-View-Presenter architecture with a PySide GUI, I discovered some curious behavior. In my case:

The interface is divided into multiple Views -- i.e., each tab page displaying a different aspect of data might be its own class of View

Views are instantiated first, and in their initialization, they instantiate their own Presenter, keeping a normal reference to it

A Presenter receives a reference to the View it drives, but stores this as a weak reference (weakref.ref) to avoid circularity

No other strong references to a Presenter exist. (Presenters can communicate indirectly with the pypubsub messaging library, but this also stores only weak references to listeners, and is not a factor in the MCVE below.)

Thus, in normal operation, when a View is deleted (e.g., when a tab is closed), its Presenter is subsequently deleted as its reference count becomes 0

However, a Presenter of which a method has thrown an exception does not get deleted as expected. The application continues to function, because PySide employs some magic to catch exceptions. The Presenter in question continues to receive and respond to any View events bound to it. But when the View is deleted, the exception-throwing Presenter remains alive until the whole application is closed. An MCVE (link for readability):

import logging
import sys
import weakref

from PySide import QtGui


class InnerPresenter:
    def __init__(self, view):
        self._view = weakref.ref(view)
        self.logger = logging.getLogger('InnerPresenter')
        self.logger.debug('Initializing InnerPresenter (id:%s)' % id(self))

    def __del__(self):
        self.logger.debug('Deleting InnerPresenter (id:%s)' % id(self))

    @property
    def view(self):
        return self._view()

    def on_alert(self):
        self.view.show_alert()

    def on_raise_exception(self):
        raise Exception('From InnerPresenter (id:%s)' % id(self))


class OuterView(QtGui.QMainWindow):
    def __init__(self, *args, **kwargs):
        super(OuterView, self).__init__(*args, **kwargs)
        self.logger = logging.getLogger('OuterView')
        # Menus
        menu_bar = self.menuBar()
        test_menu = menu_bar.addMenu('&Test')
        self.open_action = QtGui.QAction('&Open inner', self, triggered=self.on_open, enabled=True)
        test_menu.addAction(self.open_action)
        self.close_action = QtGui.QAction('&Close inner', self, triggered=self.on_close, enabled=False)
        test_menu.addAction(self.close_action)

    def closeEvent(self, event, *args, **kwargs):
        self.logger.debug('Exiting application')
        event.accept()

    def on_open(self):
        self.setCentralWidget(InnerView(self))
        self.open_action.setEnabled(False)
        self.close_action.setEnabled(True)

    def on_close(self):
        self.setCentralWidget(None)
        self.open_action.setEnabled(True)
        self.close_action.setEnabled(False)


class InnerView(QtGui.QWidget):
    def __init__(self, *args, **kwargs):
        super(InnerView, self).__init__(*args, **kwargs)
        self.logger = logging.getLogger('InnerView')
        self.logger.debug('Initializing InnerView (id:%s)' % id(self))
        self.presenter = InnerPresenter(self)
        # Layout
        layout = QtGui.QHBoxLayout(self)
        alert_button = QtGui.QPushButton('Alert!', self, clicked=self.presenter.on_alert)
        layout.addWidget(alert_button)
        raise_button = QtGui.QPushButton('Raise exception!', self, clicked=self.presenter.on_raise_exception)
        layout.addWidget(raise_button)
        self.setLayout(layout)

    def __del__(self):
        super(InnerView, self).__del__()
        self.logger.debug('Deleting InnerView (id:%s)' % id(self))

    def show_alert(self):
        QtGui.QMessageBox(text='Here is an alert').exec_()


if __name__ == '__main__':
    logging.basicConfig(level=logging.DEBUG)
    app = QtGui.QApplication(sys.argv)
    view = OuterView()
    view.show()
    sys.exit(app.exec_())

Open and close the inner view, and you'll see both view and presenter are deleted as expected. Open the inner view, click the button to trigger an exception on the presenter, then close the inner view. The view will be deleted, but the presenter won't be until the application exits.

Why? Presumably whatever it is that catches all exceptions on behalf of PySide is storing a reference to the object that threw it. Why would it need to do that?

How should I proceed (aside from writing code that never causes exceptions, of course)? I have enough sense not to rely on __del__ for resource management. I get that I have no right to expect anything subsequent to a caught-but-not-really-handled exception to go ideally, but this just strikes me as unnecessarily ugly. How should I approach this in general? | The problem is sys.last_traceback and sys.last_value. When a traceback is raised interactively, and this seems to be what is emulated here, the last exception and its traceback are stored in sys.last_value and sys.last_traceback respectively. Doing

del sys.last_value
del sys.last_traceback
# for consistency, see
# https://docs.python.org/3/library/sys.html#sys.last_type
del sys.last_type

will free the memory. It's worth noting that at most one exception-and-traceback pair can get cached. This means that, because you're sane and don't rely on del, there isn't a massive amount of damage to be done. But if you want to reclaim the memory, just delete those values.
Pass object along with object method to function I know that in Python if, say, you want to pass two parameters to a function, one an object and another that specifies the instance method to be called on the object, you can pass the object itself along with the name of the method (as a string), then use the getattr function on the object and the string to call the method. Now I want to know if there is a way (as in C++, for those who know) where you pass the object as well as an actual reference to the method, not the method name as a string. An example:

def func(obj, method):
    obj.method()

I have tried passing it as follows:

func(obj, obj.method)

or as

func(obj, classname.method)

but neither works (the second one I know was a bit of a long shot, but I tried it anyway). I know that you can also just define a function that accepts only the method, then call it as func2(obj.method), but I am specifically asking about cases where you want a reference to the object itself, as well as a reference to a desired class instance (not static) method to be called on the object. | A method is just a function with the first parameter bound to an instance. As such you can do things like:

# normal call
result = "abc".startswith("a")

# creating a bound method
method = "abc".startswith
result = method("a")

# using the raw function
function = str.startswith
string = "abc"
result = function(string, "a")
Django rest framework is taking too long to return nested serialized data We have four models which are related. While returning the queryset, serializing the data is too slow (serializer.data). Below are our models and serializers. Why is the django nested serializer taking so long to return a rendered response? What are we doing wrong here?

Note: our DB lies in AWS; when connected from an EC2 instance it is OK, but when tried from my localhost it is insanely slow. And the size of the JSON it returns is 700KB.

models.py

class ServiceType(models.Model):
    service_name = models.CharField(max_length=100)
    description = models.TextField()
    is_active = models.BooleanField(default=1)

class Service(models.Model):
    service_name = models.CharField(max_length=100)
    service_type = models.ForeignKey(ServiceType, related_name="type_of_service")
    min_duration = models.IntegerField()  ## duration in mins

class StudioProfile(models.Model):
    studio_group = models.ForeignKey(StudioGroup, related_name="studio_of_group")
    name = models.CharField(max_length=120)

class StudioServices(models.Model):
    studio_profile = models.ForeignKey(StudioProfile, related_name="studio_detail_for_activity")
    service = models.ForeignKey(Service, related_name="service_in_studio")

class StudioPicture(models.Model):
    studio_profile = models.ForeignKey(StudioProfile, related_name="pic_of_studio")
    picture = models.ImageField(upload_to='img_gallery', null=True, blank=True)

serializers.py

class ServiceTypeSerializer(serializers.ModelSerializer):
    class Meta:
        model = ServiceType
        fields = ('id', 'service_name')

class ServiceSerializer(serializers.ModelSerializer):
    service_type = ServiceTypeSerializer()

    class Meta:
        model = Service
        fields = ('id', 'service_type', 'service_name')

class StudioServicesSerializer(serializers.ModelSerializer):
    service = ServiceSerializer()

    class Meta:
        model = StudioServices
        fields = ('service', 'price', 'is_active', 'mins_takes')

class StudioPictureSerializer(serializers.ModelSerializer):
    class Meta:
        model = StudioPicture
        fields = ('picture',)

class StudioProfileSerializer(serializers.ModelSerializer):
    studio_detail_for_activity = StudioServicesSerializer(many=True)
    pic_of_studio = StudioPictureSerializer(many=True)

    class Meta:
        model = StudioProfile
        fields = ('id', 'name', 'studio_detail_for_activity', 'pic_of_studio')

views.py

class StudioProfileView(ListAPIView):
    serializer_class = StudioProfileSerializer
    model = StudioProfile

    def get_queryset(self):
        try:
            queryset = self.model.objects.all()
        except Exception:
            logger_error.error(traceback.format_exc())
            return None
        else:
            return queryset | Have you checked which part is the slow one? For instance, how many records do you have in that DB? I would run the query on its own and check whether the query itself is slow, then profile the serializers with fewer than 100 records, and so on. I'd recommend reading this article in order to evaluate how to profile your API: http://www.dabapps.com/blog/api-performance-profiling-django-rest-framework/

Regards
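For what it's worth, nested serializers like these typically trigger an N+1 query pattern: one extra query per studio for its services, its pictures, and each service's type. The usual first step is to prefetch the related rows in get_queryset(); a sketch using the related_names from the models above:

def get_queryset(self):
    return StudioProfile.objects.prefetch_related(
        'studio_detail_for_activity__service__service_type',
        'pic_of_studio',
    )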
Pandas json_normalize and null values in JSON

I have this sample JSON:

{
  "name": "John",
  "age": 30,
  "cars": [
    { "name": "Ford", "models": ["Fiesta", "Focus", "Mustang"] },
    { "name": "BMW", "models": ["320", "X3", "X5"] },
    { "name": "Fiat", "models": ["500", "Panda"] }
  ]
}

When I need to convert JSON to a pandas DataFrame I use the following code:

import json
from pandas.io.json import json_normalize
from pprint import pprint

with open('example.json', encoding="utf8") as data_file:
    data = json.load(data_file)

normalized = json_normalize(data['cars'])

This code works well, but when some cars are empty (null values) I cannot normalize the JSON. Example of such JSON:

{
  "name": "John",
  "age": 30,
  "cars": [
    { "name": "Ford", "models": ["Fiesta", "Focus", "Mustang"] },
    null,
    { "name": "Fiat", "models": ["500", "Panda"] }
  ]
}

Error that was thrown:

AttributeError: 'NoneType' object has no attribute 'keys'

I tried to ignore errors in json_normalize, but that didn't help:

normalized = json_normalize(data['cars'], errors='ignore')

How should I handle null values in JSON?

| You can fill cars with empty dicts to prevent this error. Since data['cars'] is a plain list here (not a pandas Series), a list comprehension does it:

data['cars'] = [{} if car is None else car for car in data['cars']]
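Put together, a minimal end-to-end sketch, assuming the second JSON above is saved as example.json (note that in recent pandas versions the same function is also available as pd.json_normalize):

import json
from pandas.io.json import json_normalize

with open('example.json', encoding='utf8') as data_file:
    data = json.load(data_file)

# blank out null entries before normalizing; each {} becomes a row of NaNs
cars = [{} if car is None else car for car in data['cars']]
normalized = json_normalize(cars)
print(normalized)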
extracting data from json using python

Extracting Data from JSON. The program will prompt for a URL, read the JSON data from that URL using urllib, then parse and extract the comment counts from the JSON data and compute the sum of the numbers in the file.

Sample data: http://python-data.dr-chuck.net/comments_42.json (Sum=2553)

Data Format: the data consists of a number of names and comment counts in JSON as follows:

{
  comments: [
    {
      name: "Matthias"
      count: 97
    },
    {
      name: "Geomer"
      count: 97
    }
    ...
  ]
}

Basically, the JSON file reads as a dictionary. One of the dictionary's values is a list, and this list has dictionaries in it; I need to find values from them. My code, where I am stuck, is:

import json
import urllib

total = 0
url = 'http://python-data.dr-chuck.net/comments_42.json'
uh = urllib.urlopen(url).read()
info = json.loads(uh)
for items in info[1]:
    #print items
    print items[1:]

| You could try:

import json
import urllib

total = 0
url = 'http://python-data.dr-chuck.net/comments_42.json'
uh = urllib.urlopen(url).read()
info = json.loads(uh)

count_values = [el['count'] for el in info['comments']]
name_values = [el['name'] for el in info['comments']]
print count_values
print name_values

output of count_values:

[97, 97, 90, 90, 88, 87, 87, 80, 79, 79, 78, 76, 76, 72, 72, 66, 66, 65, 65, 64, 61, 61, 59, 58, 57, 57, 54, 51, 49, 47, 40, 38, 37, 36, 36, 32, 25, 24, 22, 21, 19, 18, 18, 14, 12, 12, 9, 7, 3, 2]

output of name_values:

[u'Romina', u'Laurie', u'Bayli', u'Siyona', u'Taisha', u'Alanda', u'Ameelia', u'Prasheeta', u'Asif', u'Risa', u'Zi', u'Danyil', u'Ediomi', u'Barry', u'Lance', u'Hattie', u'Mathu', u'Bowie', u'Samara', u'Uchenna', u'Shauni', u'Georgia', u'Rivan', u'Kenan', u'Hassan', u'Isma', u'Samanthalee', u'Alexa', u'Caine', u'Grady', u'Anne', u'Rihan', u'Alexei', u'Indie', u'Rhuairidh', u'Annoushka', u'Kenzi', u'Shahd', u'Irvine', u'Carys', u'Skye', u'Atiya', u'Rohan', u'Nuala', u'Maram', u'Carlo', u'Japleen', u'Breeanna', u'Zaaine', u'Inika']
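The assignment ultimately asks for the sum of the counts; building on the answer's count_values list, that is one more step. This addition is mine, not part of the original answer:

total = sum(count_values)
print total  # 2553 for the sample file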
Simpler way with datetime time deltas?

Thanks in advance! I've got a function I wrote that generates and appends a URL to a list in the form of "http://www.examplesite.com/'year' + '-' + 'month'", appending a string format of the given year for each month. The function works just fine for what I'm trying to do, but I'm wondering if there's a simpler way to go about it using Python 3's datetime module, possibly working with time deltas.

source = 'https://www.examplesite.com/'
year = 2017
month = ['12', '11', '10', '09', '08', '07', '06', '05', '04', '03', '02', '01']

while year >= 1989:
    for entry in month:
        page = source + str(year) + '-' + entry
        pageRepository.append(page)
    year -= 1

| You have to subtract 1 to decrease the year even when using a datetime object:

>>> from datetime import date
>>> print(date.today().year - 1)

The result is 2016. I think the way you process the year is good enough. I just want to simplify the month handling, using range() rather than a hard-coded month list:

>>> for month in range(12, 0, -1):
...     str(month).zfill(2)
...
'12'
'11'
'10'
'09'
'08'
'07'
'06'
'05'
'04'
'03'
'02'
'01'

str.zfill(width): Return a copy of the string left filled with ASCII '0' digits to make a string of length width.
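For what it's worth, a datetime-only variant is possible too. date and timedelta have no month arithmetic, so this sketch (my own, not from the answer) steps the year/month fields manually:

from datetime import date

source = 'https://www.examplesite.com/'
pageRepository = []
d = date(2017, 12, 1)
while d >= date(1989, 1, 1):
    pageRepository.append('{}{}-{:02d}'.format(source, d.year, d.month))
    # step back one month by adjusting the fields directly
    d = date(d.year - 1, 12, 1) if d.month == 1 else date(d.year, d.month - 1, 1)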
Ajax query not working in python django?

I want to change the status of data coming from a table, but it seems like I have messed up some code.

my ajax request:

function changeStatusDataById(object) {
    var baseURL = location.protocol + '//' + location.hostname + (location.port ? ':' + location.port : '');
    var r = confirm("Are you sure you want to change status?");
    if (r == true) {
        var requestData = {};
        var action = object.getAttribute("action");
        var id = object.getAttribute("id");
        requestData.action = action;
        requestData.id = id;
        $.ajax({
            url: baseURL + 'promoted-user/list/changeStatus/',
            method: 'POST',
            dataType: "json",
            contentType: "application/json",
            data: JSON.stringify(requestData),
            beforeSend: function () {
                var text = 'changing status . please wait..';
                ajaxLoaderStart(text);
            },
            success: function (data) {
                ajaxLoaderStop();
                location.reload();
            },
            error: function (jqXHR, ex) {
                ajaxLoaderStop();
            }
        });
    }
    return false;
}

my django url:

url(r'^promoted-user/list/changeStatus/$', delete.promoter_change_status, name='promoter-change-status')

my views:

@login_required
@csrf_exempt
def promoter_change_status(request):
    response_data = dict()
    message = ''
    status = "ERROR"
    if request.method == "GET":
        message = "GET method is not allowed"
    if request.method == "DELETE":
        message = "Delete method is not allowed"
    if request.method == "POST":
        request_data = body_loader(request.body)
        print 'hello'
        try:
            action = request_data['action']
            id = request_data['id']
            if action is not None and id is not None and action != '' and id != '':
                status = "OK"
                message = "Status Changed successfully........."
                if action == "newsDelete":
                    object_data = News.objects.using("cms").get(id=id)
                    object_data.status = not object_data.status
                    object_data.save()
                    messages.success(request, 'Status Changed successfully')
            else:
                message = "action and id is required field................."
        except ObjectDoesNotExist:
            status = "ERROR"
            message = "id does not exist..........."
        except Exception as e:
            print e
            message = e.message + " is required field................."
    response_data['message'] = message
    response_data['status'] = status
    return HttpResponse(json.dumps(response_data))

calling ajax on td of table:

<td class="text-center">
    <a href="#" class="fg_red changeStatusDataById" data-toggle="modal" action="{{ object_name }}" id="{{ item.newsId.id }}">
        <i class="fa fa-trash"></i>
    </a>
</td>

But it's not working; even the 'hello' is never printed from my view.

| I was just missing a / on the URL when calling ajax:

$.ajax({
    url: baseURL + '/promoted-user/list/changeStatus/',
    method: 'POST',
    dataType: "json",
    contentType: "application/json",
    data: JSON.stringify(requestData),
    beforeSend: function () {
        var text = 'changing status . please wait..';
        ajaxLoaderStart(text);
    },

The rest of the code is fine.
Access coastal outlines (e.g. from Basemap, or somewhere else) without installing Basemap

I would like to have polygons or vertices of coastlines on the Earth to manipulate in Blender (and in stand-alone Python), but I would like to avoid installing Basemap into each of the multiple Pythons on my computer. Basically it looks a bit tricky to do once, much less four times.

All I want is points along coastline contours, say at 1 or even 10 kilometer (1000m or 10000m) resolution. I'm assuming they would be in latitude/longitude, and in that case I would just convert to x, y, z in space myself.

I've downloaded Basemap - is there any way I can access the contours directly in the data folder? An alternate data source would also be acceptable.

| I found a simple solution which does not involve Basemap or the like, thanks to the answer in GIS.stackexchange here. I am reposting some of the info:

The answer by @artwork21 is the accepted answer. I am just adding some supplementary information that others may find useful.

I downloaded some coastline data from the link provided in the answer. In this example, I used physical vector data from here. Then, reading about pyshp, I just copy/pasted the script shapefile.py and did the following:

import matplotlib.pyplot as plt
from shapefile import Reader  # Reader is defined in pyshp's shapefile.py

coast = Reader("ne_50m_coastline")

plt.figure()
for shape in coast.shapes()[:20]:  # first 20 shapes out of 1428 total
    x, y = zip(*shape.points)
    plt.plot(x, y)
plt.xlim(110, 180)
plt.ylim(-40, 20)
plt.savefig("Australia Australia Australia Australia we love ya' Amen")  # https://www.youtube.com/watch?v=_f_p0CgPeyA&feature=youtu.be&t=121
plt.show()
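Since the question mentions converting latitude/longitude to x, y, z for Blender, a minimal spherical-Earth sketch (mine, not from the answer) could look like this:

import math

def lonlat_to_xyz(lon_deg, lat_deg, radius=1.0):
    # spherical approximation; radius=1.0 yields unit-sphere coordinates for Blender
    lon = math.radians(lon_deg)
    lat = math.radians(lat_deg)
    x = radius * math.cos(lat) * math.cos(lon)
    y = radius * math.cos(lat) * math.sin(lon)
    z = radius * math.sin(lat)
    return x, y, z

# e.g. convert every vertex of one shape from the Reader above
# verts = [lonlat_to_xyz(lon, lat) for lon, lat in shape.points]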
Evaluating the performance gain from multi-threading in python

I tried to compare the performance gain from parallel computing using the threading module against normal sequential computing, but couldn't find any real difference. Here's what I did:

import time, threading, Queue

q = Queue.Queue()

def calc(_range):
    exponent = (x**5 for x in _range)
    q.put([x**0.5 for x in exponent])

def calc1(_range):
    exponent = (x**5 for x in _range)
    return [x**0.5 for x in exponent]

def multithreds(threadlist):
    d = []
    for x in threadlist:
        t = threading.Thread(target=calc, args=([x]))
        t.start()
        t.join()
        s = q.get()
        d.append(s)
    return d

threads = [range(100000), range(200000)]

start = time.time()
#out = multithreads(threads)
out1 = [calc1(x) for x in threads]
end = time.time()
print end - start

Timing using threading: 0.9390001297
Timing running in sequence: 0.911999940872

The timing running in sequence was consistently lower than using multithreading. I have a feeling there's something wrong with my multithreading code. Can someone point me in the right direction please? Thanks.

| The reference implementation of Python (CPython) has a so-called global interpreter lock (GIL), under which only one thread executes Python bytecode at a time. You can switch, for example, to IronPython, which has no GIL, or you can take a look at the multiprocessing module, which spawns several Python processes that can execute your code independently. In some scenarios using threads in Python can even be slower than a single thread, because the context switches between threads on the CPU also introduce some overhead. Take a look at this page for some deeper insights and help. If you want to dive much deeper into this topic I can highly recommend this talk by David Beazley.
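Two details compound here: the GIL, and the fact that the question's loop calls t.join() immediately after t.start(), which waits for each thread to finish before the next one starts, so the threads never run concurrently at all. A minimal sketch of the multiprocessing route the answer recommends, written for Python 3 with illustrative sizes:

import time
from multiprocessing import Pool

def calc(_range):
    exponent = (x**5 for x in _range)
    return [x**0.5 for x in exponent]

if __name__ == "__main__":
    work = [range(100000), range(200000)]
    start = time.time()
    with Pool(processes=2) as pool:  # one worker process per chunk
        results = pool.map(calc, work)
    print(time.time() - start)

For CPU-bound work like this, separate processes sidestep the GIL, so wall-clock time should drop roughly with the number of cores used.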