questions | answers |
---|---|
Reading and Processing Large CSVs with Python I have a question that is similar in spirit to this previously asked question. Nonetheless, I can't seem to figure out a suitable solution. Input: I have CSV data that looks like id,prescriber_last_name,prescriber_first_name,drug_name,drug_cost1000000001,Smith,James,AMBIEN,1001000000002,Garcia,Maria,AMBIEN,2001000000003,Johnson,James,CHLORPROMAZINE,10001000000004,Rodriguez,Maria,CHLORPROMAZINE,20001000000005,Smith,David,BENZTROPINE MESYLATE,1500Output: from this I simply need to output each drug, the total cost which is summed over all prescriptions and I need to get a count of the unique number of prescribers. drug_name,num_prescriber,total_costAMBIEN,2,300.0CHLORPROMAZINE,2,3000.0BENZTROPINE MESYLATE,1,1500.0I was able to accomplish this pretty easily with Python. However, when I try to run my code with a much larger (1gb) input, my code does not terminate in a reasonable amount of time.import sys, csvdef duplicate_id(id, id_list): if id in id_list: return True else: return Falsedef write_file(d, output): path = output # path = './output/top_cost_drug.txt' with open(path, 'w', newline='') as csvfile: fieldnames = ['drug_name', 'num_prescriber', 'total_cost'] writer = csv.DictWriter(csvfile, fieldnames=fieldnames) writer.writeheader() for key, value in d.items(): print(key, value) writer.writerow({'drug_name': key, 'num_prescriber': len(value[0]), 'total_cost': sum(value[1])})def read_file(data): # TODO: https://codereview.stackexchange.com/questions/88885/efficiently-filter-a-large-100gb-csv-file-v3 drug_info = {} with open(data) as csvfile: readCSV = csv.reader(csvfile, delimiter=',') next(readCSV) for row in readCSV: prescriber_id = row[0] prescribed_drug = row[3] prescribed_drug_cost = float(row[4]) if prescribed_drug not in drug_info: drug_info[prescribed_drug] = ([prescriber_id], [prescribed_drug_cost]) else: if not duplicate_id(prescriber_id, drug_info[prescribed_drug][0]): drug_info[prescribed_drug][0].append(prescriber_id) drug_info[prescribed_drug][1].append(prescribed_drug_cost) else: drug_info[prescribed_drug][1].append(prescribed_drug_cost) return(drug_info)def main(): data = sys.argv[1] output = sys.argv[2] drug_info = read_file(data) write_file(drug_info, output)if __name__ == "__main__": main()I am having trouble figuring out how to refactor this to handle the larger input and was hoping someone could take a look and provide me some suggestions for how to solve this problem. | IF you can use pandas, Please try the following. Pandas reads your file and store them in dataframe. It is much faster than our manual file processing using iterator.import pandas as pddf = pd.read_csv('sample_data.txt')columns = ['id','drug_name','drug_cost']df1 = df[columns]gd = df1.groupby('drug_name')cnt= gd.count()s=gd.sum()out = s.join(cnt,lsuffix='x')out['total_cost']=out['drug_costx']out['num_prescriber']=out['drug_cost']fout = out[['num_prescriber','total_cost']]fout.to_csv('out_data.csv')I am getting the following output.drug_name,num_prescriber,total_costAMBIEN,2,300BENZTROPINE MESYLATE,1,1500CHLORPROMAZINE,2,3000Hope this helps. |
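A variant of the pandas answer above that computes the unique-prescriber count directly with named aggregation (pandas ≥ 0.25). The file names are the same placeholders used in the answer, and `nunique` is used because the question asks for the number of unique prescribers rather than a plain row count:

```python
import pandas as pd

df = pd.read_csv('sample_data.txt')

out = (df.groupby('drug_name')
         .agg(num_prescriber=('id', 'nunique'),   # unique prescriber ids per drug
              total_cost=('drug_cost', 'sum'))    # summed cost per drug
         .reset_index())

out.to_csv('out_data.csv', index=False)
```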
Installing pyOpt on python in ubuntu I have downloaded pyOpt from its website and installed it on python in ubuntu, using the instructions on the website. Still, I cannot import and use it in my pycharm projects. | I had the same problem, but I did not dare to try Akin's solution for my research project (I think it is a good solution if anyone wants to stick to PyOpt package with python3). I used Pyomo instead.By the way, Laurent's answer actually points to another package. PyOpt and pyopt are two different packages. |
convert dataframe from wide layout to SQL-style slim layout How can I convert a dataframe like this: a b c0 1.067683 -1.110463 0.2086701 -1.321405 0.368915 -1.0553422 -0.807333 0.082980 -0.873361into det value0 a 1.0676831 a -1.3214052 a -0.8073333 b -1.1104634 b 0.3689155 b 0.0829806 b 0.0829807 c 0.2086708 c -1.0553429 c -0.873361 | You can do this with melt:In [11]: from pandas.core.reshape import meltIn [12]: melt(df)Out[12]: variable value0 a 1.0676831 a -1.3214052 a -0.8073333 b -1.1104634 b 0.3689155 b 0.0829806 c 0.2086707 c -1.0553428 c -0.873361 |
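The import in the answer (`pandas.core.reshape`) is an internal path from an older pandas; in current versions the same reshape is available as a DataFrame method. A minimal runnable sketch:

```python
import pandas as pd

df = pd.DataFrame({
    'a': [1.067683, -1.321405, -0.807333],
    'b': [-1.110463, 0.368915, 0.082980],
    'c': [0.208670, -1.055342, -0.873361],
})

# wide -> long: each column becomes (variable, value) rows
long_df = df.melt(var_name='det', value_name='value')
print(long_df)
```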
dataframe Sort_values giving improper results Hi I am trying to get the top 10 values.I would like to get the top 10 attendance and punctuality by staff.here's my code for sorting:newData =data.sort_values("Attendance", ascending=False).head(10)I'm not sure why am I getting 8.82 and 8.57. Please advise. I tried Nth largest but it didn't work for me. | Because values in column Attendance are strings, so sorted in lexicographic order.So need convert them to numeric:data['Attendance'] = data['Attendance'].astype(float)#if possible some non numeric values convert them to NaNs#data['Attendance'] = pd.to_numeric(data['Attendance'], errors='coerce')newData = data.sort_values("Attendance", ascending=False).head(10) |
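A small runnable sketch of that fix, using made-up attendance values to show why string sorting places '8.82' above '10.5':

```python
import pandas as pd

data = pd.DataFrame({
    'Staff': ['A', 'B', 'C', 'D'],
    'Attendance': ['8.82', '10.5', '8.57', '9.75'],   # strings: '8.82' > '10.5' lexicographically
})

# coerce to numbers; errors='coerce' turns unparseable values into NaN instead of raising
data['Attendance'] = pd.to_numeric(data['Attendance'], errors='coerce')

newData = data.sort_values('Attendance', ascending=False).head(10)
print(newData)
```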
PyGears How to make counter I want to make a counter which can start counting at posedge of a specific signal (enable signal). And once it counts to 256, stop counting, set the counter to 0 and output something. | When designing with PyGears, you should try to think more in terms of functions (although asynchronous) that are being invoked by receiving commands via input interfaces. Instead of thinking of the enable signal that triggers the counter, try to think of a counter you described as a function that receives a number to count to as an input command and outputs something once it's done counting. There are two ways we could go about implementing this:An async gear, in which we write procedural code to describe the logic from pygears import gear, sim from pygears.typing import Uint from pygears.sim import log from pygears.lib import once, qrange @gear async def counter(cmd: Uint, *, something) -> b'type(something)': async with cmd as c: async for i in qrange(c): pass yield something @gear async def collect(data): async with data as d: log.info(f'Got something: {d}') once(val=256) \ | counter(something=Uint[8](2)) \ | collect sim()Here, the counter has an interface called cmd, where a number of cycles to count is received, and a compile time parameter called something, which will be sent through the output interface once the counting is done. In the body of the function, we first wait for the input command to arrive: async with cmd as c:, next we wait for the counter to finish and finally we output something. qrange is a builtin gear and you are welcome to inspect its implementation here: rng.pyThen there's the collect gear which simply prints out whatever it receives. Finally, we generate the command for the counter using once, and feed the output of the counter to the collect module. When sim() is invoked, we get the following output showing that collect got something in the cycle 255: 0 [INFO]: -------------- Simulation start -------------- 255 /collect [INFO]: Got something: u8(2) 256 [INFO]: ----------- Simulation done --------------- 256 [INFO]: Elapsed: 0.03 We could use a hierarchical gear to connect existing builtin modules to achieve the same thing: from pygears import gear, sim from pygears.typing import Uint from pygears.sim import log from pygears.lib import once, qrange, when @gear def counter(cmd: Uint, *, something): last = qrange(cmd)['eot'] return when(last, something) @gear async def collect(data): async with data as d: log.info(f'Got something: {d}') once(val=256) \ | counter(something=Uint[8](2)) \ | collect sim()Everything is pretty much the same, except that the counter isn't defined with the async keyword, meaning that it is a hierarchical gear and only describes interconnection between its submodules. The qrange gear outputs both the iterator and a flag, called eot (for end-of-transaction), which marks the last count. We feed eot (via eot variable) to the when gear that will only output something when it sees value True on eot interface. |
How to open a file in binary mode in google storage bucket from cloud function? In my cloud function, I need to get a file in cloud storage and send the file to an API through HTTP POST request. I tried the following code:storage_client = storage.Client()bucket = storage_client.bucket(BUCKET_NAME)source_blob_name = "/compressed_data/file_to_send.7z"blob = bucket.blob(source_blob_name)url = UPLOADER_BACKEND_URLfiles = {'upload_file': blob}values = {'id': '1', 'ouid': OUID}r = requests.post(url, files=files, data=values)It gave an error saying:Traceback (most recent call last): File "/env/local/lib/python3.7/site-packages/google/cloud/functions/worker_v2.py", ... \ line 90, in encode_multipart_formdata body.write(data) TypeError: a bytes-like object is required, not 'Blob'If this code was to run on an actual VM, the following would work:url = UPLOADER_BACKEND_URLfiles = {'upload_file': open('/tmp/file_to_send.7z','rb')}values = {'id': '1', 'name': 'John'}r = requests.post(url, files=files, data=values)So the question is: In cloud functions, how can I load a file from cloud storage such that it has the same output as the python open(filename, 'rb') function?I know that I can do blob.download_to_file() and then open() the file, but I'm wondering if there is a quicker way. | In your Cloud Functions reference, you don't provide the Blob content to the API call but only the Blob reference (file path + Bucket name).You can, indeed download the file locally in the in memory file system /tmp directory. and then handle this tmp file as any file. Don't forget to delete it after the upload!!You can also have a try to the gcsfs library where you can handle files in a python idiomatic way. I never tried to do this when I call an API, but it should work. |
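If the file is small enough to hold in memory, the temp-file step can also be skipped by downloading the blob's bytes and handing a file-like object to requests. A hedged sketch assuming the google-cloud-storage client (`download_as_bytes()` is `download_as_string()` in older client versions); the bucket name, URL and ouid here are placeholders:

```python
import io
import requests
from google.cloud import storage

BUCKET_NAME = 'my-bucket'                             # placeholder
UPLOADER_BACKEND_URL = 'https://example.com/upload'   # placeholder

storage_client = storage.Client()
bucket = storage_client.bucket(BUCKET_NAME)
blob = bucket.blob('compressed_data/file_to_send.7z')

# pull the object into memory and wrap it in a file-like object
payload = io.BytesIO(blob.download_as_bytes())

files = {'upload_file': ('file_to_send.7z', payload)}
values = {'id': '1', 'ouid': 'placeholder-ouid'}
r = requests.post(UPLOADER_BACKEND_URL, files=files, data=values)
print(r.status_code)
```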
Tkinter image application keeps freezing system after it runs I'm testing an app with the following code:#!/usr/bin/env python3import osfrom tkinter import *from tkinter import filedialogfrom PIL import Image, ImageTkroot = Tk()root.title("Image Viewer App")root.withdraw()location_path = filedialog.askdirectory()root.resizable(0, 0)#Load files in directory pathim=[]def load_images(loc_path): for path,dirs,filenames in os.walk(loc_path): for filename in filenames: im.append(ImageTk.PhotoImage(Image.open(os.path.join(path, filename))))load_images(location_path)root.geometry("700x700")#Display test image with Labellabel=Label(root, image=im[0])label.pack()root.mainloop()The problem is that when I run it, my system will freeze and Linux distro will crash. I'm not able to tell what I'm doing wrong except I'm not sure if it's a good idea to store an entire image within a list variable vs just storing the location itself. Right now, it's just testing the ability to open one image with img=[0]. | The loading of images may take time and cause the freeze. Better to run load_images() in a child thread instead:import osimport threadingimport tkinter as tkfrom tkinter import filedialogfrom PIL import Image, ImageTkroot = tk.Tk()root.geometry("700x700")root.title("Image Viewer App")root.resizable(0, 0)root.withdraw()#Display test image with Labellabel = tk.Label(root)label.pack()location_path = filedialog.askdirectory()root.deiconify() # show the root window#Load files in directory pathim = []def load_images(loc_path): for path, dirs, filenames in os.walk(loc_path): for filename in filenames: im.append(ImageTk.PhotoImage(file=os.path.join(path, filename))) print(f'Total {len(im)} images loaded')if location_path: # run load_images() in a child thread threading.Thread(target=load_images, args=[location_path]).start() # show first image def show_first_image(): label.config(image=im[0]) if len(im) > 0 else label.after(50, show_first_image) show_first_image()root.mainloop()Note that I have changed from tkinter import * to import tkinter as tk as wildcard import is not recommended. |
Input Transformation for Keras LSTM I am working on a project to try to enhance my understanding of LSTM networks. I am following the steps outlined in this blog post here. My dataset looks like the following: Open High Low Close VolumeDate 2014-04-21 197.080002 206.199997 194.000000 204.380005 52582002014-04-22 206.360001 219.330002 205.009995 218.639999 98047002014-04-23 216.330002 216.740005 207.000000 207.990005 72956002014-04-24 210.809998 212.800003 203.199997 207.860001 54952002014-04-25 202.000000 206.699997 197.649994 199.850006 6996700As you can see this is a small snapshot of TSLA Stock movement.I understand that with LSTM, this data needs to be reshaped into three dimensions:Batch SizeTime StepsFeaturesMy initial idea was to use some sort of medium batch size (to allow for the best generalization). Also, to look back at 10 days of history as the Time Step. Features as Open, High, Low, Volume, Close.Here is where I am a bit stuck. I have two questions specifically:What is the approach for breaking the data into the new representation (transforming it)?How do we take this and split it into the train, test, and validation sets? I am having trouble conceptualizing exactly what is being broken down. My initial thought was to use sklearn:train_test_split()But this does not seem like it will work in this case.Obviously, once the data has been transformed and then split it is easy building the Keras model. It is just a matter of calling fit.(data).Any suggestions or resources (pointing in the right direction) would be greatly appreciated.My current code is:from sklearn.model_selection import train_test_split # Split the Data into Training and Testing Datatsla_train, tsla_test = train_test_split(tsla)tsla_train.shapetsla_test.shapefrom sklearn.preprocessing import MinMaxScaler# Scale the Datascaler = MinMaxScaler()scaler.fit(tsla_train)tsla_train_scaled = scaler.transform(tsla_train)tsla_test_scaled = scaler.transform(tsla_test)# Define the parameters of the modelbatch_size = 20# Set the model to look back on four days of historical data and try to predict the fifthtime_steps = 10from keras.models import Sequentialfrom keras.layers import LSTM, Denselstm_model = Sequential()There is some explanation found in this post here. | The train_test_split function would indeed not give the desired results here. It assumes that each row is an independent data point, which is not the case since you're using a single time series.The most common option would be to use earlier data points for training and later data points for testing (and a range of points in the middle for validation if applicable), which would give you the same results as if you had used all the available data for training on the last day in the training set and actually used it for predictions on the following days.Once you have the data sets split, then the idea is that each training batch will need to have the inputs and corresponding outputs for a randomly selected set of date ranges, where each input is the chosen number of days of historical data (i.e. days × features, with the full batch being batch size × days × features) and the output is just the data for the next day,Hopefully that helps with some of the intuition behind the procedure. The article you linked has examples of most of the code you would need--it's going to be pretty dense, but I would recommend trying to go line by line and understand everything it's doing, possibly even just typing it out verbatim. |
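A minimal sketch of the two steps described in the answer: a chronological train/test split instead of `train_test_split`, followed by slicing the series into (samples, time_steps, features) windows. It assumes the `tsla` DataFrame from the question with the five listed columns:

```python
import numpy as np

def make_windows(values, time_steps=10):
    """Turn a (days, features) array into (samples, time_steps, features) inputs
    with the following day's row as the target."""
    X, y = [], []
    for i in range(len(values) - time_steps):
        X.append(values[i:i + time_steps])
        y.append(values[i + time_steps])
    return np.array(X), np.array(y)

values = tsla[['Open', 'High', 'Low', 'Close', 'Volume']].values

# chronological split: earlier days for training, later days for testing
split = int(len(values) * 0.8)
train_values, test_values = values[:split], values[split:]

X_train, y_train = make_windows(train_values, time_steps=10)
X_test, y_test = make_windows(test_values, time_steps=10)
print(X_train.shape)   # (samples, 10, 5)
```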
Showing step-by-step solving of sudoku Is there any way to show the steps of solving a sudoku? My code solves it within 0.5 seconds, and I wish to modify it to show the changes on the sudoku grid step by step while it is being processed. (I am using Python) | You can store every step of solving the sudoku (e.g. the grid data) in a list. Each time you modify the sudoku state, clone a copy and append it to that global list. After the puzzle is solved, you can loop through the list and render each state with a delay of a few seconds. |
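A minimal sketch of that idea; `record()` would be called from inside the solver whenever a cell changes, and `draw` stands in for whatever rendering function the UI already has (both names are made up for illustration):

```python
import copy
import time

steps = []                                # snapshots of the grid, one per change

def record(grid):
    steps.append(copy.deepcopy(grid))     # clone so later moves don't alter old snapshots

def replay(draw, delay=0.2):
    for state in steps:
        draw(state)                       # placeholder for the real rendering call
        time.sleep(delay)                 # pause between frames
```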
How to put extras_require in setup.cfg setuptools 30.3.0 introduced declarative package config, allowing us to put most of the options we used to pass directly to setuptools.setup in setup.cfg files. For example, given following setup.cfg:[metadata]name = hello-worlddescription = Example of hello world[options]zip_safe = Falsepackages = hello_worldinstall_requires = examples example1A setup.py containing onlyimport setuptoolssetuptools.setup()will do all the right things.However, I haven't been able to figure out the correct syntax for extras_require. In setup args, it is a dictionary, likesetup(extras_require={'test': ['faker', 'pytest']})But I can't figure out the right syntax to use in setup.cfg. I tried reading the docs, but I can't find the correct syntax that setuptools expects for a dictionary there. I tried a few guesses, too[options]extras_require = test=faker,pytestit fails.Traceback (most recent call last): File "./setup.py", line 15, in <module> 'pylint', File "/lib/site-packages/setuptools/__init__.py", line 128, in setup _install_setup_requires(attrs) File "/lib/site-packages/setuptools/__init__.py", line 121, in _install_setup_requires dist.parse_config_files(ignore_option_errors=True) File "/lib/python3.6/site-packages/setuptools/dist.py", line 495, in parse_config_files self._finalize_requires() File "/lib/python3.6/site-packages/setuptools/dist.py", line 419, in _finalize_requires for extra in self.extras_require.keys():AttributeError: 'str' object has no attribute 'keys'Reading the code, I'm not 100% sure this is supported, but based on PEP 508 it seems this should be a supported use case. What am I missing? | It is supported. You need a config section:[options.extras_require]test = faker; pytestSyntax is documented here. |
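For more than a couple of extras, the dangling-list form is often easier to read; a sketch of an equivalent section (the `docs` extra is just an illustration):

```ini
[options.extras_require]
test =
    faker
    pytest
docs =
    sphinx
```

With that in place, `pip install .[test]` installs the package together with the test extras.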
Change bar order and legend order in plot (matplotlib/pandas) I would like to have the order of the legend and of the bars as the one defined in label_orderfor feat in df.columns: label_order = ['Very Low', 'Low', 'Average', 'High', 'Very High'] df.groupby('class')[feat].value_counts().unstack(0).plot.bar() plt.ylabel('Count') plt.xlabel('Score') plt.legend() plt.title('Answers to ' + str(feat) + ' divided for each risk class') plt.show()enter image description here | The order of columns is determined by the column order in the dataframe you are plotting, therefore simply reordering the columns between unstacking and plotting will do the trick:df.groupby('class')[feat].value_counts().unstack(0)[label_order].plot.bar()Here's a sample plot with and without the [label_order] addition |
rearranging 2*2 pixel images, each given by 1 by 4 numpy vectors, into a single 8 by 8 matrix without using a for loop in an assignment for a uni class i am given multiple images in vectors, and i need to display multiple of them by rearranging them into a single matrix.assume the given vectors:[[1, 2, 3, 4],[5, 6, 7, 8],[9, 10, 11, 12],[13,14,15,16]]where each pair of 4 values within a vector describes a 2x2 pixel imagemy task is to rearrange this into a 4x4 matrix:[[1,2,5,6],[3,4,7,8],[8,9,12,13],[10,11,15,16]]without using a single for loop. i have tried multiple variants of reshapes, but have no idea how to actually solve this problem. | Let's use reshape, and swapaxes:arrs = [[1, 2, 3, 4],[5, 6, 7, 8],[9, 10, 11, 12],[13,14,15,16]]np.array(arrs).reshape(2,2,2,2).swapaxes(1,2).reshape(4,4)Output:array([[ 1, 2, 5, 6], [ 3, 4, 7, 8], [ 9, 10, 13, 14], [11, 12, 15, 16]]) |
python - json keep returning JSONDecodeError when reading from file I want to write data to a json file. If it does not exists, I want to create that file, and write data to it. I wrote code for it, but I'm getting json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0).Here's part of my code:data = {"foo": "1"}with open("file.json", "w+") as f: try: file_data = json.load(f) json.dump(data, f) print("got data from file:", file_data) except json.decoder.JSONDecodeError: json.dump(data, f) print("wrote")I put print statements so that I can "track" what's going on, but If I try to run this code multiple times, I keep getting wrote message.Thanks in advance for help! | The problem is that you open the file for write/read therefore once you open the file it will be emptied.Then you want to load the content with json.load and it obviously fails because the file is not a valid JSON anymore.So I'd suggest to open the file for reading and writing separately:import jsonwith open("file.json") as json_file: file_data = json.load(json_file) print("got data from file:", file_data)data = {"foo": "1"}with open("file.json", "w") as json_file: json.dump(data, json_file)Hope it helps! |
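The question also wants the file created when it does not exist yet, which a plain read-only open would not handle; a sketch that keeps the two phases separate and covers the first run:

```python
import json

data = {"foo": "1"}

try:
    with open("file.json") as f:
        file_data = json.load(f)
    print("got data from file:", file_data)
except (FileNotFoundError, json.JSONDecodeError):
    # first run (no file yet) or unreadable contents: (re)create the file
    with open("file.json", "w") as f:
        json.dump(data, f)
    print("wrote")
```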
Regression statistics for subsets of Pandas dataframe I have a dataframe consisting of multiple years of data with multiple environmental parameters as columns. The dataframe looks like this:import pandas as pdimport numpy as npfrom scipy import statsParameters= ['Temperature','Rain', 'Pressure', 'Humidity']nrows = 365daterange = pd.date_range('1/1/2019', periods=nrows, freq='D')Vals = pd.DataFrame(np.random.randint(10, 150, size=(nrows, len(Parameters))), columns=Parameters) Vals = Vals.set_index(daterange) print(Vals)I have created a column with month names as Vals['Month'] = Vals.index.month_name().str.slice(stop=3) and I want to calculate the slope from the regression between two variables, Rain and Temperature and extract them in a dataframe. I have tried a solution as below:pd.DataFrame.from_dict({y:stats.linregress(Vals['Temperature'], Vals['Rain'])[:2] for y, x in Vals.groupby('Month')},'index').\rename(columns={0:'Slope',1:'Intercept'})But the output is not what I expected. I want the monthly regression statistics but the result is like this Slope InterceptApr -0.016868 81.723291Aug -0.016868 81.723291Dec -0.016868 81.723291Feb -0.016868 81.723291Jan -0.016868 81.723291Jul -0.016868 81.723291Jun -0.016868 81.723291Mar -0.016868 81.723291May -0.016868 81.723291Nov -0.016868 81.723291Oct -0.016868 81.723291Sep -0.016868 81.723291It seems the regression is calculated from the total dataset and stored in each month index. How can I calculate the monthly statistics from the similar process? | Here is a bit of code that I have used in the past. I used sklearn.LinearModel because I think its a bit easier to use, but you can change to scipy.stats if you like.This code uses apply and does the linear regression in the function linear_model.import pandas as pdimport numpy as npfrom sklearn.linear_model import LinearRegressiondef linear_model(group): x,y = group.Temperature.values.reshape(-1,1), group.Rain.values.reshape(-1,1) model = LinearRegression().fit(x,y) m = model.coef_ i = model.intercept_ r_sqd = model.score(x,y) return (pd.Series({ 'slope':np.squeeze(m), 'intercept':np.squeeze(i), 'r_sqd':np.squeeze(r_sqd)}))Parameters= ['Temperature','Rain', 'Pressure', 'Humidity']nrows = 365daterange = pd.date_range('1/1/2019', periods=nrows, freq='D')Vals = pd.DataFrame(np.random.randint(10, 150, size=(nrows, len(Parameters))), columns=Parameters) Vals = Vals.set_index(daterange) Vals.groupby(Vals.index.month).apply(linear_model)Result:Vals.groupby(Vals.index.month).apply(linear_model)Out[15]: slope intercept r_sqd1 -0.06334408633973578 80.98723450432585 0.0034802 -0.1393001910724248 85.40023995141723 0.0204353 -0.0535505295232336 69.09958112535743 0.0034814 0.23187299827488306 57.866651248302546 0.0487415 -0.04813654915436082 74.31295680099751 0.0018676 0.31976921541526526 48.496345031992746 0.0890277 -0.1979417421554613 94.84215558468942 0.0520238 0.22239030327077666 68.62700822940076 0.0618499 0.054607306452220644 72.0988798639258 0.00287710 -0.07841007716276265 91.9211204014171 0.00608511 -0.13517307855088803 100.44769438307809 0.01604512 -0.1967407738498068 101.7393002049148 0.042255Your attempt was close. When you use a for loop with groupby object, you group's name and data in return. 
The typical convention is:for name, group in Vals.groupby('Month'): #do stuff with groupSince you called x for name and y for group, you could change Vals to y, the code will produce the same result as above.pd.DataFrame.from_dict({y:stats.linregress(x['Temperature'], x['Rain'])[:2] for y, x in Vals.groupby('Month')},'index').\rename(columns={0:'Slope',1:'Intercept'}) Slope InterceptApr 0.231873 57.866651Aug 0.222390 68.627008Dec -0.196741 101.739300Feb -0.139300 85.400240Jan -0.063344 80.987235Jul -0.197942 94.842156Jun 0.319769 48.496345Mar -0.053551 69.099581May -0.048137 74.312957Nov -0.135173 100.447694Oct -0.078410 91.921120Sep 0.054607 72.098880 |
How do I import a python file using Ansible Playbook? This is on a Linux machine. I have a run.yml like this---- name: Appspec hosts: localhost become: true tasks: - name: test 1 script: test.pytest.py uses a python file (helper.py) by 'import helper' which is in the same path as the ansible-playbook and while running the playbook.yml it still gives me a 'Import Error: cannot import name helper'. How should I do this? | Copy both test.py and helper.py over to same directory on the remote machine (possibly to a temporary directory) and run python test.py as a command task. Something like this:- name: Create temporary directory tempfile: state: directory register: tmpdir- name: Copy test.py copy: src: /wherever/test.py dest: "{{tmpdir.path}}/test.py"- name: Copy helper.py copy: src: /wherever/helper.py dest: "{{tmpdir.path}}/helper.py"- name: Run test.py command: python test.py args: chdir: "{{tmpdir.path}}" |
How to only create relevant model fields during a django test? I am testing a method that requires me to create a fake record in my model. The model has over 40 fields. Is it possible to create a record with only the relevant model fields for the test so I don't have to populate the other fields? If so how would I apply it to this test case example. models.pyclass Contract(): company = models.CharField(max_length=255), commission_rate = models.DecimalField(max_digits=100, decimal_places=2) type = models.CharField(max_length=255) offer = models.ForeignKey('Offer', on_delete=models.PROTECT) notary = models.ForeignKey('Notary', on_delete=models.PROTECT) jurisdiction = models.ForeignKey('Jurisdiction', on_delete=models.PROTECT) active = models.BooleanField() ...test.pyimport pytestfrom app.models import Contractdef calculate_commission(company, value): contract = Contract.objects.get(company='Apple') return value * [email protected]_dbdef test_calculate_commission(): #The only two model fields I need for the test Contract.objects.create(company='Apple', commission_rate=0.2) assert calculate_commission('Apple', 100) == 20 | Try to use model_bakery to make an object record. Just populate fields you want and leave another blank, model_bakery will handle it. For the Detail, you can check this out model_bakery |
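A sketch of the test rewritten with model_bakery: `baker.make()` fills every field not passed explicitly (including the ForeignKey targets) with valid dummy values, so only the two fields the test cares about are spelled out. `calculate_commission` is the helper from the question and is assumed to be in scope:

```python
import pytest
from model_bakery import baker

from app.models import Contract


@pytest.mark.django_db
def test_calculate_commission():
    # only the relevant fields; baker generates the other ~40 automatically
    baker.make(Contract, company='Apple', commission_rate=0.2)

    assert calculate_commission('Apple', 100) == 20
```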
Find local duplicates (which follow each other) in pandas I want to find local duplicates and give them a unique id, directly in pandas.Reallife example:Time-ordered purchase data where a customer id occures multiple times (because he visits a shop multiple times a week), but I want to identify occasions where the customer purches multiple items at the same time.My current approach would look like this:def follow_ups(lst): lst2 = [None] + lst[:-1] i = 0 l = [] for e1, e2 in zip(lst, lst2): if e1 != e2: i += 1 l.append(i) return lfollow_ups(['A', 'B', 'B', 'C', 'B', 'D', 'D', 'D', 'E', 'A', 'B', 'C'])# [1, 2, 2, 3, 4, 5, 5, 5, 6, 7, 8, 9]# for pandasdf['out'] = follow_ups(df['test'])But I have the feeling there might be a much simpler and cleaner approach in pandas which I am unable to find.Pandas Sample dataimport pandas as pddf = pd.DataFrame({'test':['A', 'B', 'B', 'C', 'B', 'D', 'D', 'D', 'E', 'A', 'B', 'C']})# test# 0 A# 1 B# 2 B# 3 C# 4 B# 5 D# 6 D# 7 D# 8 E# 9 A# 10 B# 11 Cdf_out = pd.DataFrame({'test':['A', 'B', 'B', 'C', 'B', 'D', 'D', 'D', 'E', 'A', 'B', 'C'], 'out':[1, 2, 2, 3, 4, 5, 5, 5, 6, 7, 8, 9]})# test out# 0 A 1# 1 B 2# 2 B 2# 3 C 3# 4 B 4# 5 D 5# 6 D 5# 7 D 5# 8 E 6# 9 A 7# 10 B 8# 11 C 9 | You can compare whether your column test is not equal to it's shifted version, using shift() with ne(), and use cumsum() on that:df['out'] = df['test'].ne(df['test'].shift()).cumsum()Which prints:df test out0 A 11 B 22 B 23 C 34 B 45 D 56 D 57 D 58 E 69 A 710 B 811 C 9 |
Can not connect to smtp.gmail.com in Django I'm trying to send email using smtp.gmail.com in Django project.This is my email settings.settings.py# Email SettingsEMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'EMAIL_HOST = 'smtp.gmail.com'EMAIL_USE_TLS = TrueEMAIL_PORT = 587EMAIL_HOST_USER = '[email protected]'EMAIL_HOST_PASSWORD = 'mygooglepassword'views.py...send_mail( "message title", "message content", "[email protected]", ["[email protected]"], fail_silently=False)Whenever I try to send email, I get this errorgaierror at /contact-us/[Errno-2] Name or service not knownI tried the followings.I set my google account's less secure app access on.I unchecked avast antivirus setting 'Setting->Protection->Core Shields->Mail Shield->Scan outbound emails(SMTP)'Tried different ports in email settings. 587 and 25Switched the ssl and tls in email settings.But it's not sending yet. When I use 'django.core.mail.backends.console.EmailBackend' instead of 'django.core.mail.backends.smtp.EmailBackend', it prints email on console.I double checked my gmail username and password on settings.Please help me.Thank you. | You may need to do some configuration on Google side.Reference answer::Go to your Google Account settings, find Security -> Account permissions -> Access for less secure apps, enable this option.https://accounts.google.com/DisplayUnlockCaptcha |
displaying flask input() to a html template how can i display an input to my web application? Ive tried many ways but not succesfully...import randomimport refrom flask import Flask, render_templateapp = Flask(__name__)app.debug = [email protected]("/")def index(): return render_template("play.html")@app.route("/hangman")def hangman(): answer = input("Hi, wanna play? Y/N: ") return render_template("game.html", answer=answer) | In game.html template you should put an input tag:<form method="post" action="/want-to-play"> <input type="text" placeholder="Do you want to play?" /> <input type="submit" value="OK"/></form>Then just put an endpoint /want-to-play in your flask app with what you want to do. |
Scraping multiple anchor tags which are under the same header/class I am trying to scrape the top episode data from IMDB and extract the name of the show and the name of the episode. However I am facing an issue where the show name and episode name are both anchor tags which are under the same header. Screenshot of elementHere is the code:url = "https://www.imdb.com/search/title/?title_type=tv_episode&num_votes=1000,&sort=user_rating,desc&ref_=adv_prv"response = requests.get(url)soup = BeautifulSoup(response.content, 'html.parser')series_name = []episode_name = []episode_data = soup.findAll('div', attrs={'class': 'lister-item mode-advanced'})for store in episode_data: sName = store.h3.a.text series_name.append(sName) # eName = store.h3.a.text # episode_name.append(eName)Anyone know how to get through this problem? | in the last part you should specify morefor store in episode_data: h3=store.find('h3', attrs={'class': 'lister-item-header'}) sName =h3.findAll('a')[0].text series_name.append(sName) eName = h3.findAll('a')[1].text episode_name.append(eName)note that the name of 'attack of titan' has been changed to it's Japanese name!!, which is different than the html that has been shown in the browser and I don't know why!?! |
How do I map values to values with a common key in Python In the dictionaries below I want to check whether the value in aa matches the value in bb and produce a mapping of the keys of aa to the keys of bb. Do I need to rearrange the dictionaries? I import the data from a tab separated file, so I am not attached to dictionaries. Note that aa is about 100 times bigger than bb (100k lines for aa), but this is to be run infrequently and offline. Input:aa = {1: 'a', 3: 'c', 2 : 'b', 4 : 'd'}bb = {'apple': 'a', 'pear': 'b', 'mango' : 'g'}Desired output (or any similar data structure):dd = {1 : 'apple', 2 : 'pear'} | aa = {1:'a', 3:'c', 2:'b', 4:'d'}bb = {'apple':'a', 'pear':'b', 'mango': 'g'}bb_rev = dict((value, key) for key, value in bb.iteritems()) # bb.items() in python3dd = dict((key, bb_rev[value]) for key, value in aa.iteritems() # aa.items() in python3 if value in bb_rev)print dd |
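The `iteritems()` calls in the answer are Python 2 only (as its comments note); the same idea in Python 3 with dict comprehensions:

```python
aa = {1: 'a', 3: 'c', 2: 'b', 4: 'd'}
bb = {'apple': 'a', 'pear': 'b', 'mango': 'g'}

# invert bb so keys can be looked up by value
bb_rev = {value: key for key, value in bb.items()}

# map aa's keys to bb's keys through the shared values
dd = {key: bb_rev[value] for key, value in aa.items() if value in bb_rev}
print(dd)   # {1: 'apple', 2: 'pear'}
```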
Find most recent date from different dataframe I have a data frame (df1) and want to get a previous most recent survey_date for the ID and associated score from another data frame (df2)df1 = pd.DataFrame({'ID' : [1,2], 'start_date':['2018-08-04','2018-08-09']})df1df2 = pd.DataFrame({'ID' : [1,1,2,2], 'survey_date':['2018-08-01','2018-08-05','2018-08-08','2018-08-10'], 'score':[200,100, 400, 800]})df2 desired outputIDstart dateprev_survey_datescore12018-08-042018-08-0120022018-08-092018-08-08400How can I do this in python? | You can try merge_asof#df1.start_date = pd.to_datetime(df1.start_date)#df2.survey_date = pd.to_datetime(df2.survey_date)out = pd.merge_asof(df1, df2, by = 'ID', left_on = 'start_date', right_on = 'survey_date')Out[366]: ID start_date survey_date score0 1 2018-08-04 2018-08-01 2001 2 2018-08-09 2018-08-08 400 |
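A complete runnable version of that answer, with the commented-out datetime conversion applied and an explicit sort (merge_asof requires both frames to be sorted on the keys used for the asof match):

```python
import pandas as pd

df1 = pd.DataFrame({'ID': [1, 2], 'start_date': ['2018-08-04', '2018-08-09']})
df2 = pd.DataFrame({'ID': [1, 1, 2, 2],
                    'survey_date': ['2018-08-01', '2018-08-05', '2018-08-08', '2018-08-10'],
                    'score': [200, 100, 400, 800]})

df1['start_date'] = pd.to_datetime(df1['start_date'])
df2['survey_date'] = pd.to_datetime(df2['survey_date'])

# both frames must be sorted on the asof keys
df1 = df1.sort_values('start_date')
df2 = df2.sort_values('survey_date')

out = pd.merge_asof(df1, df2, by='ID', left_on='start_date', right_on='survey_date')
print(out)
#    ID start_date survey_date  score
# 0   1 2018-08-04  2018-08-01    200
# 1   2 2018-08-09  2018-08-08    400
```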
Return list of all cell addresses within a Range I have a list of Ranges (loaded from an Excel workbook via openpyxl) in a list (e.g., rng_list = ['$A$1:$A$3', '$B$1:$B$3', '$C$1:$C$3']) and I would like to "unpack" each of those ranges into separate lists within a list of lists (i.e., unpacked_list = [['$A$1','$A$2','$A$3'], ['$B$1','$B$2','$B$3'], ['$C$1','$C$2','$C$3']]).Please see the code below on what I have tried so far in a Jupyter Notebook. Any thoughts on why I am getting the error below? or if you have ideas on how I might want to approach this from a different angle, that would be much appreciated! Thanks! import os from openpyxl import Workbook from openpyxl.utils import get_column_letter # create temp worksheet wb_A = Workbook() sheet_A = wb_A.create_sheet('sheetA') # list with Excel ranges as str items in list rng_list = ['$A$1:$B$10', '$C$1:$D$10', '$E$1:$F$10'] temp_list = [] unpacked_list = [] for item in rng_list: for row in sheet_A(item): # use range from item in rng_list to iterate through range in temp worksheet for cell in row: x = cell.row y = cell.column addr = get_column_letter(y) + str(x) temp_list.append(addr) unpacked_list.append(addr) # delete temp worksheet wb_A.remove(sheet_A) unpacked_listI was hoping to use the range str from the list to iterate through a "dummy worksheet" created just to iterate through the cell range and capture the corresponding cell addresses within the range. I get the following error:---------------------------------------------------------------------------TypeError Traceback (most recent call last)<ipython-input-85-13b28d369550> in <module> 14 15 for item in rng_list:---> 16 for row in sheet_A(item): # use range from item in rng_list to iterate through range in temp worksheet 17 for cell in row: 18 x = cell.rowTypeError: 'Worksheet' object is not callable | After correcting syntax error in my original code (thanks, Rahasya Prabhakar!), I modified my original code to work as needed.Specifically, I needed to redefine the '''temp_list''' as an empty list at the start of the initial For loop, and append to the '''unpacked_list''' at the end of the initial For loop to obtain the list of list of unpacked ranges as desired.''' import os from openpyxl import Workbook from openpyxl.utils import get_column_letter# create temp worksheetwb_A = Workbook() sheet_A = wb_A.create_sheet('sheetA')# list with Excel ranges as str items in listrng_list = ['$A$1:$B$10', '$C$1:$D$10', '$E$1:$F$10']temp_list = []unpacked_list = []for item in rng_list: temp_list=[] for row in sheet_A[item]: # use range from item in rng_list to iterate through range in temp worksheet for cell in row: x = cell.row y = cell.column addr = get_column_letter(y) + str(x) temp_list.append(addr) unpacked_list.append(temp_list)# delete temp worksheetwb_A.remove(sheet_A)print(unpacked_list)''' |
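The same unpacking can also be done without a temporary worksheet, by parsing the range strings directly with openpyxl's coordinate helpers; a sketch that walks each range row by row, like the loop above:

```python
from openpyxl.utils import get_column_letter, range_boundaries

rng_list = ['$A$1:$B$10', '$C$1:$D$10', '$E$1:$F$10']

unpacked_list = []
for item in rng_list:
    min_col, min_row, max_col, max_row = range_boundaries(item)
    unpacked_list.append([
        get_column_letter(col) + str(row)
        for row in range(min_row, max_row + 1)    # row by row...
        for col in range(min_col, max_col + 1)    # ...then across the columns
    ])

print(unpacked_list)
```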
Pandas datetime subtraction - assigning NaN values If I have code as below, df['variance'] = (pd.to_datetime(df.last_date) - pd.to_datetime(df.first_date)) / np.timedelta64(1, 'M'), this gives me the number of months, but if one of the columns does not have a date, the result for that value is NaN. Is there a way I can assign the NaN a certain value like 'Void'? So instead of the number of months, I would see 'Void' as the value? Thanks | This should do it:df = df.fillna(value='Void') |
Django Filewrapper memory error serving big files, how to stream I have code like this:@login_requireddef download_file(request): content_type = "application/octet-stream" download_name = os.path.join(DATA_ROOT, "video.avi") with open(download_name, "rb") as f: wrapper = FileWrapper(f, 8192) response = HttpResponse(wrapper, content_type=content_type) response['Content-Disposition'] = 'attachment; filename=blabla.avi' response['Content-Length'] = os.path.getsize(download_name) # response['Content-Length'] = _file.size return responseIt seems that it works. However, If I download bigger file (~600MB for example) my memory consumption increase by this 600MB. After few such a downloads my server throws: Internal Server Error: /download/ Traceback (most recent call last): File "/home/matous/.local/lib/python3.5/site-packages/django/core/handlers/exception.py", line 35, in inner response = get_response(request) File "/home/matous/.local/lib/python3.5/site-packages/django/core/handlers/base.py", line 128, in _get_response response = self.process_exception_by_middleware(e, request) File "/home/matous/.local/lib/python3.5/site-packages/django/core/handlers/base.py", line 126, in _get_response response = wrapped_callback(request, *callback_args, **callback_kwargs) File "/home/matous/.local/lib/python3.5/site-packages/django/contrib/auth/decorators.py", line 21, in _wrapped_view return view_func(request, *args, **kwargs) File "/media/matous/89104d3d-fa52-4b14-9c5d-9ec54ceebebb/home/matous/phd/emoapp/emoapp/mainapp/views.py", line 118, in download_file response = HttpResponse(wrapper, content_type=content_type) File "/home/matous/.local/lib/python3.5/site-packages/django/http/response.py", line 285, in init self.content = content File "/home/matous/.local/lib/python3.5/site-packages/django/http/response.py", line 308, in content content = b''.join(self.make_bytes(chunk) for chunk in value) MemoryErrorWhat I am doing wrong? Is it possible to configure it somehow to stream it the piece by piece from hard-drive without this insane memory storage?Note: I know that big files should not be served by Django, but I am looking for simple approach that allows to verify user access rights for any served file. | Try to use StreamingHttpResponse instead, that will help, it is exactly what you are looking for.Is it possible to configure it somehow to stream it the piece by piece from hard-drive without this insane memory storage?import osfrom django.http import StreamingHttpResponsefrom django.core.servers.basehttp import FileWrapper #django <=1.8from wsgiref.util import FileWrapper #django >1.8@login_requireddef download_file(request): file_path = os.path.join(DATA_ROOT, "video.avi") filename = os.path.basename(file_path) chunk_size = 8192 response = StreamingHttpResponse( FileWrapper(open(file_path, 'rb'), chunk_size), content_type="application/octet-stream" ) response['Content-Length'] = os.path.getsize(file_path) response['Content-Disposition'] = "attachment; filename=%s" % filename return responseThis will stream your file in chunks without loading it in memory; alternatively, you can use FileResponse, which is a subclass of StreamingHttpResponse optimized for binary files. |
How to use Python to get all cookies from web? Input import requestsfrom http import cookiejarheaders = {'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64;rv:57.0) Gecko/20100101 Firefox/57.0'}url = "http://www.baidu.com/"session = requests.Session()req = session.put(url = url,headers=headers)cookie = requests.utils.dict_from_cookiejar(req.cookies)print(session.cookies.get_dict())print(cookie)Gives output: {'BAIDUID': '323CFCB910A545D7FCCDA005A9E070BC:FG=1', 'BDSVRTM': '0'} {'BAIDUID': '323CFCB910A545D7FCCDA005A9E070BC:FG=1'}as here.I try to use this code to get all cookies from the Baidu website but only return the first cookie. I compare it with the original web cookies(in the picture), it has 9 cookies. How can I get all the cookies? | You didn't maintain your session, so it terminated after the second cookie.import requestsfrom http import cookiejarheaders = {'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64;rv:57.0) Gecko/20100101 Firefox/57.0'}url = "http://www.baidu.com/"with requests.Session() as s: req = s.get(url, headers=headers) print(req.cookies.get_dict())>> print(req.cookies.get_dict().keys())>>> ['BDSVRTM', 'BAIDUID', 'H_PS_PSSID', 'BIDUPSID', 'PSTM', 'BD_HOME'] |
How to show only a few Many-to-many relations in DRF? If, for example, I have 2 models and a simple View:class Recipe(models.Model): created_at = models.DateField(auto_now_add=True)class RecipeBook(models.Model): recipes = models.ManyToManyField(Recipe)...class RecipeBookList(ListAPIView): queryset = RecipeBook.objects.all() serializer_class = RecipeBookSerializer...class RecipeBookSerializer(serializers.ModelSerializer): recipe = RecipeSerializer(many=True, read_only=True) class Meta: model = RecipeBook fields = "__all__"What would be the best way, when showing all RecipeBooks with a simple GET method, to show only the first 5 recipes created and not all of them? | QuerySet way:You can specify a custom Prefetch operation in your queryset to limit the prefetched related objects:queryset.prefetch_related(Prefetch('recipes', queryset=Recipe.objects.all()[:5]))Docs: https://docs.djangoproject.com/en/3.2/ref/models/querysets/#prefetch-objectsSerializer way:You can use source to provide a custom method that returns a custom querysetclass RecipeBookSerializer(serializers.ModelSerializer): recipes = RecipeSerializer(many=True, read_only=True, source='get_recipes') class Meta: model = RecipeBook fields = "__all__" def get_recipes(self, obj): return obj.recipes.all()[:5]Then use prefetch_related("recipes") to minimize related queries.Source: django REST framework - limited queryset for nested ModelSerializer?The problem with the serializer way is that either a related query for recipes is performed per recipe book object or all related recipes are pre-fetched from the beginning. |
Pandas DataFrame replace() method for row numbers I need to replace some values in a column with a specific value, using a list of the row numbers of the required values, like the following array. Can I use DataFrame.replace() for that?row_numbers = [4, 7, 15, 18, 49, 60, 78, 80] | You can use loc:df.loc[row_numbers, 'col'] = 3. In case your index labels are not the row numbers themselves, use position-based indexing instead:df['col'].iloc[row_numbers] = 3 |
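A small runnable sketch of both variants from the answer, on a throwaway frame (the column name and the replacement value 3 are placeholders):

```python
import pandas as pd

df = pd.DataFrame({'col': range(100)})
row_numbers = [4, 7, 15, 18, 49, 60, 78, 80]

# label-based: works when the index labels are the row numbers themselves
df.loc[row_numbers, 'col'] = 3

# position-based: works for any index (dates, strings, ...)
df.iloc[row_numbers, df.columns.get_loc('col')] = 3

print(df.loc[row_numbers])
```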
How is this Python function read? Wikipedia has the following example code for softmax.>>> import numpy as np>>> z = [1.0, 2.0, 3.0, 4.0, 1.0, 2.0, 3.0]>>> softmax = lambda x : np.exp(x)/np.sum(np.exp(x))>>> softmax(z)array([0.02364054, 0.06426166, 0.1746813 , 0.474833 , 0.02364054 , 0.06426166, 0.1746813 ])When I run it, it runs successfully. I don't understand how to read the lambda function. In particular, how can the parameter x refer to an array element in the numerator and span all the elements in the denominator?[Note: The question this question presumably duplicates is about lambdas in general. This question is not necessarily about lambda. It is about how to read the np conventions. The answers by @Paul Panzer and @Mihai Alexandru-Ionut both answer my question. Too bad I can't check both simultaneously as answering the question.To confirm that I understand their answers (and to clarify what my question was about):x is the entire array (as it should be since the array is passed as the argument). np.exp(x) returns the array with each element x[i] replaced by np.exp(x[i]). Call that new array x_new.x_new/np.sum(x_new) divides each element of x_new by the sum of x_new.] | Three remarks.The use of lambda in the example is actually bad style, cf. this paragraph from the Python style guide: Always use a def statement instead of an assignment statement that binds a lambda expression directly to an identifier. Yes: def f(x): return 2*x No: f = lambda x: 2*x The first form means that the name of the resulting function object is specifically 'f' instead of the generic ''. This is more useful for tracebacks and string representations in general. The use of the assignment statement eliminates the sole benefit a lambda expression can offer over an explicit def statement (i.e. that it can be embedded inside a larger expression)Re the content. What you are seeing is array arithmetic. np.exp is a numpy ufunc it operates element-wise, so it will return an array of the same shape as its argument. np.sum is a reducing function, when called with an array as its sole argument it will return a scalar. The / operator is overloaded with a binary ufunc; like np.exp it operates element-wise. In addition, it does broadcasting: In this case the scalar denominator will be paired with every element of the array numerator resulting in an array.And finally: Here is how to implement the softmax properly. |
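The same computation written as a named function (per the style-guide point in the answer) with the intermediate arrays spelled out, which makes the element-wise numerator versus scalar denominator explicit:

```python
import numpy as np

def softmax(x):
    x = np.asarray(x)
    e = np.exp(x)            # element-wise: e[i] = exp(x[i]), same shape as x
    return e / np.sum(e)     # np.sum collapses to a scalar, broadcast across e

z = [1.0, 2.0, 3.0, 4.0, 1.0, 2.0, 3.0]
print(softmax(z))            # the outputs sum to 1.0
```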
Problems with converting a Python script into a Windows service I already have a python script that runs continuously. It's very similar to this: https://github.com/walchko/Black-Hat-Python/blob/master/BHP-Code/Chapter10/file_monitor.pySimilar as in, when running it as a script, it opens a CMD which shows some data when stuff happens - it's not user-interactible so it's not mandatory that it shows (just in case someone wishes to point out that windows services can't have interfaces)I've tried to convert it to a service. It starts for a fraction of a second and then automatically stops. When trying to start it via services.msc (instead of python script.py start) it doesn't start at all, Windows error says something like: "The service on local computer started and then stopped" which sounds just about what's happening if I try to start it with the argument. I've tried modifying the script to allow it to run as a service - adding the skeleton I found here: Is it possible to run a Python script as a service in Windows? If possible, how?I've also tried just getting the skeleton script above and just trying to make it run the other script with examples from here: What is the best way to call a Python script from another Python script? Does anyone have any idea what the best course of action would be to run that script above as a service?Thanks! | Edited "...services can be automatically started when the computer boots, can be paused and restarted, and do not show any user interface." ~Introduction to Windows Service ApplicationsWindows services require the implementation to make a specific interface available:Service ProgramsServicesSo you will need to access the Windows API through Python:You can see the example code, from Python Programming On Win32, within which Chapter 18 Services (ch18_services folder) contains a sample (SmallestService.py) demonstrating the smallest possible service written in Python:# SmallestService.py# # A sample demonstrating the smallest possible service written in Python.import win32serviceutilimport win32serviceimport win32eventclass SmallestPythonService(win32serviceutil.ServiceFramework): _svc_name_ = "SmallestPythonService" _svc_display_name_ = "The smallest possible Python Service" def __init__(self, args): win32serviceutil.ServiceFramework.__init__(self, args) # Create an event which we will use to wait on. # The "service stop" request will set this event. self.hWaitStop = win32event.CreateEvent(None, 0, 0, None) def SvcStop(self): # Before we do anything, tell the SCM we are starting the stop process. self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING) # And set my event. win32event.SetEvent(self.hWaitStop) def SvcDoRun(self): # We do nothing other than wait to be stopped! win32event.WaitForSingleObject(self.hWaitStop, win32event.INFINITE)if __name__=='__main__': win32serviceutil.HandleCommandLine(SmallestPythonService)You may need to download the appropriate pywin32 wheel for your particular python environment here:http://www.lfd.uci.edu/~gohlke/pythonlibs/#pywin32And install it (system wide, from admin cmd prompt):> cd \program files\python<ver>\scripts> pip install \path\to\pywin32‑221‑cp<ver>‑cp<ver>m‑win_<arch>.whlOr install it (per user, from regular cmd prompt):> cd \program files\python<ver>\scripts> pip install --user \path\to\pywin32‑221‑cp<ver>‑cp<ver>m‑win_<arch>.whlBe sure to replace the occurences of <ver> and <arch> appropriately. |
I need a HTML front end to test and use a Django API newbie here. I followed a guide online and successfully deploy a Keras model with Django API. I wanted to create a HTML file which connected to the Django API, where I can load image into the model for processing, and then send back the prediction.Below are the codes for the API. I need someone to guide me. import datetimeimport pickleimport jsonfrom django.shortcuts import renderfrom django.http import HttpResponsefrom rest_framework.decorators import api_viewfrom api.settings import BASE_DIRfrom custom_code import image_converter@api_view(['GET'])def __index__function(request): start_time = datetime.datetime.now() elapsed_time = datetime.datetime.now() - start_time elapsed_time_ms = (elapsed_time.days * 86400000) + (elapsed_time.seconds * 1000) + (elapsed_time.microseconds / 1000) return_data = { "error" : "0", "message" : "Successful", "restime" : elapsed_time_ms } return HttpResponse(json.dumps(return_data), content_type='application/json; charset=utf-8')@api_view(['POST','GET'])def predict_plant_disease(request): try: if request.method == "GET" : return_data = { "error" : "0", "message" : "Plant Assessment System" } else: if request.body: request_data = request.data["plant_image"] header, image_data = request_data.split(';base64,') image_array, err_msg = image_converter.convert_image(image_data) if err_msg == None : model_file = f"{BASE_DIR}/ml_files/cnn_model.pkl" saved_classifier_model = pickle.load(open(model_file,'rb')) prediction = saved_classifier_model.predict(image_array) label_binarizer = pickle.load(open(f"{BASE_DIR}/ml_files/label_transform.pkl",'rb')) return_data = { "error" : "0", "data" : f"{label_binarizer.inverse_transform(prediction)[0]}" } else : return_data = { "error" : "4", "message" : f"Error : {err_msg}" } else : return_data = { "error" : "1", "message" : "Request Body is empty", } except Exception as e: return_data = { "error" : "3", "message" : f"Error : {str(e)}", } return HttpResponse(json.dumps(return_data), content_type='application/json; charset=utf-8') | If you just need to test your API, download Postman and make requests from the application. It is much easier than actually making a whole HTML script to test your API. However, if you absolutely need to test your API through a frontend app, try the steps below.You need an image upload function in your HTML code.<body> <input type="file accept="image/png, image/jpeg"/></body>Make sure you have a valid API request URL// For examplePOST <site>/v1/plant-imageMake the request from JS Script<script type="text/javascript"> var HTTP = new XMLHttpRequest(); HTTP.onreadystatechange = function () { if (HTTP.readyState === 4 && HTTP.status === 200) { console.log(HTTP.responseText); } } HTTP.open('POST', 'https://<site>/v1/plant-image', true); HTTP.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded; charset=UTF-8'); HTTP.send();</script>Since you are using DRF, make sure to handle errors by raising errors instead of returning them as responses. This can make error handling much easier. from rest_framework.exceptions import ValidationError// some code...raise ValidationError() |
flask_sqlalchemy create model from different file I am trying to define and create my models with flask_sqlalchemy.If I do it all in one script, it works:all_in_one.pyfrom config import DevConfigfrom flask import Flaskfrom flask_sqlalchemy import SQLAlchemyapp = Flask(__name__)app.config.from_object(DevConfig)app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = Falseapp.config['SQLALCHEMY_DATABASE_URI'] = app.config.get("DB_URI")db = SQLAlchemy(app)class Members(db.Model): id = db.Column(db.String, primary_key=True, nullable=False)def main(): db.drop_all() db.create_all()if __name__ == "__main__": main()The Members table is created.If I split this process into files, I can't seem to get the db object to register my Members model and do anything.root│-- config.py│-- create.py│-- database.py│-- members.pydatabase.pyfrom flask_sqlalchemy import SQLAlchemydb = SQLAlchemy()members.pyfrom database import dbclass Members(db.Model): id = db.Column(db.String, primary_key=True, nullable=False)create.pyfrom database import dbfrom config import DevConfigfrom flask import Flaskapp = Flask(__name__)app.config.from_object(DevConfig)app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = Falseapp.config['SQLALCHEMY_DATABASE_URI'] = app.config.get("DB_URI")def main(): db.init_app(app) with app.app_context(): db.drop_all() db.create_all()if __name__ == "__main__": main()The Members table does not get created. | add import members below db.init_app(app)from database import dbfrom config import DevConfigfrom flask import Flaskapp = Flask(__name__)app.config.from_object(DevConfig)app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = Falseapp.config['SQLALCHEMY_DATABASE_URI'] = app.config.get("DB_URI")def main(): db.init_app(app) import members with app.app_context(): db.drop_all() db.create_all()if __name__ == "__main__": main() |
Turning on Jinja2 extensions in Salt I'm writing a lot of Salt states and I want to use the do tag extension as suggested in this StackOverflow answer.According to the Salt docs, I should be able to edit /etc/salt/master to add these lines:jinja_env: extensions: ['jinja2.ext.do']jinja_sls_env: extensions: ['jinja2.ext.do']and then restart the salt-master service and have access to the do tag. However, I tried that and I get the same error as before, so it's not recognizing the tag.I've confirmed that the extension is available on the server by testing it at the command line:>>> import jinja2>>> jinja2.Environment(extensions=['jinja2.ext.do']).parse(open('/path/to/mytemplate.jinja').read())Template(body=[...])What am I missing? How do I configure Salt to use the {% do %} Jinja tag? | From reviewing the Salt source code, it appears that it applies these extensions automatically if they're available. The error I was getting about the template failing to render appears to be from an unrelated syntax error.So the true answer all along is that you don't have to do anything to make use of the {% do %} template tag. |
Auto set field in Django Model, depending on another user submitted field Say I have this code:from django.db import modelsclass Restaurant(models.Model): name = models.CharField(max_length=50) address = models.CharField(max_length=80)then I'm able to create a 'Place':>>> p1 = Restaurant(name='Demon Dogs', address='944 W. Fullerton')>>> p1.save()Here's the point. I'd like to have a preset dictionary like:autoadress = {'Demon Dogs':'944 W. Fullerton', 'Eat attack':'100 Green Meadows', 'Pizza Fast':'50 E. High Hill'}so that when a user creates a 'Restaurant' by only specifying it's name:>>> p1 = Restaurant(name='Demon Dogs')>>> p1.save()A new Restaurant with name='Demon Dogs' and address='944 W. Fullerton' was createdHow should I do this?thanks. | AUTOADDRESS = {'Demon Dogs':'944 W. Fullerton', 'Eat attack':'100 Green Meadows', 'Pizza Fast':'50 E. High Hill'}class Restaurant(models.Model): name = models.CharField(max_length=50) address = models.CharField(max_length=80) def clean(self): if not self.address: self.address = AUTOADDRESS.get(self.name, '') |
Realtime list of viewers in a Google Drive document I'm working on an app which wraps Google Docs (using GAE/Python), and I want to keep track of who is viewing these docs in real-time. I can't find any APIs for this in the Google Drive SDK.What's a good way to do this? Naively, I might imagine repeatedly polling each document individually and parsing the returned HTML. I expect there to be ~150 docs total in the system; would this be too inefficient? | The Realtime API is not for use with Google Docs, only with your own custom formats. Instead, see the Changes API in Drive, but you won't be able to detect viewers, only modifications: https://developers.google.com/drive/manage-changes |
How to tell if boto.sqs.Queue.write() succeeded? All documentation of this method that I can find says that Queue.write returns True or False, depending on whether the write succeeded, but this doesn't square with reality. The docs say: The write method returns a True if everything went well. If the write didn't succeed it will either return a False (meaning SQS simply chose not to write the message for some reason) or an exception if there was some sort of problem with the request.But in fact the method simply returns the message that gets passed in. Here is the relevant source code from https://github.com/boto/boto/blob/develop/boto/sqs/queue.py:def write(self, message, delay_seconds=None): """ Add a single message to the queue. :type message: Message :param message: The message to be written to the queue :rtype: :class:`boto.sqs.message.Message` :return: The :class:`boto.sqs.message.Message` object that was written. """ new_msg = self.connection.send_message(self, message.get_body_encoded(), delay_seconds) message.id = new_msg.id message.md5 = new_msg.md5 return messageMy question then is: How do I really tell if the write was successful? | The documentation quote you provide comes from the SQS tutorial. The SQS API docs correctly describe the current return value. The SQS tutorial is simply out of date and needs to be corrected. I have created an issue to track this.If the write fails for any reason, the service will return an HTTP error code which, in turn, will cause boto to raise an SQSError exception. If no exception is raised, the write was successful. |
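So in practice "success" means "no exception was raised". A sketch of checking that explicitly, in the boto 2 style the question uses (boto 2 has long since been superseded by boto3; the queue name here is a placeholder):

```python
from boto.exception import SQSError
from boto.sqs.connection import SQSConnection
from boto.sqs.message import Message

conn = SQSConnection()               # credentials from the usual boto config/env
queue = conn.get_queue('my-queue')   # placeholder queue name

msg = Message(body='hello')
try:
    queue.write(msg)
except SQSError as e:
    print('write failed:', e.status, e.reason)
else:
    print('write succeeded, message id:', msg.id)
```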
What is the difference between timesteps and features in LSTM? I have a dataframe representing numerical values in many time periods, and I have formatted that dataframe so that each row is represented as a concatenation of previous values. For example:+------+------+------+| t1 | t2 | t3 |+------+------+------+| 4 | 7 | 10 |+------+------+------+| 7 | 10 | 8 |+------+------+------+| 10 | 8 | 11 |+------+------+------+...When I format the dataset to work with an LSTM, I reshape it to a 3 dimensional vector [samples, time steps, features].But, which value do I have to put for time steps and features? Should features be 3 because I learn with the last 3 elements?At the moment I have this:trainX = numpy.reshape(trainX, (trainX.shape[0], 1, trainX.shape[1])) | Let me explain with an example. Assume we have measurements of temperature and pressure and we want to predict the temperature at some point in the future. We have two features (temperature & pressure), so we can feed them to the LSTM and try to predict. Now, I'm not sure how familiar you are with LSTM theory, but there are two variables in play: the cell state C and the previous output h(t-1). We concentrate on h(t-1). You give the LSTM cell (assume for now only one neuron) an input (temperature and pressure). The LSTM produces an output and a cell state, and now, if your time steps are set to 1, when you give the LSTM a new input, the output depends only on the cell state and the input. But if your time steps are set to five, the second input depends on the cell state, the input and the previous output. The third output depends on the second output, the cell state and the current input. This sequence continues up to the sixth input, when you again depend only on the input and the cell state. This h(t-1) mechanism is what is referred to as short-term memory. So if you set time steps to 1, you lose your short memory.Edit: My bad, I didn't look at your data the right way. You have one feature, t, and three steps. But you don't frame it the right way: you treat the three t values as separate features and feed the LSTM with them. You can instead reshape your data to samples x 3 x 1. Then you feed the LSTM with t1 of the first sample, next t2 of the first sample, but the LSTM output will be affected by the output from the previous time step.
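To make the suggested samples x 3 x 1 framing concrete, a minimal NumPy sketch using the three rows from the question:
import numpy as np

trainX = np.array([[4, 7, 10],
                   [7, 10, 8],
                   [10, 8, 11]])                    # shape (3 samples, 3 values each)
trainX = trainX.reshape((trainX.shape[0], 3, 1))    # (samples, time steps=3, features=1)
print(trainX.shape)                                 # (3, 3, 1)
# a Keras LSTM layer would then be built with input_shape=(3, 1)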
Python relations between run_until_complete and ensure_future This is a follow up question to this question:Why do most asyncio examples use loop.run_until_complete()?I'm trying to figure out how asynchronous programming work in python. There's something very basic which I'm still not sure about..when having this line code: asyncio.ensure_future(someTask) , will this line ALONE actually enqueue the Future returned in the default event loop and start the task? Or do I ALSO need to call loop.run_until_complete(someTask) (or some other kind of run) before that in order to get the event loop up and running? | asyncio.ensure_future(someTask) will this line ALONE actually enqueue the Future returned in the default event loop and start the task?It will schedule the coroutine, but it won’t run it. You still need to run the loop to do that. You can do that with loop.run_forever()If you want the loop to run until someTask is done rather than forever, usefuture = asyncio.ensure_future(someTask)loop.run_until_complete(future)Don’t call both asyncio.ensure_future(someTask) and loop.run_until_complete(someTask) or you’ll end up with a RuntimeError since someTask will have already been scheduled. |
Cleaner pandas/numpy code to find equivalency matrix? I have pandas DataFrame and would like to generate an equivalency matrix (or whatever it's called) where each cell has one value if the the df.Col[i] == df.Col[j] and another value when !=.The following code works:df = pd.DataFrame({"Col":[1, 2, 3, 1, 2]}, index=["A","B","C","D","E"])df ColA 1B 2C 3D 1E 2sm = pd.DataFrame(columns=df.index, index=df.index)for i in df.index: for j in df.index: if df.Col[i] == df.Col[j]: sm.loc[i, j] = 3 else: sm.loc[i, j] = -1sm A B C D EA 3 -1 -1 3 -1B -1 3 -1 -1 3C -1 -1 3 -1 -1D 3 -1 -1 3 -1E -1 3 -1 -1 3But there must be a better way. Perhaps using numpy? Any thoughts?[Edit]Using what piRsquared wrote, perhaps something like?m = df.values == df.values[:, 0]sm = pd.DataFrame(None, df.index, df.index).where(m, 3).where(~m, -1)Can this be improved? | v = df.valuesm = v == v[:, 0]pd.DataFrame(np.where(m, 1, -1), df.index, df.index) A B C D EA 1 -1 -1 1 -1B -1 1 -1 -1 1C -1 -1 1 -1 -1D 1 -1 -1 1 -1E -1 1 -1 -1 1 |
Send XML to activeMQ using Django I am trying to send a XML file generated using 'ElementTree' to activeMQ server using python django 'requests' library .My views.py code is :from django.shortcuts import renderimport requestsimport xml.etree.cElementTree as ET# Create your views here.def index(request): return render(request,"indexer.html")def xml(request): root = ET.Element("root") doc = ET.SubElement(root, "doc") field1 = ET.SubElement(doc,"field1") ET.SubElement(doc, "field2", fame="yeah", name="asdfasd").text = "some vlaue2" ET.SubElement(field1,"fielder", name="ksd").text = "valer" tree = ET.ElementTree(root) headers = {} tree.write("filename.xml", encoding = "us-ascii", xml_declaration = 'utf-8', default_namespace = xml, method = "xml") url = 'http://localhost:8082/testurl/' headers = {'Content-Type': 'application/xml'} files = {'file': open('filename.xml', 'rb')} requests.post(url, files=files, headers = headers) return render(request,"indexer.html")and there is a simple submit button on indexer.html page.<html><head></head><body><form method="post" action="/xml/">{% csrf_token %} <input type="submit" value="submit"></form> </body></html>When I click submit button it's generating filename.xml file and then sending it successfully to activeMQ server, but at activeMQ i am getting XML message which contains header information also . So ,is it possible to send only body part without header or how to omit header at activeMQ side and keep only body/data part ?At activeMQ I'm getting following message:--6dc760762ba245eb8e4c3d72aa38062bContent-Disposition: form-data; name="file"; filename="filename.xml"<root><doc><field1><fielder name="ksd">valer</fielder></field1><field2 fame="yeah" name="asdfasd">some vlaue2</field2></doc></root>--6dc760762ba245eb8e4c3d72aa38062b-- | I suggest looking at using the available STOMP protocol instead of HTTP. You'll have more control over message payloads and message headers.Python library: https://pypi.python.org/pypi/stomp.pyActiveMQ Support: http://activemq.apache.org/stomp.html |
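For illustration, a minimal stomp.py sketch (this follows the stomp.py 4.x API, which differs from older versions; the broker address, credentials and queue name below are placeholders, not taken from the question):
import stomp

conn = stomp.Connection([('localhost', 61613)])     # 61613 is ActiveMQ's default STOMP port
conn.connect('admin', 'password', wait=True)         # placeholder credentials
with open('filename.xml', 'rb') as f:
    body = f.read()
conn.send(destination='/queue/test', body=body,
          headers={'content-type': 'application/xml'})   # only the XML itself goes on the wire
conn.disconnect()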
Django's runscript: No (valid) module for script 'filename' found I'm trying to run a script from the Django shell using the Django-extension RunScript. I have done this before and but it refuses to recognize my new script:(env) mint@mint-VirtualBox ~/GP/GP $ python manage.py runscript fill_in_random_variantsNo (valid) module for script 'fill_in_random_variants' foundTry running with a higher verbosity level like: -v2 or -v3While running any other script works fine:(env) mint@mint-VirtualBox ~/GP/GP $ python manage.py runscript fill_in_variantsSuccess! At least, there were no errors.I have double checked that the file exists, including renaming it to something else. I have also tried running the command with non-existent script names:(env) mint@mint-VirtualBox ~/GP/GP $ python manage.py runscript thisfiledoesntexistNo (valid) module for script 'thisfiledoesntexist' foundTry running with a higher verbosity level like: -v2 or -v3and the error is the same.Why can't RunScript find my file? | RunScript has confusing error messages. It gives the same error for when it can't find a script at all and when there's an import error in the script.Here's an example script to produce the error:import nonexistrentpackagedef run(): print("Test")The example has the only stated requirement for scripts, namely a run function.Save this as test_script.py in a scripts folder (such as project root/your app/scripts/test_script.py). Then try to run it:(env) mint@mint-VirtualBox ~/GP/GP $ python manage.py runscript test_scriptNo (valid) module for script 'test_script' foundTry running with a higher verbosity level like: -v2 or -v3Which is the same error as the file not found one. Now outcomment the import line and try again:(env) mint@mint-VirtualBox ~/GP/GP $ python manage.py runscript test_scriptTestAs far as I know, the only way to tell the errors apart is to use the verbose (-v2) command line option and then look at the first (scroll up) error returned:(env) mint@mint-VirtualBox ~/GP/GP $ python manage.py runscript test_script -v2Check for www.scripts.test_scriptCannot import module 'www.scripts.test_script': No module named 'nonexistrentpackage'.Check for django.contrib.admin.scripts.test_scriptCannot import module 'django.contrib.admin.scripts.test_script': No module named 'django.contrib.admin.scripts'.Check for django.contrib.auth.scripts.test_scriptCannot import module 'django.contrib.auth.scripts.test_script': No module named 'django.contrib.auth.scripts'.Check for django.contrib.contenttypes.scripts.test_scriptCannot import module 'django.contrib.contenttypes.scripts.test_script': No module named 'django.contrib.contenttypes.scripts'.Check for django.contrib.sessions.scripts.test_scriptCannot import module 'django.contrib.sessions.scripts.test_script': No module named 'django.contrib.sessions.scripts'.Check for django.contrib.messages.scripts.test_scriptCannot import module 'django.contrib.messages.scripts.test_script': No module named 'django.contrib.messages.scripts'.Check for django.contrib.staticfiles.scripts.test_scriptCannot import module 'django.contrib.staticfiles.scripts.test_script': No module named 'django.contrib.staticfiles.scripts'.Check for django_extensions.scripts.test_scriptCannot import module 'django_extensions.scripts.test_script': No module named 'django_extensions.scripts'.Check for scripts.test_scriptCannot import module 'scripts.test_script': No module named 'scripts'.No (valid) module for script 'test_script' foundwhere we can see the crucial line:No module named 
'nonexistrentpackage'. The commonality of the errors seems to be because the extension runs the script using import. It would be more sensible if it first checked for the existence of the file using os.path.isfile and, if not found, then threw a more sensible error message.
Multiple filters on exists I'm trying to filter my exists query set into looking through 3 fields to check if a release date of this game, platform and region already exists. What I seek to accomplish: if ReleaseDate.objects.filter(game=game.id).filter(platform=release_date_object['platform']).filter(region=release_date_object['region']).exists(): | Very simple - just put them all together in one filter() with commas:if ReleaseDate.objects.filter(game=game.id, platform=release_date_object['platform'], region=release_date_object['region']).exists():Sometimes more complicated queries require Q objects but for a simple multiple-field query just put them all in one filter(). |
How to minimize two losses using TensorFlow? I am working on a project which is to localize objects in an image. The method I am going to adopt is based on the localization algorithm in CS231n-8.The network structure has two optimization heads, a classification head and a regression head. How can I minimize both of them when training the network?One idea I have is to sum both of them into one loss. But the problem is that the classification loss is a softmax loss and the regression loss is an L2 loss, which means they have different ranges. I don't think this is the best way. | It depends on your network's status.If your network is already able to extract features [you're using weights kept from some other net], you can set these weights to be constants and then train the two heads separately, since the gradient will not flow through the constants.If you're not using weights from a pre-trained model, you have to train the network to extract features: train the network using the classification head and let the gradient flow from the classification head to the first convolutional filter. In this way your network can now classify objects by combining the extracted features.Then convert the learned weights of the convolutional filters and the classification head to constant tensors and train the regression head.The regression head will learn to combine the features extracted from the convolutional layers, adapting its parameters in order to minimize the L2 loss.Tl;dr:Train the network for classification first.Convert every learned parameter to a constant tensor, using graph_util.convert_variables_to_constants as shown in the freeze_graph script.Train the regression head.
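If you would rather train both heads jointly instead of the freeze-and-retrain scheme described above, a common alternative is a weighted sum of the two losses, with the weight tuned as a hyperparameter to compensate for the different ranges. A minimal TF1-style sketch, where y_cls, cls_logits, reg_pred and y_box stand in for your own placeholders and head outputs (those names are made up):
import tensorflow as tf

# y_cls, cls_logits, reg_pred, y_box are assumed to come from your own graph
cls_loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_cls, logits=cls_logits))
reg_loss = tf.reduce_mean(tf.square(reg_pred - y_box))     # L2 regression loss
alpha = 0.1                                                # balances the two loss ranges
total_loss = cls_loss + alpha * reg_loss
train_op = tf.train.AdamOptimizer(1e-4).minimize(total_loss)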
Django - Save a new table1.PK and table2.PK and table2.FK I created 2 forms based on django-crispy forms. Form1 shows the OrderHeaderForm2 shows the OrderLines in a formsetWhen i open an existing OrderHeader, i see the Header and the Lines, i can adjust and save the open forms just fine. When i open the form empty, i select a customer in the OrderHeader and then Add some order lines, but here i can not save as the OrderLine has no FK value for the OrderLine.orderheader. Q: How can i save the OrderHeader.pk in the OrderLine.orderheader when i hit submit?My views.pydef orderline_formset(request, id=None): if id: orderid = OrderHeader.objects.get(pk=id) else: orderid = OrderHeader() OrderLineFormSet = inlineformset_factory(OrderHeader, OrderLine, OrderLineForm, extra = 1, can_delete=True) form = OrderHeaderForm(instance=orderid) formset = OrderLineFormSet(instance=orderid) helper = OrderLineFormSetHelper() if request.method == 'POST': OrderLine.orderheader = orderid formset = OrderLineFormSet(request.POST,instance=orderid) if formset.is_valid(): formset.save() messages.success(request, 'Order saved succesfully!') else: messages.error(request, 'Order save error, please check fields below') else: formset = OrderLineFormSet(instance=orderid) return render_to_response("order.html", {'orderform' : form,'formset': formset, 'helper': helper}, context_instance=RequestContext(request))My forms.pyclass OrderHeaderForm(forms.ModelForm): def __init__(self, *args, **kwargs): super(OrderHeaderForm, self).__init__(*args, **kwargs) self.helper = FormHelper(self) self.helper.form_tag = False class Meta: model = OrderHeaderclass OrderLineForm(forms.ModelForm): def __init__(self, *args, **kwargs): super(OrderLineForm, self).__init__(*args, **kwargs) class Meta: model = OrderLineclass OrderLineFormSetHelper(FormHelper): def __init__(self, *args, **kwargs): super(OrderLineFormSetHelper, self).__init__(*args, **kwargs) self.form_method = 'post' self.template = 'bootstrap3/table_inline_formset.html' self.render_required_fields = True self.form_tag = False | The error was that i was only saving the formset and not the form. So i changed my views.py to the following; if request.method == 'POST': form = OrderHeaderForm(request.POST,instance=orderid) formset = OrderLineFormSet(request.POST,instance=orderid) if form.is_valid() and formset.is_valid(): form.save() formset.save() messages.success(request, 'Order saved succesfully!') |
Use AJAX to display dictionary data returned by Django view in a table on the template I saw some posts on this topic but none quite similar. I am getting back a dictionary in JSON format from a Django view as shown below:# display game statistics on the developer homepagedef gamestats(request): countlist = [] datedict = {} if request.method=='POST' and request.is_ajax: id = request.POST.get('id',None) gameobj = Games.objects.filter(pk=id) # get queryset of Scores objects with the selected game ID scores = Scores.objects.filter(game=gameobj) # get list of distinct registration dates from "scores" queryset datelist = scores.values_list('registration_date',flat=True).distinct() for dateobj in datelist: scoredate = scores.filter(registration_date=dateobj) c = scoredate.count() countlist.append(c) n = len(countlist) for i in range(n): t = datelist[i].strftime('%d/%m/%Y') datedict[t] = countlist[i] print(t) print(datedict[t]) json_stats = json.dumps(datedict) return HttpResponse(json_stats,content_type='application/json')This dictionary holds data in the form:{ "29/01/2015" : 2, "21/12/2014" : 1, "23/01/2015": 3 }Now, on the client side, the AJAX code is given below:$(".stats").click(function(){ var game = $(this); var id = game.attr('id'); var csrftoken = getCookie('csrftoken'); $.ajax({ type : "POST", url : "/gamestats/", data : {'id': id, 'csrfmiddlewaretoken': csrftoken}, dataType : "json", success : function(data){ alert(data); //var obj = $.parseJSON(response); //for (var key in obj){ // alert("inside for loop"); // var value = obj[key]; // alert(value); // $("#gamestats").html(value); //} } }); event.preventDefault(); });The relevant HTML code: ...<tr> <td width="10%" align="center"><button class="stats" id="{{game.id}}"> View game statistics</button></td></tr>...<b>Game statistics: </b><p id="gamestats"></p>Due to my very limited knowledge of AJAX, I do not know how to handle the response inside the "success" parameter of the request. I want to display the JSON data in a table, with 2 columns, one for the dates (keys) and other for the corresponding numbers (values). I want to do this inside the "gamestats" section of the page. I tried some things but they don't display anything. Any help is appreciated, thank you!! | If you want to take a JSON object and put it into a table you can loop over it like so:var tableData = '<table>'$.each(data, function(key, value){ tableData += '<tr>'; tableData += '<td>' + key + '</td>'; tableData += '<td>' + value + '</td>'; tableData += '</tr>';});tableData += '</table>';$('#table').html(tableData); |
Pandas timeseries indexing fails when the index is hierarchical I tried the following code snippet.In [84]:from datetime import datetimefrom dateutil.parser import parserng = [datetime(2017,1,13), datetime(2017,1,14), datetime(2017,2,15), datetime(2017,2,16)]s = Series([1,2,3,4], index=rng)s['2017/1']Out[84]:2017-01-13 12017-01-14 2dtype: int64As I expected, I could successfully retrieve only those items belonging to JAN by only specifying up to JAN like s['2017/1'].Next time, I tried a bit extended version of the above code, where a hierarchical index was used instead:from datetime import datetimefrom dateutil.parser import parserng1 = [datetime(2017,1,1), datetime(2017,1,1), datetime(2017,2,1), datetime(2017,2,1)]rng2 = [datetime(2017,1,13), datetime(2017,1,14), datetime(2017,2,15), datetime(2017,2,16)]midx = pd.MultiIndex.from_arrays([rng1, rng2])s = Series([1,2,3,4], index=midx)s['2017/1']The above code snippet, however, generates an error:TypeError: unorderable types: int() > slice()Would you give me some help? | It seems it is more complicated.Partial string indexing on datetimeindex when part of a multiindex is implemented in DataFrame in pandas 0.18.So if use:rng1 = [pd.Timestamp(2017,5,1), pd.Timestamp(2017,5,1), pd.Timestamp(2017,6,1), pd.Timestamp(2017,6,1)]rng2 = pd.date_range('2017-01-13', periods=2).tolist() + pd.date_range('2017-02-15', periods=2).tolist()s = pd.Series([1,2,3,4], index=[rng1, rng2])print (s)2017-05-01 2017-01-13 1 2017-01-14 22017-06-01 2017-02-15 3 2017-02-16 4Then for me works:print (s.to_frame().loc[pd.IndexSlice[:, '2017/1'],:].squeeze())2017-05-01 2017-01-13 1 2017-01-14 2Name: 0, dtype: int64print (s.loc['2017/6'])2017-06-01 2017-02-15 3 2017-02-16 4dtype: int64But this return empty Series:print (s.loc[pd.IndexSlice[:, '2017/2']])Series([], dtype: int64 |
High Scores! From ACM 2017 testcases = int(input())for i in range(testcases): n = int(input()) names = [] for a in range(n): names.append(input()) prefix = '' for b in range(len(names[0])): for c in names: if c.startswith(prefix) == True: common = True else: common = False if common == False: break prefix += names[0][b] print(prefix)I am given a list of names and I need to find the common prefix that applies to every name. My program works, but always returns one more letter than is supposed to be there. Why is this, and how do I fix it? | If the current prefix matches all the entered names, you add one more character to it. When it fails to match, you break out of the loop - but the character that caused the failure is still attached to the end of prefix.There are various ways to fix this, but one possibility is to just remove the last character by adding this statement outside the loop:prefix = prefix[:-1] # python slice notation - remove the last elementThere are some style issues with your code. Those would be best addressed on CodeReview rather than Stackoverflow.I would have done it this way (after replacing your input statements with a hard-coded test case):x = ["Joseph", "Jose", "Josie", "Joselyn"]n = 0try: while all(a[n] == x[0][n] for a in x[1:]): n += 1except IndexError: passprint(x[0][:n])This script prints "Jos". |
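Incidentally, the standard library can also do the prefix part for you: os.path.commonprefix compares arbitrary strings character by character, so for each test case you could simply do:
import os.path

names = ["Joseph", "Jose", "Josie", "Joselyn"]
print(os.path.commonprefix(names))    # Jos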
Adding a dimension to an array If I have an array that I loaded from a nifti file with shape (112, 176, 112) and I want to add a fourth dimension but not be limited to shape (112, 176, 112, 3)Why does this code allow me to add however many layers in the 4th dimension I want:data = np.ones((112, 176, 112, 20), dtype=np.int16)print(data.shape) >>>(112, 176, 112, 20)But when I try to add a higher layer number to the fourth dimension of my file I get an error. The code only works correctly if axis = 3. If axis = 2 the shape is (112, 176, 336, 1)filepath = '3channel.nii' img = nib.load(filepath)img = img.get_fdata()print(img.shape) >>>(112, 176, 112)img2 = img.reshape((112, 176, 112, -1))img2 = np.concatenate([img2, img2, img2], axis = 20)Error:AxisError: axis 20 is out of bounds for array of dimension 4 | @hpaulj got it, I was looking this up, and this shows the issue; note the shape of the arrays. I modified the original array so you can see what is being added...import numpy as npdata = np.ones((112, 176, 115, 20), dtype=np.int16)data2=np.ones((112, 176, 115), dtype=np.int16)data2a = data2.reshape((112, 176, 115, -1))print(data2a.shape)print("concatenate...")img2 = np.concatenate([data2a, data2a, data2a],axis=0)print(img2.shape)img2 = np.concatenate([data2a, data2a, data2a],axis=1)print(img2.shape)img2 = np.concatenate([data2a, data2a, data2a],axis=2)print(img2.shape)img2 = np.concatenate([data2a, data2a, data2a],axis=3)print(img2.shape)# This throws the errorimg2 = np.concatenate([data2a, data2a, data2a],axis=4)print(img2.shape) |
How can I get the last 10 records of each day? I have a DataFrame with 96 records each day, for 5 consecutive days.Data: {'value': {Timestamp ('2018-05-03 00:07:30'): 13.02657778, Timestamp ('2018-05-03 00:22:30'): 10.89890556, Timestamp ('2018-05-03 00:37:30'): 11.04877222,... (more days and times)Datatypes: DatetimeIndex (index column) and float64 ('flow' column).I want to save 10 records before an indicated hour (H), of each day.I only managed to do that for one day:df.loc[df['time'] < '09:07:30'].tail(10) | I would suggest filter the record by an hour and then group by date.Data setup:import pandas as pdstart, end = '2020-10-01 01:00:00', '2021-04-30 23:30:00'rng = pd.date_range(start, end, freq='5min')df=pd.DataFrame(rng,columns=['DateTS'])set the hournoon_hour=12 # fill the hour , for filterationResult, if head or tail does not work on your data, you would need to sort it.df_before_noon=df.loc[df['DateTS'].dt.hour < noon_hour] # records before noondf_result=df_before_noon.groupby([df_before_noon['DateTS'].dt.date]).tail(10) # group by date |
get error when using sum and case in sqlalchemy I'm using sqlalchemy func.sum with case in a having condition but get below error.code:query = query.having( func.sum(case([(e.c.escalation_type.in_(escalation_types), 1)], else_=0)) > 0 )escalation_types above is Python listget this error:asyncpg.exceptions.UndefinedFunctionError: function sum(text) does not existHINT: No function matches the given name and argument types. You might need to add explicit type casts.Here is the SQL printed by above:HAVING sum(CASE WHEN (escalation_1.escalation_type IN (:escalation_type_1)) THEN :param_1 ELSE :param_2 END) > :sum_1what am I missing here? Thanks! | Looks like there is a bug in one of the libraries of sqlalchemy, asyncpg. I have to cast 1 and 0 to integer to make it work. here is working code:query = query.having( func.sum( case( [(e.c.escalation_type.in_(escalation_types), cast(1, Integer))], else_=cast(0, Integer), ) ) > 0 ) |
iterate, Nonetype converting to String I am Scraping Financial Data from "http://profit.ndtv.com/stock/hindustan-unilever-ltd_hindunilvr/financials-historical"Code : import requestsfrom bs4 import BeautifulSoupimport reurl = "http://profit.ndtv.com/stock/hindustan-unilever-ltd_hindunilvr/financials-historical"page = requests.get(url)soup = BeautifulSoup(page.text, 'html.parser')table = soup.find("table", {"id": "finsummaryTab"})tr = table.findAll("tr")def periodEnding(index): td = BeautifulSoup(str(tr[2]), 'html.parser') td_list = td.find_all("td") return td_list[index].getText()b = print(periodEnding(1))a = str(b)print(type(a))for i in a: print(i)Output :216.35<class 'str'>NoneI dont know why this happens, can anybody help me with this.thannkyouI want to iterate this numbers | You are using the return value of print():b = print(periodEnding(1))print() always returns None. You then tried to print each individual character of the string "None" (produced by a = str(b)), so you indeed get the letters N, o, n and e printed.Store the return value of periodEnding() instead:b = periodEnding(1)print(b)You are also needlessly reparsing the tr[2] object here:td = BeautifulSoup(str(tr[2]), 'html.parser')td_list = td.find_all("td")There is no point in doing this. tr[2] is a Tag object and supports find_all directly:def periodEnding(index): td_list = tr[2].find_all("td") return td_list[index].getText()This gives you the exact same result without converting a whole subtree to a string then back again into virtually the same BeautifulSoup object tree. |
Could not import django.contrib.syndication.views.feed. View does not exist in module django.contrib.syndication.views. using django and rss I'm trying to get RSS to work with djangoI have a social bookmarking app.when I try to access the rss page at localhost:8000/feeds/recent/I get the following error:Could not import django.contrib.syndication.views.feed. View does not exist in module django.contrib.syndication.views.I am using python 2.7.3 and django 1.5.1I am only going to show the code that I think is relevant.I have the following code in feeds.pyfrom django.contrib.syndication.views import Feedfrom bookmarks.models import Bookmarkclass RecentBookmarks(Feed): title = 'Django Bookmarks | Recent Bookmarks' link = '/feeds/recent/' description = 'Recent bookmarks posted to Django Bookmarks' def items(self): return Bookmark.objects.order_by('id')[:10]The urls.py has the following code I have left out the urls that are not relevant.import os.pathfrom django.conf.urls.defaults import *from bookmarks.views import *from bookmarks.feeds import *from django.views.generic import TemplateViewfrom bookmarks.models import Link, Bookmark, Tag, SharedBookmarksite_media = os.path.join( os.path.dirname(__file__), 'site_media')# Uncomment the next two lines to enable the admin:from django.contrib import adminadmin.autodiscover()admin.site.register(Link)class BookmarkAdmin(admin.ModelAdmin): list_display = ('title', 'link', 'user') list_filter = ('user',) ordering = ('title',) search_fields = ('title',)admin.site.register(Bookmark, BookmarkAdmin)admin.site.register(Tag)admin.site.register(SharedBookmark)feeds = { 'recent': RecentBookmarks}urlpatterns = patterns('', # Feeds (r'^feeds/(?P<url>.*)/$', 'django.contrib.syndication.views.feed', {'feed_dict': feeds }),)The models.py looks like the following:from django.db import modelsfrom django.contrib.auth.models import Userclass Link(models.Model): url = models.URLField(unique=True) def __str__(self): return self.urlclass Bookmark(models.Model): title = models.CharField(max_length=200) user = models.ForeignKey(User) link = models.ForeignKey(Link) def __str__(self): return '%s %s' % (self.user.username, self.link.url) def get_absolute_url(self): return self.link.url | The book I have been learning Django from is quite old.I discovered from looking at the Django Documentation that the url pattern that's required can now go straight to RecentBookmarks.I first looked hereand compared it with thisFrom this comparison I discovered that what I need to do is change the url pattern to the following.(r'^feeds/(?P<url>.*)/$', RecentBookmarks()),I also found that in RSS does not work with google chrome browser unless I installed the RSS Subscription extensionAfter making these changes it now works correctly. |
PyQt5 Application Window Not Showing I am trying to code an application that will allow the user to view a list of Tag IDs, as well as its description, and allow the user to check off each Tag ID that they would like to import data from. At this point I am working on developing the UI only.The code below worked and would show the application window until I added the itemChanged function & connection. Now, when I run this code, only the print statement from the new function will show. The window never shows and the entire application promptly exits (see image for outcome of running script).Additionally, you'll notice that we get the checkState of each type of item - I only want the checkState of the Tag ID.import sysfrom PyQt5.QtWidgets import QApplication, QWidget, QLineEdit, QTableView, QHeaderView, QVBoxLayout, QAbstractItemViewfrom PyQt5.QtCore import Qt, QSortFilterProxyModelfrom PyQt5.QtGui import QStandardItemModel, QStandardItemclass myApp(QWidget): def __init__(self): super().__init__() self.resize(1000, 500) mainLayout = QVBoxLayout() tagIDs = ('Tag_1', 'Tag_2', 'Tag_3', 'Tag_4', 'Tag_5') descriptions = ('Description_1', 'Description_2', 'Description_3', 'Description_4', 'Description_5') model = QStandardItemModel(len(tagIDs), 2) model.itemChanged.connect(self.itemChanged) model.setHorizontalHeaderLabels(['Tag IDs', 'Description']) for i in range(len(tagIDs)): item1 = QStandardItem(tagIDs[i]) item1.setCheckable(True) item2 = QStandardItem(descriptions[i]) model.setItem(i, 0, item1) model.setItem(i, 1, item2) filterProxyModel = QSortFilterProxyModel() filterProxyModel.setSourceModel(model) filterProxyModel.setFilterCaseSensitivity(Qt.CaseInsensitive) filterProxyModel.setFilterKeyColumn(1) searchField = QLineEdit() searchField.setStyleSheet('font-size: 20px; height: 30px') searchField.textChanged.connect(filterProxyModel.setFilterRegExp) mainLayout.addWidget(searchField) table = QTableView() table.setStyleSheet('font-size: 20px;') table.verticalHeader().setSectionResizeMode(QHeaderView.Stretch) table.horizontalHeader().setSectionResizeMode(1, QHeaderView.Stretch) table.setModel(filterProxyModel) table.setEditTriggers(QAbstractItemView.NoEditTriggers) mainLayout.addWidget(table) self.setLayout(mainLayout) def itemChanged(self, item): print("Item {!r} checkState: {}".format(item.text(), item.checkState()))def main(): app = QApplication(sys.argv) myAppControl = myApp() myAppControl.show() sys.exit(app.exec_())if __name__ == "__main__": main() | Header settings that depend on the model must always be set when a model is set.Move table.setModel(filterProxyModel) right after the creation of the table or, at least, before table.horizontalHeader().setSectionResizeMode (the vertical setSectionResizeMode() is generic for the whole header and doesn't cause problems). |
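Concretely, that means reordering the question's own table setup along these lines (same calls, just with setModel moved up before the header settings):
table = QTableView()
table.setStyleSheet('font-size: 20px;')
table.setModel(filterProxyModel)                                      # set the model first
table.verticalHeader().setSectionResizeMode(QHeaderView.Stretch)      # then configure the headers
table.horizontalHeader().setSectionResizeMode(1, QHeaderView.Stretch)
table.setEditTriggers(QAbstractItemView.NoEditTriggers)
mainLayout.addWidget(table)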
Python - store chinese characters read from excel I am trying to read in an excel sheet using xlrd, but I'm having some problems storing Chinese characters.I am not sure why values get translated when I store it in a list:Code:for rownum in range(sh.nrows): Temp.append(sh.row_values(rownum)) print TempOutput: u'\u8bbe\u5168\u96c6\u662f\u5b9e\u6570\u96c6R\uff0cM= {x|-2&lt;=x&lt;=2}\uff0cN{x|x&lt;1}\uff0c\u5219bar(M) nn N\u7b49\u4e8e \n[A]\uff1a{x|x&lt;-2} [B]\uff1a {x|-2&lt;1} [C]\uff1a{x|x&lt;1} [D]\uff1a{x|-2&lt;=x&lt;1}'However when I print out a single cell value, they are printed out correctly as per excel sheet:Code: cell_test = sh.cell(1,3).value print cell_testOutput: 设全集是实数集R,M={x|-2&lt;=x&lt;=2},N={x|x&lt;1},则bar(M) nn N等于 [A]:{x|x&lt;-2} [B]:{x|-2&lt;1} [C]:{x|x&lt;1} [D]:{x|-2&lt;=x&lt;1}What should I do to get Python to store the above data at its original value?Thanks! | First. You XSL parser seem to return unicode values.Second. When you do print some_complex_object (as you do print Temp), Python usually outputs the result of repr function on the elements of that object. And when you do print repr(some_unicode_string), the usual output is something like u'\u8bbe\u5168\u96c6\u662f'.Third. There is nothing wrong with storing of the values - they are correctly stored, you just have problems with printing. Try something like:for i in Temp: print i |
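A tiny Python 2 demonstration of the repr-vs-print difference (assuming the console encoding can display Chinese characters):
# -*- coding: utf-8 -*-
s = u'\u8bbe\u5168\u96c6'
print [s]     # [u'\u8bbe\u5168\u96c6']  -- printing a list shows repr() of each element
print s       # 设全集                    -- printing the string itself shows the characters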
Web.py todo list with login I try to add a login functionality to the web.py todo example.This is my code:""" Basic todo list using webpy 0.3 """import webimport model### Url mappingsurls = ( '/', 'Index', '/login', 'Login', '/logout', 'Logout', '/del/(\d+)', 'Delete',)### Templatesrender = web.template.render('templates', base='base')app = web.application(urls, locals())session = web.session.Session(app, web.session.DiskStore('sessions'))allowed = ( ('user','pass'), ('tom','pass2'))class Login: login_form = web.form.Form( web.form.Textbox('username'), web.form.Password('password'), web.form.Button('Login'), ) def GET(self): f = self.login_form() return render.login(f) def POST(self): # Validation if not self.login_form.validates(): print "it didn't validate!" session.logged_in = True raise web.seeother('/')class Logout: def GET(self): session.logged_in = False raise web.seeother('/')class Index: form = web.form.Form( web.form.Textbox('title', web.form.notnull, description="I need to:"), web.form.Button('Add todo'), ) def GET(self): print "logged_in " + str(session.get('logged_in', False)) if session.get('logged_in', False): """ Show page """ todos = model.get_todos() form = self.form() return render.index(todos, form) else: raise web.seeother('/login') def POST(self): """ Add new entry """ form = self.form() if not form.validates(): todos = model.get_todos() return render.index(todos, form) model.new_todo(form.d.title) raise web.seeother('/')class Delete: def POST(self, id): """ Delete based on ID """ id = int(id) model.del_todo(id) raise web.seeother('/')app = web.application(urls, globals())if __name__ == '__main__': app.run()When the user does a POST in /login, logged_in is always False.Any ideas why? | I just fixed it. I was missing some session initialization code.Here's the working code:""" Basic todo list using webpy 0.3 """import webimport model### Url mappingsurls = ( '/', 'Index', '/login', 'Login', '/logout', 'Logout', '/del/(\d+)', 'Delete',)web.config.debug = Falserender = web.template.render('templates', base='base')app = web.application(urls, locals())session = web.session.Session(app, web.session.DiskStore('sessions'))allowed = ( ('user','pass'),)class Login: login_form = web.form.Form( web.form.Textbox('username', web.form.notnull), web.form.Password('password', web.form.notnull), web.form.Button('Login'), ) def GET(self): f = self.login_form() return render.login(f) def POST(self): if not self.login_form.validates(): return render.login(self.login_form) username = self.login_form['username'].value password = self.login_form['password'].value if (username,password) in allowed: session.logged_in = True raise web.seeother('/') return render.login(self.login_form)class Logout: def GET(self): session.logged_in = False raise web.seeother('/')class Index: form = web.form.Form( web.form.Textbox('title', web.form.notnull, description="I need to:"), web.form.Button('Add todo'), ) def GET(self): if session.get('logged_in', False): """ Show page """ todos = model.get_todos() form = self.form() return render.index(todos, form) else: raise web.seeother('/login') def POST(self): """ Add new entry """ form = self.form() if not form.validates(): todos = model.get_todos() return render.index(todos, form) model.new_todo(form.d.title) raise web.seeother('/')class Delete: def POST(self, id): """ Delete based on ID """ id = int(id) model.del_todo(id) raise web.seeother('/')app = web.application(urls, globals())if web.config.get('_session') is None: session = web.session.Session(app, 
web.session.DiskStore('sessions'), {'count': 0}) web.config._session = sessionelse: session = web.config._sessionif __name__ == '__main__': app.run() |
TypeError at /add_team/ 'dict' object is not callable views.py:class AddTeamView(View): template_name = 'add_team.html' def get (self, request): form = TeamForm() context = {'form': form} return render(request, 'add_team.html', context) def post(self, request): form = TeamForm(request.POST) if form.is_valid(): team = Team() team.name = form.cleaned_data('name') team.details = form.cleaned_data('detials') context = {'form': form, 'team.name':team.name,'team.details':team.details} return render(request, self.template_name, context)add_team.html : {% extends 'base.html' %}{% block title %}add team{% endblock %}{% block content %}<form action="/add_team/" method="post"> {% csrf_token %} {{ form }} <input type="submit" value="Submit"></form>{% endblock %}forms.py :from django import formsclass TeamForm(forms.Form): name = forms.CharField(label='name of team') details = forms.CharField(label='details of team')when I went to the browser it appeared this: TypeError at /add_team/ 'dict' object is not callable Request Method: POST Request URL: http://127.0.0.1:8000/add_team/ Django Version: 2.1.1 Exception Type: TypeError Exception Value: 'dict' object is not callable Exception Location: C:\Users\Acer\Desktop\teammanager\teams\views.py in post, line 52 Python Executable: C:\Users\Acer\Desktop\teammanager_env\Scripts\python.exe Python Version: 3.7.0 | The form.cleaned_data is a dictionary, so you obtain elements by subscripting, or by using the .get(..) method (to return None or a default value in case the key is missing), so you should rewrite:team.name = form.cleaned_data('name')team.details = form.cleaned_data('detials')to:team.name = form.cleaned_data['name']team.details = form.cleaned_data['details'] # typo: detials -> detailsThat being said, it is probably better to make a ModelForm:class TeamForm(forms.ModelForm): name = forms.CharField(label='name of team') details = forms.CharField(label='details of team')then the view looks like:class AddTeamView(View): template_name = 'add_team.html' def get (self, request): form = TeamForm() context = {'form': form} return render(request, 'add_team.html', context) def post(self, request): form = TeamForm(request.POST) if form.is_valid(): team = form.save() context = {'form': form, 'name':team.name,'details':team.details} return render(request, self.template_name, context)You should also consider using a CreateView, instead of a simple view, and redirect when a post(..) is done successful, since rendering in case of a POST, can result in errors when the user refreshes the page (see this Wikipedia article for the POST-REDIRECT-GET pattern). |
after4 - Simple python task (index and list issues) this is my first time asking a question on stack overflow. It has been really valuable to me while I have been learning python 2.7The question is as follows:"Given a non-empty list numlist of ints, write a function after4(numlist) that returns a new list containing the elements from the original numlist that come after the last 4 in the original numlist. The numlist will contain at least one 4. after4([2, 4, 1, 2]) → [1, 2]after4([4, 1, 4, 2]) → [2]after4([4, 4, 1, 2, 3]) → [1, 2, 3]"I believed the question to be rather simple but I just can seem to get the code right for what I had planned in my head. def after4(numlist): """ Given a list of numbers, will print all numbers after the last 4 :param x: list - list of numbers including the 4 :return: list - New list of all numbers after the last 4 """ indices = [i for i, x in enumerate(numlist) if x == 4] index = max(indices) print x[index:]But I keep getting this error and I'm not sure how to work around it.'int' object has no attribute 'getitem'" (the error is on the final line of the code "print x[index:]")Thank you in advance. | You use the name x for two different purposes: as the list parameter for the function after4() and as an integer in the list comprehension for the variable indices.The interpreter thinks you mean the integer one in the last line, but you mean the list parameter one. Change one of those names to a different name and see what happens.You should use more descriptive variable names from now on. For example, instead of using x for the list parameter, use something like number_list, which makes it clear just what it is. Keep short names like x for mathematical parameters (such as math.sin(x)) and for list comprehensions. |
(1) Running a .py in cmd and (2) with variable in same line I figured out the zip code in same line out. It's sys.argv[1], I had other code I neglected to comment out when trying out [1] that gave me the error. All I need help with now is getting weather.py to run without having to call the whole file path.I will preface with I'm not very experienced with python and may get certain names wrong or think something might work that obviously doesn't, bear with me I tried to word this to make as much sense as possible.So I need to run a program using the command line. The program is complete and 100% functioning when ran in PyDev. The program is called weather.py, and what needs to trigger it in cmd is python weather.py (5 digit zip)I cannot get the program to run using just 'python weather.py' first off. I have added C:\python27 to PATH as well as C:\python27\python.exe (not sure if that does anything). Getting the .py to run via those two keywords doesn't seem to work with what I've tried. I also need to be able to add a zip code to the same line to trigger the program. I was told about zipcode = sys.argv[0]to allow the zip code to be automatically initialized as a variable, but I get IndexError: list index out of rangewhen I run the program using python C:\python27\weather.pyI tried replacing 0 with 1 or 2 because I'm unfamiliar with .argv but neither of those worked either. Any help getting the program to run using just python weather.py OR getting the zip code input to function on the same line is greatly appreciated. | Make sure you import sys in your code.import syszipCode = sys.argv[1]and actually provide an argumentEDIT:For clarity, if sys was not imported, you would get NameError and not an IndexError. Additionally, when passing args in from the command line, the indexing actually begins at 0 where sys.argv[0] is always the program name and the provided args begin at 1. So, in this case, the zip code would be at sys.argv[1]EDIT2:variable name to avoid using reserve words :) |
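A minimal sketch of how weather.py might read the zip code defensively (Python 2 syntax to match the question). Also note that a bare python weather.py only works if the command prompt's current directory is the folder containing weather.py; otherwise you still need to give a path.
import sys

if len(sys.argv) < 2:                         # sys.argv[0] is always the script name itself
    print "usage: python weather.py <5-digit zip>"
    sys.exit(1)
zip_code = sys.argv[1]
print "Looking up weather for", zip_code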
Python, range(), double loops, Codes and result are shown below.I'm curious about the prints beginning with 1 instead of 0 as start.Where does the program get 1 from?Can someone please help me here? Thanks!for i in range(5) : for j in range(i) : print(i, end=" ") print()1 2 2 3 3 3 4 4 4 4The same result with codes below:for i in range(5) : for j in range(0, i) : print(i, end=" ") print()By altering (0, i) to (1, i), it also logically omits 1, but how does it come to a single 2 as the result shown below?for i in range(5) : for j in range(1, i) : print(i, end=" ") print() 2 3 3 4 4 4 | Because for j in range(0): loops 0 times, so it never prints i when it's 0. If you look closely at your output, you'll see that the first line is actually blank.
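A quick way to see this for yourself:
for j in range(0):
    print("this never runs")        # range(0) is empty, so the loop body is skipped
print(list(range(0)))               # []
print(list(range(1, 1)))            # [] -- which is why range(1, i) prints nothing until i == 2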
python error in decoding base64 string I'm trying to unzip a base64 string,this is the code I'm usingdef unzip_string(s) : s1 = base64.decodestring(urllib.unquote(s)) sio = StringIO.StringIO(s1) gzf = gzip.GzipFile(fileobj=sio) guff = gzf.read() return json.loads(guff)i'm getting error Error: Incorrect paddingwhere I try to unzip the same string using node.js code it works without a problem.where:s == H4sIAAAAAAAAA22PW0/CQBCF/8s81wQosdA3TESJhhhb9cHwMN1O6Ybtbt0LhDT97+5yU4yPc+bMnO90YCyyDaSfHRimieQSG4IUaldABC1qbAykHbQsrzWZWokSUumEiMCQ3nJGCy9ADH0EFvWarJ+eHv11v4qgEIptqHyTlovzWes0q9HQ3X87Lh80Msp5gDhqzGlN0or9B1pWU5ldxV72c2/ODg0C7lUXu/U2p8XLpY35+6Mmtsn4WqLILFrnTRUKQxFwk7+fSL23+zX215VD/jE16CeojIzhSi5kpQ6xzVkIz76wuSmHRVINRuVtheMxDuLJJB5Nk5hRMkriaTGJh8MDn5LWv8v3bejzvFjez15/5EsNbuZo7FzpHepyJoTaBWqrHfX9N0/UAJ7qAQAA.bi0I1YDZ3V6AXu6aYTGO1JWi5tE5CoZli7aa6bFtqM4I've seen some suggestions to add '=' and other magic but it just results in the gzip module failing to open the file.any ideas? | This worked for me (Python 3). The padding is indeed important, as you've seen in other answers:import base64import zlibimport jsons = b'H4sIAAAAAAAAA22PW0/CQBCF/8s81wQosdA3TESJhhhb9cHwMN1O6Ybtbt0LhDT97+5yU4yPc+bMnO90YCyyDaSfHRimieQSG4IUaldABC1qbAykHbQsrzWZWokSUumEiMCQ3nJGCy9ADH0EFvWarJ+eHv11v4qgEIptqHyTlovzWes0q9HQ3X87Lh80Msp5gDhqzGlN0or9B1pWU5ldxV72c2/ODg0C7lUXu/U2p8XLpY35+6Mmtsn4WqLILFrnTRUKQxFwk7+fSL23+zX215VD/jE16CeojIzhSi5kpQ6xzVkIz76wuSmHRVINRuVtheMxDuLJJB5Nk5hRMkriaTGJh8MDn5LWv8v3bejzvFjez15/5EsNbuZo7FzpHepyJoTaBWqrHfX9N0/UAJ7qAQAA.bi0I1YDZ3V6AXu6aYTGO1JWi5tE5CoZli7aa6bFtqM4'decoded = base64.urlsafe_b64decode(s + b'=')uncompressed = zlib.decompress(decoded, 16 + zlib.MAX_WBITS)unjsoned = json.loads(uncompressed.decode('utf-8'))print(unjsoned)The zlib.decompress(decoded, 16 + zlib.MAX_WBITS) is a slightly more compact way to un-gzip a byte string. |
Python 3 Sockets - Receiving more then 1 character So when I open up the CMD and create a telnet connection with:telnet localhost 5555It will apear a "Welcome", as you can see on the screen below.After that every single character I type into the CMD will be printed out/send immediately.My Question is: Is it, and if yes, how is it possible to type in messages and then send them so I receive them as 1 sentence and not char by char.import socketimport sysfrom _thread import *host = ""port = 5555s = socket.socket(socket.AF_INET,socket.SOCK_STREAM)try: s.bind((host,port))except socket.error as e: print(str(e))s.listen(5) #Enable a server to accept connections.print("Waiting for a connection...")def threaded_client(conn): conn.send(str.encode("Welcome\n")) while True: # for m in range (0,20): #Disconnects after x chars data = conn.recv(2048) #Receive data from the socket. reply = "Server output: "+ data.decode("utf-8") print(data) if not data: break conn.sendall(str.encode(reply)) conn.close()while True: conn, addr = s.accept() print("connected to: "+addr[0]+":"+str(addr[1])) start_new_thread(threaded_client,(conn,)) | You need to keep reading until the stream ends:string = ""while True:# for m in range (0,20): #Disconnects after x chars data = conn.recv(1) #Receive data from the socket. if not data: reply = "Server output: "+ string conn.sendall(str.encode(reply)) break else: string += data.decode("utf-8")conn.close()By the way, using that method you'll read one char at a time. You may adapt it to the way your server is sending the data. |
Set handler for GPIO state change using python signal module I want to detect change in gpio input of raspberry pi and set handler using signal module of python. I am new to signal module and I can't understand how to use it. I am using this code now:import RPi.GPIO as GPIOimport timefrom datetime import datetimeimport picamerai=0j=0camera= picamera.PiCamera()camera.resolution = (640, 480)# handle the button eventdef buttonEventHandler (pin): global j j+=1 #camera.close() print "handling button event" print("pressed",str(datetime.now())) time.sleep(4) camera.capture( 'clicked%02d.jpg' %j ) #camera.close()def main(): GPIO.setmode(GPIO.BCM) GPIO.setwarnings(False) GPIO.setup(2,GPIO.IN,pull_up_down=GPIO.PUD_UP) GPIO.add_event_detect(2,GPIO.FALLING) GPIO.add_event_callback(2,buttonEventHandler) # RPIO.add_interrupt_callback(2,buttonEventHandler,falling,RPIO.PUD_UP,False,None) while True: global i print "Hello world! {0}".format(i) i=i+1 time.sleep(5) # if(GPIO.input(2)==GPIO.LOW): # GPIO.cleanup()if __name__=="__main__": main() | I just changed code in a different manner tough you are free to implement same using SIGNAL module.You can start new thread and poll or register call back event their, by using following code and write whatever your functional logic in it's run() method.import threadingimport RPi.GPIO as GPIOimport timeimport timefrom datetime import datetimeimport picamerai=0j=0camera= picamera.PiCamera()camera.resolution = (640, 480)PIN = 2class GPIOThread(threading.Thread): def __init__(self): threading.Thread.__init__(self) def run(self): while True: if GPIO.input(PIN) == False: # adjust this statement as per your pin status i.e HIGH/LOW global j j+=1 #camera.close() print "handling button event" print("pressed",str(datetime.now())) time.sleep(4) camera.capture( 'clicked%02d.jpg' %j )def main(): GPIO.setmode(GPIO.BCM) GPIO.setwarnings(False) GPIO.setup(PIN,GPIO.IN,pull_up_down=GPIO.PUD_UP) GPIO.add_event_detect(PIN,GPIO.FALLING) gpio_thread = GPIOThread() gpio_thread.start() while True: global i print "Hello world! {0}".format(i) i=i+1 time.sleep(5)if __name__=="__main__": main()The above code will iterate until PIN input goes high, so once PIN goes high the condition in while loop inside run method breaks and picture is captured.So, in order to call above thread do this.gpio_thread = GPIOThread()gpio_thread.start() this will call the thread constructor init and will initialize the variable inside constructor if any, and execute the run method.You can also call join() method , to wait until thread completes it's execution.gpio_thread.join() This always works for me, so Cheers!! |
rpy + matplotlib + arcpy I am trying to use rpy with my arcpy scripts but I have the following error:import rpy2.robjects as robjects Traceback (most recent call last):File "<pyshell#0>", line 1, in <module> import rpy2.robjects as robjectsFile "C:\Python26\ArcGIS10.0\lib\site-packages\rpy2\robjects\__init__.py", line 12, in <module> import rpy2.rinterface as rinterfaceFile "C:\Python26\ArcGIS10.0\lib\site-packages\rpy2\rinterface\__init__.py", line 39, in <module> import win32apiImportError: No module named win32apiThis error comes even after the installation of pywin32 for my version of Python. I've noticed that this seems to be a common error that is usually solved with the installation of pywin32.I also have a problem with the matplotlib installation, every time I try to use it (import matplotlib.pyplot as plt), Python crashes...Versions:Python 2.6.6matplotlib installation: matplotlib-1.1.0.win32-py2.6.exe | You will need to run these scripts with proper Python. It seems to me that the ArcPy distribution does not include the win32api module (it also does not exist, for example, in Python on Mac or Linux). I would install PythonXY, which includes R bindings, and see if your scripts run there. If they run there, then I guess I am correct, and ArcPy does not include these modules. A nice bonus of PythonXY is that it ships an excellent Python IDE (Spyder), but the real bonus is what the commenter above me said: different compiler versions can cause a hell of a lot of problems. So, with PythonXY you get a whole bundle compiled with the same compiler. Let us know if this made your RPy script run.
Add Variables to Tuple I am learning Python and creating a database connection.While trying to add to the DB, I am thinking of creating tuples out of information and then add them to the DB. What I am Doing:I am taking information from the user and store it in variables. Can I add these variables into a tuple? Can you please help me with the syntax?Also if there is an efficient way of doing this, please share...EDITLet me edit this question a bit...I only need the tuple to enter info into the DB. Once the information is added to the DB, should I delete the tuple? I mean I don't need the tuple anymore. | Tuples are immutable; you can't change which variables they contain after construction. However, you can concatenate or slice them to form new tuples:a = (1, 2, 3)b = a + (4, 5, 6) # (1, 2, 3, 4, 5, 6)c = b[1:] # (2, 3, 4, 5, 6)And, of course, build them from existing values:name = "Joe"age = 40location = "New York"joe = (name, age, location) |
How to increment a string in python I was trying this code:str = input("Enter the string:")num = input("By how much you want to increment:")x = int(str) + numprint(char(num))but this throws a traceback, What will be the correct code and what if the person enters (z + 1) i.e. how will the code be fixed around only the 26 alphabets.Thank you | You can use ord to get ascii of them and chr to get back the value using ascii valuedef inc_letter(char, inc): start_char = ord('a') if char.islower() else ord('A') start = ord(char) - start_char offset = ((start + inc) % 26) + start_char result = chr(offset) return resultstr_ = input("Enter the string:")num = int(input("By how much you want to increment:"))inc_letter(str_, num)Result:Enter the string:ZBy how much you want to increment:12'L' |
IO error in savetxt while using numpy I'm trying to read a dataset and collect meta features from it.I get the following error after executing the python file.Traceback (most recent call last): File "runmeta.py", line 79, in <module> np.savetxt('datasets/'+str(i)+'/metafeatures',meta[i],delimiter=',') File "/usr/lib/python2.7/dist-packages/numpy/lib/npyio.py", line 940, in savetxt fh = open(fname, 'w')IOError: [Errno 2] No such file or directory: 'datasets/2/metafeatures' | The error you're getting is simply telling you it didn't find the file or directory. I would suggest looking into absolute and relative file paths. As for error handling: the error is triggered on this line, fh = open(fname, 'w'), so as you debug your program, look at the line Python shows you. Maybe change the variable fname; that is where I would start. Currently fname = 'datasets/2/metafeatures'.
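If the real problem turns out to be that the datasets/2/ directory does not exist yet (np.savetxt will not create directories for you), a small sketch that creates it first; i and meta are the variables from the question's own loop:
import os
import numpy as np

out_dir = 'datasets/' + str(i)
if not os.path.isdir(out_dir):        # create the directory before writing into it
    os.makedirs(out_dir)
np.savetxt(os.path.join(out_dir, 'metafeatures'), meta[i], delimiter=',')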
Why this SQL does not work in python cur.execute("SELECT * FROM `productinfo` WHERE CreateDate > '%s'",kakko)where kakko is user input string, for example, 2012-01-15'%s' is not correct? | So, elaborating from the comments:cursor.execute requires a parameter tuple, and you don't need to quote the %s:cur.execute("SELECT * FROM `productinfo` WHERE CreateDate > %s", (kakko, )) |
how to verify connection is reused with python requests.session? I'd like to use requests 's session to reuse connections in django. Reusing connections in Django with Python Requests says I only need to declare it in global and access it.However I doubt it is working as expected because it didn't get any faster in my test.Is there a way to see if connection is actually reused as described here?http://docs.python-requests.org/en/latest/user/advanced/Django spawns separate thread for each requests and I think it defeats the mechanism to share the connection. (because session won't be shared across multiple threads) .. this is my hypothesis.. | You can try increasing your logging verbosity, then look out for logs that look like:"Starting new HTTPS connection (1): some.url:port"This is how to make global logging more verbose:import logginglogging.basicConfig(level=logging.DEBUG, format="%(message)s")If the connection is being reused, you will only see one of those messages for any given url. If it is not being reused, you will see a different one each time a connection is established to the same url. |
Is there a way to remove all characters except letters in a string in Python? I call a function that returns code with all kinds of characters ranging from ( to ", and , and numbers.Is there an elegant way to remove all of these so I end up with nothing but letters? | Givens = '@#24A-09=wes()&8973o**_##me' # contains letters 'Awesome' You can filter out non-alpha characters with a generator expression:result = ''.join(c for c in s if c.isalpha())Or filter with filter:result = ''.join(filter(str.isalpha, s)) Or you can substitute non-alpha with blanks using re.sub:import reresult = re.sub(r'[^A-Za-z]', '', s) |
how to check if two strings have intersection in python? For example, a = "abcdefg", b = "krtol", they have no intersection, c = "hflsfjg", then a and c have intersaction.What's the easiest way to check this? just need a True or False result | def hasIntersection(a, b): return not set(a).isdisjoint(b) |
Why am I getting this error? The error:Error Traceback (most recent call last): File "/home/enrique/Dropbox/Public/pygametut3.py", line 41, in <module> pix = MovingPixel(width/2, height/2)TypeError: this constructor takes no argumentsThe Code:#Create a moving pixelpix = MovingPixel(width/2, height/2)while running: pix.move() if pix.x <= 0 or pix.x >= width or pix.y <= 0 or pix.y >= height: print "Crash" running = False | Because MovingPixel, as currently defined, has no __init__ that accepts arguments, it can only be instantiated with no arguments: pix = MovingPixel(). If you want to pass in the starting coordinates, give the class an __init__ method that takes them.
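If you do want to pass the starting position, a minimal sketch of what the class could look like (the move() body and the width/height values here are just placeholders, not taken from the original tutorial code):
width, height = 640, 480          # placeholder window size

class MovingPixel(object):
    def __init__(self, x, y):     # accept the starting coordinates
        self.x = x
        self.y = y

    def move(self):               # placeholder movement logic
        self.x += 1
        self.y += 1

pix = MovingPixel(width / 2, height / 2)   # the original call now works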
using pyunit on a network thread I am tasked with writing unit tests for a suite of networked software written in python. Writing units for message builders and other static methods is very simple, but I've hit a wall when it comes to writing a tests for network looped threads.For example: The server it connects to could be on any port, and I want to be able to test the ability to connect to numerous ports (in sequence, not parallel) without actually having to run numerous servers. What is a good way to approach this? Perhaps make server construction and destruction part of the test? Something tells me there must a simpler answer that evades me.I have to imagine there are methods for unit testing networked threads, but I can't seem to find any. | I would try to introduce a factory into your existing code that purports to create socket objects. Then in a test pass in a mock factory which creates mock sockets which just pretend they've connected to a server (or not for error cases, which you also want to test, don't you?) and log the message traffic to prove that your code has used the right ports to connect to the right types of servers.Try not to use threads just yet, to simplify testing. |
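A rough sketch of that idea, assuming the production code can be given a socket factory to call instead of creating sockets itself; FakeSocket, fake_socket_factory and NetworkClient are invented names for illustration:

    class FakeSocket(object):
        """Pretends to connect and records what was sent."""
        def __init__(self):
            self.sent = []
            self.connected_to = None

        def connect(self, address):
            self.connected_to = address  # no real network traffic

        def sendall(self, data):
            self.sent.append(data)

        def close(self):
            pass

    def fake_socket_factory():
        return FakeSocket()

    # In the unit test, inject the fake factory instead of the real one, e.g.:
    # client = NetworkClient(socket_factory=fake_socket_factory)
    # client.connect(("localhost", 4000))
    # ...then assert on the fake socket's connected_to / sent attributes.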
What is a good & free game engine? For C++, Java, or Python, what are some good, free game engines that are easy to pick up? Any type of game engine is okay. I just want to get started somewhere by looking into different game engines and their capabilities. | For my Computer Graphics course in college we used the open source OGRE 3D engine. Not only is it an extremely robust 3D engine, it was a blast! Develop a medium-sized game using it and you will get a good taste of many of the different game programming specialties. You'll find yourself doing 3D modeling, sound effects, physics programming, AI, the works. (Figure: screenshot of a recent OGRE 3D game.)
Tensorflow: how to assign variables properly It's not duplicate of How to assign value to a tensorflow variable?I was trying to do simpliest thing: just swap variables Tensorflow: how to swap variables between scopes and set variables in scope from another, and I still can't do it.BUT now I know that assign changes even copy of tensor which I get with tf.identity. I don't want this. I need copy of variable for swapping. In [10]: a = tf.Variable(1)In [11]: b = tf.identity(a)In [12]: a += 1In [14]: sess.run(a)Out[14]: 2In [15]: sess.run(b)Out[15]: 1In [16]: a = tf.Variable(1)In [17]: b = tf.identity(a)In [18]: assign_t = a.assign(2)In [20]: sess.run(tf.initialize_all_variables())In [21]: sess.run(a)Out[21]: 1In [22]: sess.run(assign_t)Out[22]: 2In [23]: sess.run(a)Out[23]: 2In [24]: sess.run(b)Out[24]: 2How can I assign value to a without changing b? | The tf.identity() operation is stateless. When you have a tf.Variable called a, the value of tf.identity(a) will always be the same as the value of a. If you want b to remember a previous value of a, you should create b as a tf.Variable as well:a = tf.Variable(1)b = tf.Variable(a.initialized_value())sess.run(tf.global_variables_initializer())# Initially, variables `a` and `b` have the same value.print(sess.run([a, b])) ==> [1, 1]# Update the value of `a` to 2.assign_op = a.assign(2)sess.run(assign_op)# Now, `a` and `b` have different values.print(sess.run([a, b])) ==> [2, 1] |
How to straighten a rotated rectangle area of an image using OpenCV in Python? The following picture will tell you what I want.I have the information of the rectangles in the image (width, height, center point and rotation degree). Now, I want to write a script to cut them out and save them as an image, but straighten them as well. As in, I want to go from the rectangle shown inside the image to the rectangle that is shown outside.I am using OpenCV Python. Please tell me a way to accomplish this.Kindly show some code as examples of OpenCV Python are hard to find. | You can use the warpAffine function to rotate the image around a defined center point. The suitable rotation matrix can be generated using getRotationMatrix2D (where theta is in degrees). You then can use Numpy slicing to cut the image. import cv2import numpy as npdef subimage(image, center, theta, width, height): ''' Rotates OpenCV image around center with angle theta (in deg) then crops the image according to width and height. ''' # Uncomment for theta in radians #theta *= 180/np.pi shape = ( image.shape[1], image.shape[0] ) # cv2.warpAffine expects shape in (length, height) matrix = cv2.getRotationMatrix2D( center=center, angle=theta, scale=1 ) image = cv2.warpAffine( src=image, M=matrix, dsize=shape ) x = int( center[0] - width/2 ) y = int( center[1] - height/2 ) image = image[ y:y+height, x:x+width ] return imageKeep in mind that dsize is the shape of the output image. If the patch/angle is sufficiently large, edges get cut off (compare image above) if using the original shape as--for means of simplicity--done above. In this case, you could introduce a scaling factor to shape (to enlarge the output image) and the reference point for slicing (here center).The above function can be used as follows:image = cv2.imread('owl.jpg')image = subimage(image, center=(110, 125), theta=30, width=100, height=200)cv2.imwrite('patch.jpg', image) |
Python + selenium: extract variable quantity of paragraphs between titles Fellows, assuming the html below how can extract the paragraphs <p> who belongs to the tile <h3>.<!DOCTYPE html> <html> <body> ... <div class="main-div"> <h3>Title 1</h3> <p></p> <h3>Title 2</h3> <p></p> <p></p> <p></p> <h3>Title 3</h3> <p></p> <p></p> ... </div></body>As you can see both <h3> and <p> tags are children of the <div> tag but they have no class or id that makes possible to identify them and say that "Title 1" has 1 paragraph, title 2 has 3 paragraphs, title 3 has two paragraphs and so on. I can't see a way to tie the paragraph to the title...I'm trying to do it using Python 2.7 + selenium. But I'm not sure that I'm working with the right tools, maybe you can suggest the solution or any different combinations like Beautifulsoup, urllib2...Any suggestion/direction will be very appreciated!UPDATEAfter the brilliant solution pointed by @JustMe I came up with the solution below, hope it helps someone else or if someone can improve it to pythonic. I coming from c/c++/java/perl world so always I hit the wall :)import bs4page = """ <!DOCTYPE html><html><body>... <div class="maincontent-block"> <h3>Title 1</h3> <p>1</p> <p>2</p> <p>3</p> <h3>Title 2</h3> <p>2</p> <p>3</p> <p>4</p> <h3>Title 3</h3> <p>7</p> <p>9</p> ... </div></body>"""page = bs4.BeautifulSoup(page, "html.parser")div = page.find('div', {'class':"maincontent-block"})mydict = {}# write to the dictionaryfor tag in div.findChildren(): if (tag.name == "h3"): #print(tag.string) mydict[tag.string] = None nextTags = tag.findAllNext() arr = []; for nt in nextTags: if (nt.name == "p"): arr.append(nt.string) mydict[tag.string] = arr elif (nt.name == "h3"): arr = [] break# read from dictionaryarrKeys = []for k in mydict: arrKeys.append(k)arrKeys.sort()for k in arrKeys: print k for v in mydict[k]: print v | It's easy to be done using BeautifulSoupimport bs4page = """<!DOCTYPE html> <html> <body> ... <div class="main-div"> <h3>Title 1</h3> <p></p> <h3>Title 2</h3> <p></p> <p></p> <p></p> <h3>Title 3</h3> <p></p> <p></p> ... </div></body>"""page = bs4.BeautifulSoup(page)h3_tag = page.div.find("h3").stringprint(h3_tag)>>> u'Title 1'h3_tag.find_next_siblings("p")>>> [<p></p>, <p></p>, <p></p>, <p></p>, <p></p>, <p></p>]len(h3_tag.find_next_siblings("p"))/2>>> 3Ok, since You want separated count of paragraphs i came up with this, crude thing. h_counters = [] count = -1 for child in page.div.findChildren(): if "<h3>" in str(child): h_counters.append(count) count = 0 else: count += 1 h_counters.append(count) h_counters = h_counters[1:] print (h_counters) >> [1, 3, 2] |
How to print the sum value of gradient in tensorflow? self.logits = nn_layers.full_connect_(self.wide_deep_embed, config.num_classes, activation='None', use_bn = True, \ keep_prob=self.keep_prob, name='output_layer') # predict prob## loss and optim#self.loss = nn_layers.cross_entropy_loss_with_reg(self.labels, self.logits)self.loss = tf.losses.mean_squared_error(self.labels, self.logits)tf.summary.scalar('loss', self.loss)if not opt: optim = nn_layers.get_optimizer(config.optimizer, learning_rate=self.learning_rate)else: optim = optself.train_op = optim.minimize(self.loss, global_step=self.global_step)## score & infersself.infers = self.logits # predict labelHere is a part of my model which is a DNN to do a regression task.But I find that the model's loss did not change to much after several batches (batch size is 1000 and the whole data is 11 million). So I want to print the value of sum gradient in every step, which is the sum of gradients in every batches. How can I modify my code to do it? | Here's how you can add the gradients to tensorboard summary on each step:# All gradients of loss function wrt trainable variables grads = tf.gradients(self.loss, tf.trainable_variables())# Summarize all gradientsfor grad, var in list(zip(grads, tf.trainable_variables())): tf.summary.histogram(var.name + '/gradient', grad)If the gradients are too big, you can report the sum as well:for grad, var in list(zip(grads, tf.trainable_variables())): tf.summary.histogram(var.name + '/gradient_sum', tf.reduce_sum(grad))But usually you can detect vanishing gradients without taking a sum: just take a look at the gradients at the early layers of your network. |
tensorflow object detection eval error When I use a model to check the mAP on test datasets, I got the following error:INFO:tensorflow:Restoring parameters from /home/aurora/workspaces/PycharmProjects/tensorflow/tensorflow_object_detection/outputs/model.ckpt-278075INFO:tensorflow:Restoring parameters from /home/aurora/workspaces/PycharmProjects/tensorflow/tensorflow_object_detection/outputs/model.ckpt-278075WARNING:root:The following classes have no ground truth examples: 0/home/aurora/workspaces/PycharmProjects/tensorflow/tensorflow_object_detection/object_detection/utils/metrics.py:145: RuntimeWarning: invalid value encountered in true_dividenum_images_correctly_detected_per_class / num_gt_imgs_per_class)I examined test.tfrecords, and every image have ground-truth bounding-boxes.How could I solve this problem? Thanks. | I got a similar error and I was stuck for many days on that error.I could resolve that error by editing my label.pbtxt file. Could you show your label(.pbtxt) file?My label file was :(containing 3 labels)item { id: 1 name: 'tree' id: 2 name: 'water body' id: 3 name: 'building'}Then I changed that to :item { id: 1 name: 'tree' }item { id: 2 name: 'water body' }item { id: 3 name: 'building' }This worked in my case. Have a look at your .pbtxt file which you reference to in the config file of your model. |
Incrementing IntegerField counter in a database As beginner at Django, i tried to make a simple application that would give Http response of how many times content was viewed.I have created a new Counter model, and inside, added IntegerField model count.class Counter(models.Model): count = models.IntegerField(default=0) def __int__(self): return countIn views, i made a variable counter out of Counter() class, and tried adding +1 to counter.count integer, but when i tried to save, it would give me an error that integer couldn't be saved.so i tried saving class instead:def IndexView(response): counter = Counter() counter.count = counter.count + 1 counter.save() return HttpResponse(counter.count)This method, would keep showing 1 and could not change after reload.How would i change IntegerField model properly, so it could be updated after every view, and would be saved even if server was reloaded? | The problemYes but you are creating a new Counter object on each request, which starts again at 0, that's your problemdef IndexView(response): counter = Counter() # This creates a new counter each time counter.count = counter.count + 1 counter.save() return HttpResponse(counter.count)What you were doing above would result in a bunch of Counter objects with count = 1 in the database.The SolutionMy example below shows you how to get an existing Counter object, and increment it, or create it if it doesn't already exist, with get_or_create()First we need to associate a Counter to e.g. a page (or anything, but we need someway to identify it and grab it from the DB)class Counter(models.Model): count = models.IntegerField(default=0) page = models.IntegerField() # or any other way to identify # what this counter belongs tothen:def IndexView(response): # Get an existing page counter, or create one if not found (first page hit) # Example below is for page 1 counter, created = Counter.objects.get_or_create(page=1) counter.count = counter.count + 1 counter.save() return HttpResponse(counter.count)Avoid race conditions that can happen with count = count + 1And to avoid race conditions use an F expression# When you have many requests coming in,# this may have outdated value of counter.count:# counter.count = counter.count + 1# Using an F expression makes the +1 happen on the databasefrom django.db.models import Fcounter.count = F('count') + 1 |
Django Celery Directory Structure and Layout I have a django project using the following directory structure.project/ account/ models.py views.py blog/ models.py views.py mediakit/ models.py views.py reports/ celery.py <-- new models.py tasks.py <-- new views.py settings/ __init__.py <-- project settings file system/ cron/ mongodb/ redis/ manage.pyHere's the contents of celery.py derived from the celery tutorial (http://docs.celeryproject.org/en/latest/django/first-steps-with-django.html).from __future__ import absolute_importimport osfrom celery import Celery# set the default Django settings module for the 'celery' program.os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'settings')from django.conf import settings# app = Celery('reports')app = Celery('reports', backend='djcelery.backends.database:DatabaseBackend', broker='amqp://guest:guest@localhost:5672//')# Using a string here means the worker will not have to# pickle the object when using Windows.app.config_from_object('django.conf:settings')app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)@app.task(bind=True)def debug_task(self): print('Request: {0!r}'.format(self.request))Some of my apps are shared across projects. reports, for example might be used in 4 different projects, so I can see how tasks.py should live in the reports app so when it's added to a new project the tasks come along. What I don't quite understand is why celery.py needs to live within the reports apptoo. When I go to add some tasks to the account app, I'm basically building the same celery.py filereplacing 'reports' with 'account'. Shouldn't I have one celery file that lives at the same levelas manage.py? Any help or suggestions would be greatly appreciated. | The celery app file should live in the core directory of your project, along the settings and all the other things as shown in the documentation that you posted.To define portable tasks it makes sense to put them in the app that is using them, as you pointed out, in your case the reports app.The idea is that your task file is registered by whatever celery app is defined in the project, and your django app need no knowledge of which celery app is registering the tasks. You do this by using the shared_task decorator instead of app.task.To summarize:project/ project/ settings.py celery.py <- new, shown in the docs, also add __init__.py urls.py account/ models.py views.py blog/ models.py views.py mediakit/ models.py tasks.py <-- tasks for the me views.py reports/ models.py tasks.py <-- tasks for the reports app views.py manage.pyIn tasks.py you have something like this:from celery import shared_task@shared_taskdef my_add_task(a, b): return a + bHope this helps. |
How to escape spaces in Bash command line arguments Does Bash support escaping spaces in command line arguments?I have a simple Python script using argparse to get arguments passed from Bash, but when I call it like:myscript.py --name="Some Text With Spaces"I get a result like:args = ['Text', 'With' Spaces']kwargs = {'name': 'Some'}I though Bash support spaces with "\" but tryingmyscript.py --name="Some\ Text\ With\ Spaces"results in the same thing.Am I misusing Bash, or is this a problem I have to deal with on Python's side? | Coming from the bash end, the most likely cause is that you're not telling us the truth about your bash code. What you're hitting looks a great deal like BashFAQ #50.Runningmyscript.py --name="Some Text With Spaces"...directly from a command line works perfectly, resulting in a sys.argv array of ['myscript.py', '--name=Some Text With Spaces']. The behavior you describe is consistent with this:cmd='myscript.py --name="Some Text With Spaces"'$cmd...which will result in a sys.argv array of ['myscript.py', '--name="Some', 'Text', 'With', 'Spaces"'].Don't do that, ever. Either use an array (typically appropriate if you need to build up an argument line conditionally):cmd=( myscript.py --name="Some Text With Spaces" )"${cmd[@]}"...or a function (typically the appropriate choice in all other cases):myscript() { myscript.py --name="Some Text With Spaces" "$@"; }myscript |
How do I convert a str list that has phrases to a int list? I have a script that allows me to extract the info obtained from excel to a list, this list contains str values that contain phrases such as: "I like cooking", "My dog´s name is Doug", etc.So I've tried this code that I found on the Internet, knowing that the int function has a way to transform an actual phrase into numbers.The code I used is:lista=["I like cooking", "My dog´s name is Doug", "Hi, there"]test_list = [int(i, 36) for i in lista]Running the code I get the following error: builtins.ValueError: invalid literal for int() with base 36: "I like cooking"But I´ve tried the code without the spaces or punctuation, and i get an actual value, but I do need to take those characters into consideration. | To expand on the bytearray approach you could use int.to_bytes and int.from_bytes to actually get an int back, although the integers will be much longer than you show in your example.def to_int(s): return int.from_bytes(bytearray(s, 'utf-8'), 'big', signed=False)def to_str(s): return s.to_bytes((s.bit_length() +7 ) // 8, 'big').decode()lista = ["I like cooking", "My dog´s name is Doug", "Hi, there"]encoded = [to_int(s) for s in lista]decoded = [to_str(s) for s in encoded]encoded:[1483184754092458833204681315544679, 28986146900667755422058678317652141643897566145770855, 1335744041264385192549]decoded:['I like cooking', 'My dog´s name is Doug', 'Hi, there'] |
Wrong symbol when using escape sequences learn python the hard way ex10 When i try to print \v or \f i get gender symbols instead:Note also that I'm a complete beginner at programming.edit: Seems like i didnt write clear enough, i dont want to write \v or \f but the escape sequence created by them, i dont know what they exactly do but i dont think this is their meant function- | You are trying to print special characters, e.g., "\n" == new line. You can learn more here: Python String Literals.Excerpt: In plain English: String literals can be enclosed in matching single quotes (') or double quotes ("). They can also be enclosed in matching groups of three single or double quotes (these are generally referred to as triple-quoted strings). The backslash (\) character is used to escape characters that otherwise have a special meaning, such as newline, backslash itself, or the quote character. String literals may optionally be prefixed with a letter 'r' or 'R'; such strings are called raw strings and use different rules for interpreting backslash escape sequences.The r tells it to print a "raw string."Python 2.7ish:print r"\v"Or, you can escape the escape character:print "\\v"Or, for dynamic prints:print "%r" % ("\v",) |
Can tuples implement external data from .txt files? # Defines the variable 'load_words()'.def load_words():# Opens and assigns a 'word' file from an external "txt" file. words_file = open("words.txt", "r")# Assigns 'words' as the condensed function for lines of coded words in the external "txt" file. words = [line.strip() for line in words_file]can you do the same with tuples, without accessing data directly within the program?Thanks | If I understood you correctly, yes.From your question, it sounds like you want to read a file with N lines, and from that file produce an N-tuple of strings, in which each element of the tuple is a line from the file.So, what you're doing now is reading a file that looks like this:HelloWorld!and producing the list of strings, ["Hello", "World", "!"]What I think you want is the 3-tuple: ("Hello", "World", "!")You can do the following:with open("words.txt") as file: t = tuple(line.strip() for line in file)which will result in t containing the N-tuple of "words.txt"'s contents.What's happening here is that the expression line.strip() for line in file is a generator, being passed to the tuple constructor.Generators are iterable, and the tuple constructor, when given an iterable, iterates over all elements of the iterable and stuffs them into a tuple.A quick note: you should not want to or need to do this.File inputs are beyond your direct control as a programmer, and therefore should be treated as variable length.Tuples are useful to represent fixed-length constructs, not so much for variable things like files.Why do you want to do this? What purpose do tuples serve for you which lists do not? |
How to change timezone in http response (django server)? I'm running django server without any proxy:python manage.py runserver 0.0.0.0:80I set my local timezone on linux server, it's correct:root@83b3bf90b5c5:/app# dateFri Apr 7 12:38:42 MSK 2017Also I set local timezone on settings.py of my django project:TIME_ZONE = 'Europe/Moscow'And checked it:>>> from django.utils.timezone import localtime, now>>> localtime(now())datetime.datetime(2017, 4, 7, 12, 38, 42, 196476, tzinfo=<DstTzInfo 'Europe/Moscow' MSK+3:00:00 STD>)But when I open any webpage from client (Google Chrome browser) - in http response headers timezone isn't local:Date:Fri, 07 Apr 2017 09:38:42 GMTHow can I change timezone in http headers for all project globally? | Using pytz, as astimezone methodfrom pytz import timezonetime_zone = timezone(settings.TIME_ZONE)currentTime = currentTime.astimezone(time_zone) In your Middleware:import pytzfrom django.utils import timezonefrom django.utils.deprecation import MiddlewareMixinclass TimezoneMiddleware(MiddlewareMixin): def process_request(self, request): tzname = request.session.get('django_timezone') if tzname: timezone.activate(pytz.timezone(tzname)) else: timezone.deactivate() In Your view.pyfrom django.shortcuts import redirect, renderdef set_timezone(request): if request.method == 'POST': request.session['django_timezone'] = request.POST['timezone'] return redirect('/') else: return render(request, 'template.html', {'timezones': pytz.common_timezones}) In your templete.html{% load tz %}{% get_current_timezone as TIME_ZONE %}<form action="{% url 'set_timezone' %}" method="POST"> {% csrf_token %} <label for="timezone">Time zone:</label> <select name="timezone"> {% for tz in timezones %} <option value="{{ tz }}"{% if tz == TIME_ZONE %} selected="selected"{% endif %}>{{ tz }}</option> {% endfor %} </select> <input type="submit" value="Set" /></form> |
Heroku error : Compiled slug size: 624.7M is too large (max is 300M) - using miniconda for scipy and numpy I am working with Python 2.7.11, Django 1.9 and Heroku.I need to use scipy and numpy. Everything works well locally but Heroku returns an error when I push the application : "Compiled slug size: 624.7M is too large (max is 300M)"I therefore deleted the buildpack Heroku/Python and added this one: https://github.com/kennethreitz/conda-buildpackI kept the file requirements.txt:django==1.9.2boto==2.41.0dj-database-url==0.4.1Django==1.9.2django-allauth==0.28.0django-appconf==1.0.2django-autocomplete-light==3.1.6django-toolbelt==0.0.1gunicorn==19.6.0pep8==1.7.0Pillow==4.0.0psycopg2==2.6.1pytz==2016.10sorl-thumbnail==12.3virtualenv==15.1.0sendgrid==3.2.10python_http_client==2.2.1django-s3-folder-storage==0.3django-debug-toolbar==1.5celery==3.1.25redis==2.10.5tweepy==3.5.0geopy==1.11.0django-mptt==0.8.7mistune==0.7.3django-widget-tweaks==1.4.1django-cleanup == 0.4.2django-unused-media == 0.1.6python-memcached == 1.58python-binary-memcached == 0.26.0django-bmemcached == 0.2.3whitenoise==3.2coverage == 4.3.4raven == 6.0.0newrelic == 2.82.0.62ajaxuploader==0.3.8awscli==1.10.47botocore==1.4.37colorama==0.3.7dj-static==0.0.6django-libs==1.67.4django-user-media==1.2.3docutils==0.12ecdsa==0.13flake8==2.5.4jmespath==0.9.0mccabe==0.5.0oauthlib==1.1.2paramiko==2.0.1pyasn1==0.1.9pycrypto==2.6.1pyflakes==1.2.3python-openid==2.2.5requests==2.9.1requests-oauthlib==0.6.1rsa==3.4.2s3transfer==0.0.1simplejson==3.8.2six==1.10.0static3==0.7.0futures==3.0.5and added a conda-requirements.txt with:nomklpython=2.7.11numpy=1.11.1scipy=0.19.0scikit-learn==0.18.1Here is the complete Heroku build log (too many lines to fit here):https://gist.github.com/jpuaux/74cb50a6cfb2dcab80d25d1809ae01c2Please note that I purged Heroku cache with:heroku repo:purge_cache -a myappThanks for any help you can provide! | Did you use Anaconda? I had the same problem the slug file was 505M, then I created a virtual env with pip and got one only 237MMy requirements.txt:I created a new virtual env using pip instead of conda. pip install virtualenvcd my_project_foldervirtualenv my_projectThen I installed the packages I needed, this is my list in requirements.txt:certifi==2018.10.15chardet==3.0.4Click==7.0cycler==0.10.0decorator==4.3.0Flask==1.0.2gunicorn==19.9.0idna==2.7ipython-genutils==0.2.0itsdangerous==1.1.0Jinja2==2.10jsonschema==2.6.0jupyter-core==4.4.0kiwisolver==1.0.1MarkupSafe==1.0matplotlib==3.0.1nbformat==4.4.0nltk==3.3numpy==1.15.3pandas==0.23.4Pillow==5.3.0plotly==3.3.0pyparsing==2.2.2python-dateutil==2.7.4pytz==2018.6requests==2.20.0retrying==1.3.3scikit-learn==0.20.0scipy==1.1.0six==1.11.0sklearn==0.0SQLAlchemy==1.2.12traitlets==4.3.2urllib3==1.24Werkzeug==0.14.1wordcloud==1.5.0Then I uploaded to heroku and it went through. |
How to save training model at each training step instead of periodic save based on time interval.? - in TensorFlow-Slim slim.learning.train(...) accepts two arguments pertaining to saving the model(save_interval_secs) or saving the summaries(save_summaries_secs). The problem with this API is, it only allows to save the model/summary based on some "time interval" but I need to do this based on "each step" of the training.how to achieve this using TF-slim api.?Here is the slim.learning train api -def train(train_op, logdir, train_step_fn=train_step, train_step_kwargs=_USE_DEFAULT, log_every_n_steps=1, graph=None, master='', is_chief=True, global_step=None, number_of_steps=None, init_op=_USE_DEFAULT, init_feed_dict=None, local_init_op=_USE_DEFAULT, init_fn=None, ready_op=_USE_DEFAULT, summary_op=_USE_DEFAULT, **save_summaries_secs=600,** summary_writer=_USE_DEFAULT, startup_delay_steps=0, saver=None, **save_interval_secs=600,** sync_optimizer=None, session_config=None, session_wrapper=None, trace_every_n_steps=None, ignore_live_threads=False): | Slim is deprecated, and using Estimator you get full control over saving / summary frequency.You can also set the seconds to a very small number so it always saves. |
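If you do move to tf.estimator, checkpointing and summarizing on every step can be configured through RunConfig — a rough sketch against the TF 1.x Estimator API, where my_model_fn stands for your own model function:

    import tensorflow as tf

    run_config = tf.estimator.RunConfig(
        model_dir='/tmp/model',
        save_checkpoints_steps=1,  # write a checkpoint after every training step
        save_summary_steps=1)      # write summaries every step as well

    estimator = tf.estimator.Estimator(model_fn=my_model_fn, config=run_config)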
Running CrossValidationCV in parallel When I run a GridsearchCV() and a RandomizedsearchCV() methods in parallel ( having n_jobs>1 or n_jobs=-1 options set )it shows this message: ImportError: [joblib] Attempting to do parallel computing without protecting your import on a system that does not support forking. To use parallel-computing in a script, you must protect your main loop using "if name == 'main'". Please see the joblib documentation on Parallel for more information" I put the code in a class in .py file and call it using if_name_=='main in other .py file but it still shows this messageIt works good when n_jobs=1import platform; print(platform.platform())Windows-10-10.0.10586-SP0import numpy; print("NumPy", numpy.__version__) NumPy 1.13.1import scipy; print("SciPy", scipy.__version__) SciPy 0.19.1 import sklearn; print("Scikit-Learn", sklearn.__version__) Scikit-Learn 0.19.0UPDATEI tried this code but it still gives me the same error import numpy as npfrom sklearn.model_selection import RandomizedSearchCVfrom sklearn.tree import DecisionTreeClassifierclass Test(): def __init__(self): attributes = [..] dataset = pd.read_csv("..") X=dataset[[..]] Y=dataset[...] model=DecisionTreeClassifier() model = RandomizedSearchCV(....) model.fit(X, Y) if __name__ == '__main__': Test() | joblib is know for this behaviour and rather explicit in documenting: Warning Under Windows, it is important to protect the main loop of code to avoid recursive spawning of subprocesses when using joblib.Parallel. In other words, you should be writing code like this:import ....def function1(...): ...def function2(...): ......if __name__ == '__main__': # do stuff with imports and functions defined about ... No code should run outside of the “if __name__ == ‘__main__’” blocks, only imports and definitions.So, refactor your code so as to meet this well-defined requirement and your code will start to benefit from the joblib-tools powers. |
Validation Error while creating partial invoice from sales order in Odoo 10 When I am creating a partial invoice (down payment), I get the below error: The operation cannot be completed, probably due to the following:- deletion: you may be trying to delete a record while other records still reference it- creation/update: a mandatory field is not correctly set[object with reference: categ_id - categ.id] | As far as I understand, you have probably made some customizations in the database, and that is why you get this error. The error says that a mandatory field has not been given a value, and the field is named in the error message: categ_id, the product category of the record being created. Thanks
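If the record being created is a product added by your customization, one way to supply the missing category — a rough sketch only, assuming the standard "All" category shipped with the product module exists in your database — would be:

    self.env['product.product'].create({
        'name': 'Down payment product',
        'type': 'service',
        'categ_id': self.env.ref('product.product_category_all').id,  # required product category
    })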
What are the workaround options for python out of memory error? I am reading a x,y,z point file (LAS) into python and have run into memory errors. I am interpolating unknown points between known points for a project I am working on. I began working with small files (< 5,000,000 points) and was able to read/write to a numpy array and python lists with no problem. I have received more data to work with (> 50,000,000 points) and now my code fails with a MemoryError.What are some options for handling such large amounts of data? I do not have to load all data into memory at once, but I will need to look at neighboring points using scipy kd-tree I am using Python 2.7 32 bit on a 64 bit Windows XP OS.Thanks in advance.EDIT: Code is posted below. I took out code for long calculations and variable definitions.from liblas import fileimport numpy as npf = file.File(las_file, mode='r')num_points = int(f.__len__())dt = [('x', 'f4'), ('y', 'f4'), ('z', 'f4'), ('i', 'u2'), ('c', 'u1'), ('t', 'datetime64[us]')]xyzict = np.empty(shape=(num_points,), dtype = dt)counter = 0for p in f: newrow = (p.x, p.y, p.z, p.intensity, p.classification, p.time) xyzict[counter] = newrow counter += 1dropoutList = []counter = 0for i in np.nditer(xyzict): # code to define P1x, P1y, P1z, P1t if counter != 0: # code to calculate n, tDiff, and seconds if n > 1 and n < scanN: # code to find v and vD for d in range(1, int(n-1)): # Code to interpolate x, y, z for points between P0 and P1 # Append tuple of x, y, and z to dropoutList dropoutList.append(vD) # code to set x, y, z, t for next iteration counter += 1 | Regardless of the amount of RAM in your system, if you are running 32-bit python, you will have a practical limit of about 2 GB of RAM for your application. There are a number of other questions on SO that address this (e.g., see here). Since the structure you are using in your ndarray is 23 bytes and you are reading over 50,000,000 points, that already puts you at about 1 GB. You haven't included the rest of your code so it isn't clear how much additional memory is being consumed by other parts of your program.If you have well over 2 GB of RAM in your system and you will continue to work on large data sets, you should install 64-bit python to get around this ~ 2 GB limit. |
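You can also estimate the array's footprint up front from the dtype; a quick check with the dt from your script shows why 50 million points already approaches the ~2 GB limit of a 32-bit process:

    import numpy as np

    dt = [('x', 'f4'), ('y', 'f4'), ('z', 'f4'), ('i', 'u2'), ('c', 'u1'), ('t', 'datetime64[us]')]
    bytes_per_point = np.dtype(dt).itemsize            # 23 bytes per record
    print(bytes_per_point * 50000000 / float(2 ** 30)) # roughly 1.07 GiB for 50 million points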
Why isn't my label configuring correctly? I want this label to configure into the text entry after the user enters the text and hits go but the label isn't configuring.I want the label that says "Hello!" to change into whatever is put in the main entry. I'm looking for an answer written in full code instead of one fixed line.Here's my code:import tkinter as tkroot = tk.Tk()root.attributes('-fullscreen', True)exit_button = tk.Button(root, text="Exit", command = root.destroy)exit_button.place(x=1506, y=0)def answer(): answer_label.config(text=main_entry.get())entry_frame = tk.Frame(root)main_entry = tk.Entry(entry_frame, width=100)main_entry.grid(row=0, column=0)go_button = tk.Button(entry_frame, text= 'Go!', width=85, command= answer)go_button.grid(row=1, column=0)answer_label = tk.Label(text = "Hello!").pack()entry_frame.place(relx=.5, rely=.5, anchor='center')root.mainloop() | 1.Split tk.Label and pack().2.Pass the lable. import tkinter as tk root = tk.Tk() root.attributes('-fullscreen', True) exit_button = tk.Button(root, text="Exit", command = root.destroy) exit_button.place(x=1506, y=0) def answer(answer_label): answer_label.config(text=main_entry.get()) entry_frame = tk.Frame(root) main_entry = tk.Entry(entry_frame, width=100) main_entry.grid(row=0, column=0) answer_label = tk.Label(text = "Hello!") answer_label.pack() go_button = tk.Button(entry_frame, text= 'Go!', width=85, command=lambda: answer(answer_label)) go_button.grid(row=1, column=0) entry_frame.place(relx=.5, rely=.5, anchor='center') root.mainloop() |
How do I print a local tensor in tensorflow? I want to print a tensor in my program to see its internal values once it gets evaluated. The problem, however, is that the tensor being declared inside a function. To understand my problem better, here is some example code to better explain what it is I want to do:a = tf.Variable([[2,3,4], [5,6,7]])b = tf.Variable([[1,2,2], [3,3,3]])def divide(a,b): with tf.variable_scope('tfdiv', reuse=True): c = tf.divide(a,b, name='c') # Cannot print(c) here, as this will only yield tf info on c return cd = divide(a,b)with tf.Session() as sess: sess.run(tf.global_variables_initializer()) sess.run(d) sess.run(tf.get_variable('tfdiv/c:0').eval(session=sess))Previously, I have been able to do a print(c.eval(session=sess)), but as c is a local variable inside a function now, that does not work. As can be seen in the code above, I have tried to use tensorflow's variable scope in order to access the variable and then evaluate it. Unfortunately, this results in the error message:ValueError: Shape of a new variable (tfdiv/c:0) must be fully defined, but instead was <unknown>.I tried to use the reuse=True flag, but I still get the same error. Any thoughts on how I can solve this problem? Best would be if there is a print(c) equivalent that can be put into the divide function, as written in the code above. | This will achieve what you want to do:with tf.Session() as sess: sess.run(tf.global_variables_initializer()) print(sess.run(d))Alternatively, you could replace the last line with:print(sess.run(tf.get_default_graph().get_tensor_by_name('tfdiv/c:0'))) |
How to build a content-based recommender system that uses multiple attributes? I want to build a content-based recommender system in Python that uses multiple attributes to decide whether two items are similar. In my case, the "items" are packages hosted by the C# package manager (example) that have various attributes such as name, description, tags that could help to identify similar packages.I have a prototype recommender system here that currently uses only a single attribute, the description, to decide whether packages are similar. It computes TF-IDF rankings for the descriptions and prints out the top 10 recommendations based on that:# Code mostly stolen from http://blog.untrod.com/2016/06/simple-similar-products-recommendation-engine-in-python.htmldef train(dataframe): tfidf = TfidfVectorizer(analyzer='word', ngram_range=(1, 3), min_df=0, stop_words='english') tfidf_matrix = tfidf.fit_transform(dataframe['description']) cosine_similarities = linear_kernel(tfidf_matrix, tfidf_matrix) for idx, row in dataframe.iterrows(): similar_indices = cosine_similarities[idx].argsort()[:-10:-1] similar_items = [(dataframe['id'][i], cosine_similarities[idx][i]) for i in similar_indices] id = row['id'] similar_items = [it for it in similar_items if it[0] != id] # This 'sum' is turns a list of tuples into a single tuple: # [(1,2), (3,4)] -> (1,2,3,4) flattened = sum(similar_items, ()) try_print("Top 10 recommendations for %s: %s" % (id, flattened))How can I combine cosine_similarities with other similarity measures (based on same author, similar names, shared tags, etc.) to give more context to my recommendations? | For some context, my work with content-based recommenders has revolved primarily around raw text and categorical data/features. Here's a high-level approach I've taken that has worked out nicely and is pretty simple to implement.Suppose I have three feature columns that I can potentially use to make recommendations: description, name, and tags. To me, the path of least resistance entails combining these three feature sets in a useful way.You're off to a good start, using TF-IDF to encode description. So why not treat name and tags in a similar way by creating a feature "corpus" consisting of description, name, and tags? Literally, this would mean concatenating the contents of each of the three columns into one long text column.Be wise about the concatenation, though, as it's probably to your advantage to preserve from which column a given word comes from, in the case of features like name and tag, which are assumed to have much lower cardinality than description. To put it more explicitly: instead of just creating your corpus column like this:df['corpus'] = (pd.Series(df[['description', 'name', 'tags']] .fillna('') .values.tolist() ).str.join(' ')You might try preserving information about where particular data points in name and tags come from. Something like this:df['name_feature'] = ['name_{}'.format(x) for x in df['name']]df['tags_feature'] = ['tags_{}'.format(x) for x in df['tags']]And after you do that, I would take things a step further by considering how the default tokenizer (which you're using above) works in TfidfVectorizer. Suppose you have the name of a given package's author: "Johnny 'Lightning' Thundersmith". If you just concatenate that literal string, the tokenizer will split it up and roll each of "Johnny", "Lightning", and "Thundersmith" into separate features, which could potentially diminish the information added by that row's value for name. 
I think it's best to try to preserve that information. So I would do something like this to each of your lower-cardinality text columns (e.g. name or tags — note that this needs import string at the top): def raw_text_to_feature(s, sep=' ', join_sep='x', to_include=string.ascii_lowercase): def filter_word(word): return ''.join([c for c in word if c in to_include]) return join_sep.join([filter_word(word) for word in s.split(sep)]) df['name_feature'] = df['name'].apply(raw_text_to_feature) The same sort of critical thinking should be applied to tags. If you've got a comma-separated "list" of tags, you'll probably have to parse those individually and figure out the right way to use them. Ultimately, once you've got all of your <x>_feature columns created, then you can create your final "corpus" and plug that into your recommender system as inputs. This whole system takes some engineering, to be sure, but I've found it's the easiest way to introduce new information from other columns that have different cardinalities.
How to convert voltage (or frequency) floating number read backs to mV (or kHz)? I am successfully able to read back data from an instrument:When the read back is a voltage, I typically read back values such as 5.34e-02 Volts.When the read back is frequency, I typically read values like 2.95e+04or 1.49e+05 with units Hz.I would like to convert the voltage read back of 5.34e-02 to exponent e-3 (aka millivolts), ie.. 53.4e-3. next, I would like to extract the mantissa 53.4 out of this because I want all my data needs to be in milliVolts.Similarly, I would like to convert all the frequency such as 2.95e+04 (or 1.49e+05) to kiloHz, ie... 29.5e+03 or 149e+03. Next would like to extract the mantissa 29.5 and 149 from this since all my data needs to be kHz.Can someone suggest how to do this? | Well, to convert volts to millivolts, you multiply by 1000. To convert Hz to kHz, you divide by 1000.>>> reading = 5.34e-02>>> millivolts = reading * 1000>>> print(millivolts)53.400000000000006>>> hz = 2.95e+04>>> khz = hz /1000>>> khz29.5>>>FOLLOW-UPOK, assuming your real goal is to keep the units the same but adjust the exponent to a multiple of 3, see if this meets your needs.def convert(val): if isinstance(val,int): return str(val) cvt = f"{val:3.2e}" if 'e' not in cvt: return cvt # a will be #.## # b will be -## a,b = cvt.split('e') exp = int(b) if exp % 3 == 0: return cvt if exp % 3 == 1: a = a[0]+a[2]+a[1]+a[3] exp = abs(exp-1) return f"{a}e{b[0]}{exp:02d}" a = a[0]+a[2]+a[3]+a[1] exp = abs(exp-2) return f"{a}e{b[0]}{exp:02d}"for val in (5.34e-01, 2.95e+03, 5.34e-02, 2.95e+04, 5.34e-03, 2.95e+06): print( f"{val:3.2e} ->", convert(val) )Output:5.34e-01 -> 534.e-032.95e+03 -> 2.95e+035.34e-02 -> 53.4e-032.95e+04 -> 29.5e+035.34e-03 -> 5.34e-032.95e+06 -> 2.95e+06 |
How to get the div before a specific div with css selector There is probably a better way to do this, but I just need this to work for now before I can come up with a better solution.Im working on a webscraping application with Python and BeautifulSoup. I need to grab a specific div, but the placement of that div changes slightly on different pages (sometimes its the 3rd, sometimes the 4th, ect). There are no class tags or id tags on the div I want, but I did notice how there was always a div directly after the one I want, and that one has a id tag. It looks something like this:<div id="main-container"> <div></div> <div></div> <div>The div I want</div> <div id="point"></div> <div></div></div>So im looking for something like this:div#main-container > div:item-before(#point)Is there any easy way to do this in CSS, or do I have to come up with a better solution? | Find specific div using id or class and call find_previous() to get appropriate taghtml="""<div id="main-container"> <div></div> <div></div> <div>The div I want</div> <div id="point"></div> <div></div></div>"""soup=BeautifulSoup(html,"html.parser")soup.find("div",attrs={"id":"main-container"}).find("div",attrs={"id":"point"}).find_previous()Output:<div>The div I want</div> |
scrape sports reference table I have tried the following script to make to grab the table on the webpage.from bs4 import BeautifulSoupimport pandas as pdurl = 'https://www.sports-reference.com/cfb/play-index/rivals.cgi?request=1&school_id=penn-state&opp_id=purdue'headers = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.106 Safari/537.36'}pageTree = requests.get(url, headers=headers)soup = BeautifulSoup(pageTree.content, 'html.parser')soup.find('tbody')However, the table is not able to be pulled. Not even a "pd.read_html" line works. Is there a reason for that? | The desired table data is under html comment. By removing the comment,you can extract the table data using pandas only.import pandas as pdimport requestsfrom bs4 import BeautifulSoupurl= 'https://www.sports-reference.com/cfb/play-index/rivals.cgi?request=1&school_id=penn-state&opp_id=purdue'res = requests.get(url).text.replace('<!--', '').replace('-->', '')soup =BeautifulSoup(res,'lxml')table = soup.select_one('#div_results')df = pd.read_html(str(table))[0]d = df.droplevel(0, axis=1)print(d)Output: G Date Day School Unnamed: 4_level_1 Opponent ... Diff W L T Streak Notes0 19 2019-10-05 Sat Penn State (12) NaN Purdue ... 28 15 3 1 W 9 NaN1 18 2016-10-29 Sat Penn State (24) @ Purdue ... 38 14 3 1 W 8 NaN2 17 2013-11-16 Sat Penn State NaN Purdue ... 24 13 3 1 W 7 NaN3 16 2012-11-03 Sat Penn State @ Purdue ... 25 12 3 1 W 6 NaN4 15 2011-10-15 Sat Penn State NaN Purdue ... 5 11 3 1 W 5 NaN5 14 2008-10-04 Sat Penn State (6) @ Purdue ... 14 10 3 1 W 4 NaN6 13 2007-11-03 Sat Penn State NaN Purdue ... 7 9 3 1 W 3 NaN7 12 2006-10-28 Sat Penn State @ Purdue ... 12 8 3 1 W 2 NaN8 11 2005-10-29 Sat Penn State (11) NaN Purdue ... 18 7 3 1 W 1 NaN9 10 2004-10-09 Sat Penn State NaN Purdue (9) ... -7 6 3 1 L 2 NaN10 9 2003-10-11 Sat Penn State @ Purdue (18) ... -14 6 2 1 L 1 NaN11 8 2000-09-30 Sat Penn State NaN Purdue (22) ... 2 6 1 1 W 6 NaN12 7 1999-10-23 Sat Penn State (2) @ Purdue (16) ... 6 5 1 1 W 5 NaN13 6 1998-10-17 Sat Penn State (12) NaN Purdue ... 18 4 1 1 W 4 NaN14 5 1997-11-15 Sat Penn State (6) @ Purdue (19) ... 25 3 1 1 W 3 NaN15 4 1996-10-12 Sat Penn State (10) NaN Purdue ... 17 2 1 1 W 2 NaN16 3 1995-10-14 Sat Penn State (20) @ Purdue ... 3 1 1 1 W 1 NaN17 2 1952-09-27 Sat Penn State NaN Purdue ... 0 0 1 1 T 1 NaN18 1 1951-11-03 Sat Penn State @ Purdue ... -28 0 1 0 L 1 NaN[19 rows x 16 columns] |
Can't import a class from a python package I created a private python package with this structure: python_package/ utils/ __init__.py module1.py module2.py. Inside module1.py there is a class Class1. Now when I download this package in another project using pip, I can't import Class1 using from utils import Class1. Am I missing something? Also, the __init__.py file contains the following lines: from .module1 import * and from .module2 import * | If you can't access the class directly, import the module and access the class through it: from utils import module1, then obj = module1.Class1(). Also check that module1.py does not define an __all__ list that leaves Class1 out — from .module1 import * in __init__.py only re-exports the names listed in __all__ (or, when __all__ is absent, all names that do not start with an underscore), so from utils import Class1 should work once Class1 is actually exported.
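For completeness, the class can also be re-exported explicitly, which makes from utils import Class1 work regardless of __all__ — a minimal sketch of the package's __init__.py:

    # utils/__init__.py
    from .module1 import Class1
    from .module2 import *

    # in the project that installed the package:
    # from utils import Class1
    # obj = Class1()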
Adding an increment to duplicates within a python dataframe I'm looking to concatenate two columns in data frame and, where there are duplicates, append an integer number at the end. The wrinkle here is that I will keep receiving feeds of data and the increment needs to be aware of historical values that were generated and not reuse them.I've been trying to do this with an apply function but I'm having issues when there are duplicates within a single received data set and I just can't wrap my head around a way to do this without iterating through the data frame (which is generally frowned upon).I've gotten this far:import pandas as pddef gen_summary(color, car, blacklist): exists = True increment = 0 summary = color + car while exists: if summary in blacklist: increment += 1 summary = color + car + str(increment) # Append increment if in burn list else: exists = False # Exit this loop return summarydef main(): blacklist = ['RedToyota', 'BlueVolkswagon', 'BlueVolkswagon1', 'BlueVolkswagon2'] df = pd.DataFrame( {'color': ['Red', 'Blue', 'Blue', 'Green'], 'car': ['Toyota', 'Volkswagon', 'Volkswagon', 'Hyundai'], 'summary': ['', '', '', '']} ) #print(df) df["summary"] = df.apply(lambda x: gen_summary(x['color'], x['car'], blacklist), axis=1) print(df)if __name__ == "__main__": main()Output: color car summary0 Red Toyota RedToyota11 Blue Volkswagon BlueVolkswagon32 Blue Volkswagon BlueVolkswagon33 Green Hyundai GreenHyundaiNote that BlueVolkswagon1 and BlueVolkswagon2 were used in previous data feeds so it has to start from 3 here. The real issue is that there are duplicate BlueVolkswagon values in just this data set so it doesn't increment properly and duplicates BlueVolkswagon3 because I can't update the history in the middle of applying a function to the entire data set.Is there some elegant pythonic way to do this that I can't wrap my head around or is this a scenario where iterating through the data frame actually does make sense? | I'm not completely sure what you want to achieve, but you can update blacklist in the process. blacklist is just a pointer to the actual list data. If you slightly modify gen_summary by adding blacklist.append(summary) before the return statementdef gen_summary(color, car, blacklist): ... exists = False # Exit this loop blacklist.append(summary) return summaryyou will get following result color car summary0 Red Toyota RedToyota11 Blue Volkswagon BlueVolkswagon32 Blue Volkswagon BlueVolkswagon43 Green Hyundai GreenHyundaiGrouping would be a bit more efficient. This should produce the same result:def gen_summary(ser, blacklist): color_car = ser.iat[0] summary = color_car increment = 0 exists = True while exists: if summary in blacklist: increment += 1 summary = color_car + str(increment) # Append increment if in burn list else: exists = False # Exit this loop return ([color_car + ('' if increment == 0 else str(increment))] + [color_car + str(i + increment) for i in range(1, len(ser))])df['summary'] = df['color'] + df['car']df['summary'] = df.groupby(['color', 'car']).transform(gen_summary, blacklist)Is that the result you are looking for? 
If yes, I'd like to add a suggestion for optimising your approach: Use a dictionary instead of a list for blacklist:def gen_summary(color, car, blacklist): key = color + car num = blacklist.get(key, -1) + 1 blacklist[key] = num return key if num == 0 else f'{key}{num}'blacklist = {'RedToyota': 0, 'BlueVolkswagon': 2}or with groupingdef gen_summary(ser, blacklist): key = ser.iat[0] num = blacklist.get(key, -1) + 1 return ([f'{key}{"" if num == 0 else num}'] + [f'{key}{i + num}' for i in range(1, len(ser))])blacklist = {'RedToyota': 0, 'BlueVolkswagon': 2}df['summary'] = df['color'] + df['car']df['summary'] = df.groupby(['color', 'car']).transform(gen_summary, blacklist)should produce the same result without the while-loop and a much faster lookup. |
Can I optimize this code with an array for it to work on 100 pages in a single loop? I'm fairly new in writing code in Python. I'm trying website parser with Beautiful Soup and it works fine.I need guidance in making my code more optimized because I need to parse 100 pages of a single website one by one, and wanted to do it with a single loop + array of pages.Pages change just by numbers like: https://www.example.com/cat?page1 /cat?page2 /cat?page3 and etc.Please see the code below and please give advice if you can regarding my subject.Thanks a lot in advance <3from __future__ import print_functionfrom re import subfrom bs4 import BeautifulSoupfrom urllib.request import urlopenurlpage= urlopen("https://www.example.com/cat?page1").read()bswebpage=BeautifulSoup(urlpage)results=bswebpage.findAll("div",{'class':"someDiv"})for result in results: print(sub("&ldquo;|.&rdquo;","","".join(result.contents[0:1]).strip())) | You can make a loop there like this:for i in range(1, 101): #goes from 1-100 url = f"https://www.example.com/cat?page{i}" #page1 etc. urlpage= urlopen(url).read() bswebpage=BeautifulSoup(urlpage) results=bswebpage.findAll("div",{'class':"someDiv"}) for result in results: print(sub("&ldquo;|.&rdquo;","","".join(result.contents[0:1]).strip()))The results part you can make an array:all_results = [](... then inside the for) all_results.append(results) |
Python 3 inheritance multiple classes with __str__ How do I use multiple __str__ from other classes? For example:class A: def __str__(self): return "this"class B: def __str__(self): return "that"class C(A,B): def __str__(self): return super(C, self).__str__() + " those" # return something(A) + " " something(B) + " those"cc = C()print(cc)Output: this thoseI would like the output to be: this that thoseThis post is almost a solution (with super()) | With multiple inheritance, super() searches for the first class that has the attribute, as they appear, from left to right. So, it will stop at A. You can access all the parent classes with the special __bases__ attribute, and loop over them, calling str on each one. |
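Building on that answer, the idea can be written out as follows (A and B are the classes from the question); this prints "this that those":

    class A:
        def __str__(self):
            return "this"

    class B:
        def __str__(self):
            return "that"

    class C(A, B):
        def __str__(self):
            # call each parent's __str__ explicitly, left to right
            return " ".join(cls.__str__(self) for cls in type(self).__bases__) + " those"

    print(C())  # this that those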