Scrapy SgmlLinkExtractor and span attribute

I need to match an attribute against some strings. I tried to add the span attribute to SgmlLinkExtractor, but it seems to be ignored since it contains no link. Is there an option to use a callback function that gets called when no link could be extracted via the link extractor? I want to match the page against some strings if and only if the link extractor found no match. Thanks
Try subclassing BaseSpider instead of using CrawlSpider.
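For illustration, here is a minimal sketch of what that could look like (my own code, not from the original answer; the spider name, URL, and search string are made up): with BaseSpider you write the parse callback yourself, so you can fall back to string matching whenever the link extractor returns nothing.

from scrapy.spider import BaseSpider
from scrapy.http import Request
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

class FallbackSpider(BaseSpider):
    name = "fallback_spider"
    start_urls = ["http://example.com"]

    def parse(self, response):
        links = SgmlLinkExtractor().extract_links(response)
        if links:
            # follow extracted links, as CrawlSpider would
            for link in links:
                yield Request(link.url, callback=self.parse)
        else:
            # no links extracted: match the page body against some strings
            if "some string" in response.body:
                self.log("matched linkless page: %s" % response.url)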
A Simple View to Display/Render a Static image in Django

I am trying to find the most efficient way of displaying an image using Django's template context loader. I have a static dir within my app which contains the image 'victoryDance.gif', and an empty static root dir at the project level (with settings.py). Assuming the paths within my urls.py and settings.py files are correct, what is the best view?

from django.shortcuts import HttpResponse
from django.conf import settings
from django.template import RequestContext, Template, Context

def image1(request):
    # good because only the required context is rendered
    html = Template('<img src="{{ STATIC_URL }}victoryDance.gif" alt="Hi!" />')
    ctx = {'STATIC_URL': settings.STATIC_URL}
    return HttpResponse(html.render(Context(ctx)))

def image2(request):
    # good because you don't have to explicitly define STATIC_URL
    html = Template('<img src="{{ STATIC_URL }}victoryDance.gif" alt="Hi!" />')
    return HttpResponse(html.render(RequestContext(request)))

def image3(request):
    # this allows you to load STATIC_URL selectively from the template end
    html = Template('{% load static %}<img src="{% static "victoryDance.gif" %}" />')
    return HttpResponse(html.render(Context(request)))

def image4(request):
    # same pros as image3
    html = Template('{% load static %}<img src="{% get_static_prefix %}victoryDance.gif" />')
    return HttpResponse(html.render(Context(request)))

def image5(request):
    html = Template('{% load static %}{% get_static_prefix as STATIC_PREFIX %}<img src="{{ STATIC_PREFIX }}victoryDance.gif" alt="Hi!" />')
    return HttpResponse(html.render(Context(request)))

Thanks for answers. These views all work!
If you need to render an image, read a bit here: http://www.djangobook.com/en/1.0/chapter11/ and use your version of the following code.

For Django version <= 1.5:

from django.http import HttpResponse

def my_image(request):
    image_data = open("/path/to/my/image.png", "rb").read()
    return HttpResponse(image_data, mimetype="image/png")

For Django 1.5+, mimetype was replaced by content_type (so happy I'm not working with Django anymore):

from django.http import HttpResponse

def my_image(request):
    image_data = open("/path/to/my/image.png", "rb").read()
    return HttpResponse(image_data, content_type="image/png")

Also, there's a better way of doing things! If you need an efficient template engine, use Jinja2. Else, if you are using Django's templating system, to my knowledge you don't need to define STATIC_URL, as it is served to your templates by the "static" context processor:

TEMPLATE_CONTEXT_PROCESSORS = (
    'django.contrib.auth.context_processors.auth',
    'django.core.context_processors.debug',
    'django.core.context_processors.i18n',
    'django.core.context_processors.static',
    'django.core.context_processors.media',
    'django.core.context_processors.request',
    'django.contrib.messages.context_processors.messages',
)
How to create train, test and validation splits in tensorflow 2.0

I am new to TensorFlow, and I have started to use TensorFlow 2.0. I have built a TensorFlow dataset for a multi-class classification problem. Let's call this labeled_ds. I have prepared this dataset by loading all the image files from their respective class-wise directories, following the tutorial here: tensorflow guide to load image dataset.

Now, I need to split labeled_ds into three disjoint pieces: train, validation and test. I was going through the TensorFlow API, but there was no example which allows you to specify the split percentages. I found something in the load method, but I am not sure how to use it. Further, how can I get the splits to be stratified?

# labeled_ds contains multi-class data, which is unbalanced.
train_ds, val_ds, test_ds = tf.data.Dataset.tfds.load(labeled_ds, split=["train", "validation", "test"])

I am stuck here; I would appreciate any advice on how to progress from here. Thanks in advance.
Please refer to the below code, which creates train, test and validation splits using the TensorFlow dataset "oxford_flowers102":

!pip install tensorflow==2.0.0
import tensorflow as tf
print(tf.__version__)
import tensorflow_datasets as tfds

labeled_ds, summary = tfds.load('oxford_flowers102', split='train+test+validation', with_info=True)

labeled_all_length = [i for i, _ in enumerate(labeled_ds)][-1] + 1

train_size = int(0.8 * labeled_all_length)
val_test_size = int(0.1 * labeled_all_length)

df_train = labeled_ds.take(train_size)
df_test = labeled_ds.skip(train_size)
df_val = df_test.skip(val_test_size)
df_test = df_test.take(val_test_size)

df_train_length = [i for i, _ in enumerate(df_train)][-1] + 1
df_val_length = [i for i, _ in enumerate(df_val)][-1] + 1
df_test_length = [i for i, _ in enumerate(df_test)][-1] + 1

print('Original: ', labeled_all_length)
print('Train: ', df_train_length)
print('Validation :', df_val_length)
print('Test :', df_test_length)
Python: convention name for a test

Is there a convention for naming tests in Python when using the unittest module? I know that each method inside a class which inherits from unittest.TestCase should start with test, but I wonder which is better:

1. A short descriptive name without a docstring:

def test_for_a_date_on_a_weekday(self):
    customer_type = "Regular"
    dates = ["16Mar2009(mon)"]
    self.assertEquals(self.hotel_reservation_system.find_cheapest_hotel(customer_type, dates), "Lakewood")

2. A number following the word test, with a docstring which explains the test:

def test_1(self):
    """Tests the cheapest hotel when the date is on a weekday."""
    customer_type = "Regular"
    dates = ["16Mar2009(mon)"]
    self.assertEquals(self.hotel_reservation_system.find_cheapest_hotel(customer_type, dates), "Lakewood")

Which option is preferable, and if neither of them, what should I use?
Generally it is preferable to increase readability by:

- choosing an adequate name
- describing how it works

Choose your name so that it is short and descriptive. For readability, use snake_case, for example: test_week_date.

Always include a docstring in your function. This allows the reader to get all necessary information if the name isn't clear enough, or if they don't really understand what the method does or how it does it.

Conclusion: a short, descriptive (snake_case) name with a docstring. A sketch combining both follows.
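A minimal sketch (my own, reusing the names from the question; the method name is my choice, not prescribed by any convention):

def test_cheapest_hotel_on_weekday(self):
    """A regular customer on a weekday should get Lakewood as the cheapest hotel."""
    customer_type = "Regular"
    dates = ["16Mar2009(mon)"]
    self.assertEquals(
        self.hotel_reservation_system.find_cheapest_hotel(customer_type, dates),
        "Lakewood",
    )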
Memory error while generating the OpenStreetMap tiles from generate_tiles.py

I am facing weird behaviour from Python. When I set a small value for the bounds, I am able to generate tiles for a small portion, but when I set the bound value to a large number like 60232323.73, I get a memory error in generate_tiles.py. Please help with this.
6191256.42, 842455.88, 11502754.24, 4218918.81 is not a valid bounding box. The latitude (2nd and 4th parameter) must be between -90.0 and 90.0 and the longitude (1st and 3rd parameter) must be between -180.0 and 180.0.
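As a quick sanity check (my own sketch, not part of generate_tiles.py), you can validate a (min_lon, min_lat, max_lon, max_lat) bounding box before rendering, so out-of-range values fail fast instead of exhausting memory:

def is_valid_bbox(min_lon, min_lat, max_lon, max_lat):
    # longitudes must be in [-180, 180], latitudes in [-90, 90], min <= max
    return (-180.0 <= min_lon <= max_lon <= 180.0
            and -90.0 <= min_lat <= max_lat <= 90.0)

print(is_valid_bbox(6191256.42, 842455.88, 11502754.24, 4218918.81))  # False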
Merge two data-sets in Python Pandas

I have two datasets in the below format and want to merge them into a single dataset based on City+Age+Gender. Thanks in advance.

Dataset1:

         City    Age  Gender              Source  Count
0  California  15-24  Female  Amazon Prime Video  14629
1  California  15-24  Female             Fubo TV   3840
2  California  15-24  Female                Hulu  54067
3  California  15-24  Female             Netflix  11713
4  California  15-24  Female            Sling TV  10642

Dataset2:

         City    Age  Gender            Source  Feeds
0  California  15-24  Female             Blogs    150
1  California  15-24  Female        Customsite     57
2  California  15-24  Female       Discussions     28
3  California  15-24  Female  Facebook Comment    555
4  California  15-24  Female           Google+     19

Expected resulting dataset:

       City    Age  Gender              Source  Count
 California  15-24  Female  Amazon Prime Video  14629
 California  15-24  Female             Fubo TV   3840
 California  15-24  Female                Hulu  54067
 California  15-24  Female             Netflix  11713
 California  15-24  Female            Sling TV  10642
 California  15-24  Female               Blogs    150
 California  15-24  Female          Customsite     57
 California  15-24  Female         Discussions     28
 California  15-24  Female    Facebook Comment    555
 California  15-24  Female             Google+     19

Note: Feeds/Count signify the same meaning, so it is okay to have either of them as the column name in the merged dataset.
Use pandas.concat with renamed columns to align the columns - you need the same columns in both DataFrames:

df = pd.concat([df1, df2.rename(columns={'Feeds':'Count'})], ignore_index=True)
print (df)
         City    Age  Gender              Source  Count
0  California  15-24  Female  Amazon Prime Video  14629
1  California  15-24  Female             Fubo TV   3840
2  California  15-24  Female                Hulu  54067
3  California  15-24  Female             Netflix  11713
4  California  15-24  Female            Sling TV  10642
5  California  15-24  Female               Blogs    150
6  California  15-24  Female          Customsite     57
7  California  15-24  Female         Discussions     28
8  California  15-24  Female    Facebook Comment    555
9  California  15-24  Female             Google+     19

Alternative with DataFrame.append - not a pure Python append:

df = df1.append(df2.rename(columns={'Feeds':'Count'}), ignore_index=True)
print (df)

This prints the same output as above.
python 2.7 wand: UnicodeDecodeError: (Error in get_font_metrics)

I am getting the error "UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 17: ordinal not in range(128)" when I try to merge the image "La Pocatière.png". Python 2.7.11:

bg_img = Image(filename='C:/Pocatière.png')
bg_img.resize(1200, 628)
bg_img.composite('C:/test.png', left=0, top=0)

When I print, I can see the right unicode:

>>> bg_img
u'La Pocati\xe8re.png'
>>> print bg_img
La Pocatière.png

Not sure how I can bypass this issue?

Answer: After doing lots of research, and in discussion with my colleague, we were able to solve this issue by setting:

text_encoding = 'utf-8'

For some reason wand wasn't able to set it automatically.
Is this Python v2 or v3? In case this is Python version 2 (which I think it is), then you might be better off calling Image(filename=u'C:/Pocatière.png') with a u'' prefix; you can also notice this in the working sample, where it states u'La Pocati\xe8re.png'.
Retaining longest consecutive occurrence that does not equal a specific value

I have a df like so:

Value
0
1
3
-999
4
5
6
2
7
8
9
-999
3
2
-999
1

and I want to retain the longest run of consecutive values in the dataframe that are NOT -999, which for this example would give me this:

Value
4
5
6
2
7
8
9

I have multiple dataframes (originally csv files) that have the -999 values in different locations, and I would like to apply the same method to all dataframes.
You can do a cumsum() on the condition series, which gives a unique groupId for each consecutive sequence from one -999 to another. Then finding the groupId with the maximum length and filtering on it should give the desired output:

df['groupId'] = (df['Value'] == -999).cumsum()
df.Value[df.groupId == df.groupId.value_counts().idxmax()][1:]

# 4     4
# 5     5
# 6     6
# 7     2
# 8     7
# 9     8
# 10    9
# Name: Value, dtype: int64
Add a value if this value doesn't exist in dictionary

I have a defaultdict. I loop through many strings and add them to the dictionary under a numeric key, but only if the value is not already in the dictionary. So my code looks like this:

from collections import defaultdict

strings = ["val1", "val2", "val2", "val3"]
my_dict = defaultdict(list)
key = 0

for string in strings:
    if string not in my_dict.itervalues():
        my_dict[key].append(string)
        key += 1

print my_dict

but it seems not to work, because all of the strings are added to the dictionary, like this:

defaultdict(<type 'list'>, {0: ['val1'], 1: ['val2'], 2: ['val2'], 3: ['val3']})

'val2' shouldn't be added twice, and it should look like this:

defaultdict(<type 'list'>, {0: ['val1'], 1: ['val2'], 2: ['val3']})

What am I doing wrong?
Notice that my_dict.itervalues() yields lists in your case, so string not in my_dict.itervalues() always returns True, as you can see from the following code:

>>> "val2" not in [["val1"], ["val2"]]
True

To get the desired result, flatten the list of lists into a single list using itertools.chain.from_iterable:

>>> import itertools
>>> "val2" not in itertools.chain.from_iterable([["val1"], ["val2"]])
False

The full source code for your case:

from collections import defaultdict
import itertools

strings = ["val1", "val2", "val2", "val3"]
my_dict = defaultdict(list)
key = 0

for string in strings:
    # flatten the list of lists into a single list
    if string not in itertools.chain.from_iterable(my_dict.values()):
        my_dict[key].append(string)
        key += 1

print(my_dict)

# Output
defaultdict(<type 'list'>, {0: ['val1'], 1: ['val2'], 2: ['val3']})
serving media files with dj-static in heroku

I'm trying to serve media files that are registered in django-admin. When accessing an image via the API, I get a 404 Not Found error. I configured it as the documentation recommends, but on Heroku it does not work.

settings.py:

import os
from decouple import config, Csv
from dj_database_url import parse as dburl

BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

SECRET_KEY = config('SECRET_KEY')

# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = config('DEBUG', default=False, cast=bool)

ALLOWED_HOSTS = config('ALLOWED_HOSTS', default=[], cast=Csv())

default_dburl = 'sqlite:///' + os.path.join(BASE_DIR, 'db.sqlite3')
DATABASES = {
    'default': config('DATABASE_URL', default=default_dburl, cast=dburl),
}

(...)

STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')
MEDIA_URL = '/media/'
MEDIA_ROOT = os.path.join(BASE_DIR, 'media')

wsgi.py:

import os
from dj_static import Cling, MediaCling
from django.core.wsgi import get_wsgi_application

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "integrafundaj.settings")
application = Cling(MediaCling(get_wsgi_application()))

requirements.txt:

dj-database-url==0.4.2
dj-static==0.0.6
Django==1.11.3
djangorestframework==3.7.7
easy-thumbnails==2.4.2
olefile==0.44
Pillow==4.3.0
python-decouple==3.1
pytz==2017.3
static3==0.7.0
Unidecode==0.4.21
gunicorn==19.4.1
psycopg2==2.7.1
I had the same issue and fixed it by changing the path in models.py to a different one. It was configured to access images through media/images/img.jpg, but the page using dj-static was requesting them from the same folder structure as static files, which should be located at myapp/media/images/img.jpg (static files were the same thing, but with static instead of media). After this change, and after re-uploading the images to the new folder, everything worked fine! (Remember to change myapp to your app's name.)
Algorithm to sum/stack values from a time series graph where data points don't match on time

I have a graphing/analysis problem I can't quite get my head around. I can do it by brute force, but it's too slow; maybe someone has a better idea, or knows of a speedy library for Python?

I have 2+ time series data sets (x, y) that I want to aggregate (and subsequently plot). The issue is that the x values across the series don't match up, and I really don't want to resort to duplicating values into time bins.

So, given these 2 series:

S1: (1;100) (5;100) (10;100)
S2: (4;150) (5;100) (18;150)

When added together, they should result in:

ST: (1;100) (4;250) (5;200) (10;200) (18;250)

Logic:

x=1  s1=100, s2=None, sum=100
x=4  s1=100, s2=150,  sum=250 (note s1 value carried over from the previous point)
x=5  s1=100, s2=100,  sum=200
x=10 s1=100, s2=100,  sum=200
x=18 s1=100, s2=150,  sum=250

My current thinking is to iterate a sorted list of keys (x), keep the previous value for each series, and query each set for whether it has a new y for the x. Any ideas would be appreciated!
Something like this:

def join_series(s1, s2):
    S1 = iter(s1)
    S2 = iter(s2)
    value1 = 0
    value2 = 0
    time1, next1 = next(S1)
    time2, next2 = next(S2)
    end1 = False
    end2 = False
    while True:
        time = min(time1, time2)
        if time == time1:
            value1 = next1
            try:
                time1, next1 = next(S1)
            except StopIteration:
                end1 = True
                time1 = time2
        if time == time2:
            value2 = next2
            try:
                time2, next2 = next(S2)
            except StopIteration:
                end2 = True
                time2 = time1
        yield time, value1 + value2
        if end1 and end2:
            # raising StopIteration inside a generator is an error since
            # PEP 479; a plain return ends the generator cleanly
            return

S1 = ((1, 100), (5, 100), (10, 100))
S2 = ((4, 150), (5, 100), (18, 150))

for result in join_series(S1, S2):
    print(result)

It basically keeps the current value of S1 and S2, together with the next value of S1 and S2, and steps through them based on which has the lowest "upcoming time". It should handle lists of different lengths too, and it uses iterators all the way, so it should be able to handle massive data series.
How can I modify and remove special characters in keys of Python2 dictionary

I am trying to get rid of special characters in Python dictionary keys, and to add the year from the key to its corresponding value if the year exists:

{'New Year Day 2019\\xa0': 'Tuesday, January 1', 'Good Friday': 'Friday, March 30', 'New Year Day 2018\\xa0': 'Monday, January 1'}

The keys and values are all strings. I want this to look like the following:

{'New Year Day': 'Tuesday, January 1, 2019', 'Good Friday': 'Friday, March 30', 'New Year Day': 'Monday, January 1, 2018'}

I have tried to remove \xa0 but was unsuccessful:

for key in data:
    key.replace('\xa0', '')
    print key

I think I will need to use the re.search(r'[12]\d{3}', key).group(0) regex for getting the year. But how will I remove it from the keys?
If it's the special character "\\xa0" you are trying to remove from the keys, try this (iterate over a copy of the keys, since you can't change a dict's size while iterating over it):

data = {'New Year Day 2019\\xa0': 'Tuesday, January 1', 'Good Friday': 'Friday, March 30', 'New Year Day 2018\\xa0': 'Monday, January 1'}

for i in list(data):
    if "\\xa0" in i:
        data[i.replace("\\xa0", "")] = data.pop(i)

print data

Hope this helped.
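For the second half of the question (moving the year into the value), here is a sketch of my own using the regex the question suggested. Note that a Python dict cannot hold two identical 'New Year Day' keys, so the desired output shown above is not literally achievable; the later entry wins:

import re

data = {'New Year Day 2019\\xa0': 'Tuesday, January 1',
        'Good Friday': 'Friday, March 30',
        'New Year Day 2018\\xa0': 'Monday, January 1'}

result = {}
for key, value in data.items():
    key = key.replace('\\xa0', '').strip()
    match = re.search(r'[12]\d{3}', key)
    if match:
        year = match.group(0)
        # strip the year from the key and append it to the value
        key = key.replace(year, '').strip()
        value = '%s, %s' % (value, year)
    result[key] = value

print result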
Original files are automatically deleted by the process while compiling code

I have written code in Python to convert DICOM (.dcm) data into a CSV file. However, if I run the code more than once on my database directory, the data gets lost/deleted. I searched the recycle bin but could not find the deleted data, and I am not aware of what went wrong. Is there anything wrong with my code? Any suggestions are highly appreciated. Here is my code:

import xlsxwriter
import os.path
import sys
import dicom
import xlrd
import csv

root = input("Enter Directory Name: ")
#path = os.path.join(root, "targetdirectory")
i = 1
for path, subdirs, files in os.walk(root):
    for name in files:
        os.rename(os.path.join(path, name), os.path.join(path, 'MR000' + str(i) + '.dcm'))
        i = i + 1

dcm_files = []
for path, dirs, files in os.walk(root):
    for names in files:
        if names.endswith(".dcm"):
            dcm_files.append(os.path.join(path, names))
print(dcm_files)

with open('junk/test_0.csv', 'w', newline='') as csvfile:
    spamwriter = csv.writer(csvfile, delimiter=',', quotechar='|', quoting=csv.QUOTE_MINIMAL)
    spamwriter.writerow(["Folder Name", "File Name", "PatientName", "PatientID",
                         "PatientBirthDate", "SliceThickness", "Rows"])
    for dcm_file in dcm_files:
        ds = dicom.read_file(dcm_file)
        fileName = dcm_file.split("/")
        spamwriter.writerow([fileName[1], fileName[2],
                             ds.get("PatientName", "None"),
                             ds.get("PatientID", "None"),
                             ds.get("PatientBirthDate", "None"),
                             ds.get("SliceThickness", "None"),
                             ds.get("Rows", "None")])
You have something like the following scenario: after the 1st iteration, you end up with the files MR0001.dcm, MR0002.dcm, MR0003.dcm, ... In the 2nd iteration, the following renames happen:

os.rename('some_file', 'MR0001.dcm')
os.rename('MR0001.dcm', 'MR0002.dcm')
os.rename('MR0002.dcm', 'MR0003.dcm')
os.rename('MR0003.dcm', 'MR0004.dcm')
...

So at the end there is only the file 'MR0004.dcm'. Add the following line just below the renaming:

print(os.path.join(path, name), '-->', os.path.join(path, 'MR000' + str(i) + '.dcm'))

Then you will see exactly which files are renamed.
Saving every row of a pandas dataframe to a txt file

So, I open a dataset from an HDF5 file like below:

import pandas as pd
import numpy as np

data1 = pd.read_hdf('sport.hdf5', usecols=['category','title','images','link','date','desc'])

It will give me output like below:

    category                                              title  images \
0      raket  Kevin/Marcus Langsung Fokus ke Kejuaraan Dunia...     NaN
1         f1         Vettel Menangi GP Inggris yang Penuh Drama     NaN
2     others  Semangat 'Semakin di Depan' Warnai Kejuaraan M...     NaN
5  sepakbola             Roberto Martinez Mengejar Status Elite     NaN
6  sepakbola  Nyaris Separuh Gol Piala Dunia 2018 Lahir dari...     NaN

                                                link \
0  https://sport.detik.com/raket/d-4104834/kevinm...
1  https://sport.detik.com/f1/d-4104788/vettel-me...
2  https://sport.detik.com/sport-lain/d-4105193/s...
5  https://sport.detik.com/sepakbola/berita/d-410...
6  https://sport.detik.com/sepakbola/berita/d-410...

                             date \
0   Senin 09 Juli 2018, 00:31 WIB
1  Minggu 08 Juli 2018, 22:35 WIB
2   Senin 09 Juli 2018, 11:15 WIB
5   Senin 09 Juli 2018, 12:35 WIB
6   Senin 09 Juli 2018, 12:51 WIB

                                               desc
0  - Setelah , Kevin Sanjaya/Marcus Gideon suda...
1  - Driver Ferrari keluar sebagai pemenang Gr...
2  - Kejuaraan Dunia Motocross Grand Prix (MXGP)...
5  - bisa jadi mulai kerap diperbinc...
6  - Berakhirnya perempatfinal Piala D...

Now, I need to save every single row's desc to a file named after its title. I'm using the code below:

np.savetxt(data1['title'] + '.txt', data1['desc'], fmt='%s')

but it comes out with a result like this:

Traceback (most recent call last):
  File "index.py", line 23, in <module>
    np.savetxt(data1['title']+'.txt', data1['desc'], fmt='%s')
  File "/home/adminsvr/tf-py3/lib/python3.5/site-packages/numpy/lib/npyio.py", line 1187, in savetxt
    if fname.endswith('.gz'):
  File "/home/adminsvr/tf-py3/lib/python3.5/site-packages/pandas/core/generic.py", line 3614, in __getattr__
    return object.__getattribute__(self, name)
AttributeError: 'Series' object has no attribute 'endswith'

Any solution or ideas?
After hours of work, here's the idea that solved the problem.

First, iterate over the rows of the data1 dataframe with iterrows, which returns (index, row) pairs. To make a file for every row, build the filename from a directory followed by row['title']. The result/ directory does not exist yet, so use os.makedirs to create it. Finally, write row['desc'] into the txt file.

Here we go:

import os

for idx, row in data1.iterrows():
    filename = "result/" + str(row['title']) + ".txt"
    os.makedirs(os.path.dirname(filename), exist_ok=True)
    with open(filename, "w+") as f:
        f.write(row['desc'])
    # no explicit f.close() needed: the with block closes the file
    print(idx)
What does the second argument of the read command mean?

I have this Python code:

for name, age in read(file, ('name','age')):

Could anybody please explain what it means?
('name','age') is a tuple, an immutable sequence type, similar to a list. If you're asking what it means in regard to the read() function, I'm sure that can be found in the specific module's documentation, because read is not a built-in function, last I heard :p.
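As an illustration of the loop itself (my own sketch; the real read() presumably yields one tuple of field values per record, which is what the for statement unpacks into name and age):

def read(records, fields):
    # hypothetical stand-in for the real read()
    for rec in records:
        yield tuple(rec[f] for f in fields)

people = [{'name': 'Ada', 'age': 36}, {'name': 'Alan', 'age': 41}]
for name, age in read(people, ('name', 'age')):
    print(name, age)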
Split list into lists containing only 1s

I have this list in Python:

[100, 96, 1, 1, 1, 2, 4, 1, 1, 1, 1, 55, 1]

How could I split the given list (and other lists containing 1s) so that I get sub-lists containing only neighbouring 1s? The result would be:

[[1, 1, 1], [1, 1, 1, 1], [1]]

I guess I am looking to build a function that would somehow detect the "outer" 1s as the points of list separation.
I guess there could be an approach using maybe itertools' takewhile/dropwhile or something, but this simple for loop does it:

l = [100, 96, 1, 1, 1, 2, 4, 1, 1, 1, 1, 55, 1]

res = []
tmp = []
for i in l:
    if i == 1:
        tmp.append(i)
    elif tmp:
        res.append(tmp)
        tmp = []
if tmp:
    res.append(tmp)

print(res)

Output:

[[1, 1, 1], [1, 1, 1, 1], [1]]
Minimum required hardware component to install tensorflow-gpu in python

I have tried many PCs with different hardware capabilities to install TensorFlow on GPU; they are either incompatible, or compatible but stuck at some point. I would like to know the minimum hardware required to install tensorflow-gpu. I would also like to ask about some specific hardware:

Can I use a Core i5 instead of a Core i7?
Is a 4 GB GPU enough for training the dataset?
Is 8 GB of RAM enough for training and evaluating the dataset?

With most thanks.
TensorFlow (TF) GPU 1.6 and above requires a CUDA compute capability (ccc) of 3.5 or higher and requires AVX instruction support.

https://www.tensorflow.org/install/gpu#hardware_requirements
https://www.tensorflow.org/install/pip#hardware-requirements

Therefore you would want to buy a graphics card that has a ccc above 3.5. Here's a link that shows the ccc for various NVIDIA graphics cards: https://developer.nvidia.com/cuda-gpus

However, if your CUDA compute capability is below 3.5, you have to compile TF from source yourself. This procedure may or may not work depending on the build flags you choose while compiling, and it is not straightforward. In my humble opinion, the simplest way is to use the pre-built TF-GPU binaries to install TF GPU.

To answer your questions: yes, you can use TF comfortably on an i5 with a 4 GB graphics card and 8 GB of RAM. The training time may take longer though, depending on the task at hand.

In summary, the main hardware requirement to install TF GPU is getting an NVIDIA graphics card with a CUDA compute capability of more than 3.5; the more the merrier. Note that TF officially supports only NVIDIA graphics cards.
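Once installed, a quick check (my addition, not from the answer) to confirm that TensorFlow actually sees the GPU:

import tensorflow as tf

# TF 2.1+; on TF 2.0 use tf.config.experimental.list_physical_devices('GPU')
print(tf.config.list_physical_devices('GPU'))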
How to compare two dates that are datetime64[ns] and choose the newest

I have a dataset and I want to compare two dates, both datetime64[ns]; if one is the newest, I need to choose the other.

Here is my code:

df_analisis_invertido['Fecha de la primera conversion'] = df_analisis_invertido.apply(
    lambda x: x['Fecha de creacion']
    if df_analisis_invertido['Fecha de la primera conversion'] < df_analisis_invertido['Fecha de creacion']
    else x['Fecha de la primera conversion'],
    axis=1)

This is the error:

ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
The approach you chose is almost fine, except for the comparison of the Series objects. If you replace them with x instead of df_analisis_invertido, it should work.

Here is an example:

import pandas as pd

data = {'t_first_conv': [5, 21, 233],
        't_creation': [3, 23, 234]}
df = pd.DataFrame(data)
df['t_first_conv'] = pd.to_datetime(df['t_first_conv'])
df['t_creation'] = pd.to_datetime(df['t_creation'])
print(df)

# Change the entry of the first conversion column in case it is older/smaller
# than the creation value (timestamps)
# Expected:
# 0:   5    3
# 1:  23   23
# 2: 234  234
df['t_first_conv'] = df.apply(
    lambda x: x['t_creation'] if x['t_first_conv'] < x['t_creation'] else x['t_first_conv'],
    axis=1)
print(df)
Transform a python nested list into an HTML table

I want to transform a list of rows in Python into an HTML table to ultimately send in an email body. Let's say my list of rows is stored as the variable req_list (representing the imported data from a .csv file, for example) and looks like:

[['Email', 'Name', 'Name ID', 'Policy ID', 'Policy Number', 'Policy Effective Date'],
 ['[email protected]', 'My Name', '5700023153486', '57000255465455', 'C4545647216', '1/1/2017']]

You can probably guess that the first row in the list contains column headers. I want to generate an HTML table out of this list. Assume for this example that there could be any number of additional rows to add to the table, but no need to teach me about looping to handle this, or about error handling if there are 0.

How can I change this into an HTML table formatted relatively well? (e.g. black and white, with gridlines perhaps). I do not know HTML very well and have tried the following:

for thing in req_list:
    for cell in thing:
        req_tbl = req_tbl + "<tr><td>" + str(cell)

which yields basically each "cell" in the list printed one after another on a single line when read in my email inbox (the code sends this req_tbl variable to myself in an email):

EmailNameName IDPolicy ID

and so on. How can I get this formatted into a proper HTML table? Furthermore, is there a way I can read the "html" text contained in req_tbl within Python so I can check my work? I have used urllib to open pages/files but can't seem to pass it a variable.
You can use pandas for that; it's only two lines of code:

import pandas as pd

df = pd.DataFrame(req_list[1:], columns=req_list[0])
df.to_html()

'<table border="1" class="dataframe">\n  <thead>\n    <tr style="text-align: right;">\n      <th></th>\n      <th>Email</th>\n      <th>Name</th>\n      <th>Name ID</th>\n      <th>Policy ID</th>\n      <th>Policy Number</th>\n      <th>Policy Effective Date</th>\n    </tr>\n  </thead>\n  <tbody>\n    <tr>\n      <th>0</th>\n      <td>[email protected]</td>\n      <td>My Name</td>\n      <td>5700023153486</td>\n      <td>57000255465455</td>\n      <td>C4545647216</td>\n      <td>1/1/2017</td>\n    </tr>\n  </tbody>\n</table>'

You can also read an HTML table back into a DataFrame (note that read_html returns a list of DataFrames, one per table found):

df = pd.read_html(req_tbl)[0]
Can not import opencv in python3 in Raspberry Pi3?

Any solution for this error? Need help :(

I import cv2 in python3:

import cv2

and it results in this:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.5/dist-packages/cv2/__init__.py", line 4, in <module>
    from .cv2 import *
ImportError: libQtTest.so.4: cannot open shared object file: No such file or directory
Use this:

sudo apt install libqt4-test

Reference: RPi-Stackexchange
python vaex groupby with custom function

Is there a way to apply a custom function to a group using the groupby function of a vaex DataFrameArray?

I can do:

df_vaex.groupby(['col_x1','col_x2','col_x3','col_x4'], agg=vaex.agg.mean(df_vaex['col_y']))

But is there a way to do the pandas equivalent of:

df.groupby(['col_x1','col_x2','col_x3','col_x4']).apply(lambda x: my_own_function(x['col_y']))
Unfortunately, not. There's an open issue requesting it, and the Vaex team is thinking about/working on a solution: https://github.com/vaexio/vaex/issues/763
How to combine multiple different numpy arrays along a single common dimension, while setting unique variables as separate dimensions

I have multiple different numpy arrays, all with different shapes and containing different information, but all containing a 'timestamp' axis.

For example, I have 2 arrays, a and b, as follows:

a = np.array([[1, [1,2,3,4,5,6,7,8,9,10]],
              [2, [11,12,13,14,15,16,17,18,19,20]],
              [3, [1,2,3,4,5,6,7,8,9,10]],
              [4, [11,12,13,14,15,16,17,18,19,20]]])

b = np.array([[1, 0],
              [2, 1],
              [3, 1],
              [4, 0]])

I want to combine them to create the following:

([[1, [[1,2,3,4,5,6,7,8,9,10], 0]],
  [2, [[11,12,13,14,15,16,17,18,19,20], 1]],
  [3, [[1,2,3,4,5,6,7,8,9,10], 1]],
  [4, [[11,12,13,14,15,16,17,18,19,20], 0]]])

I have been going in circles and have tried different techniques like vstack and concatenation, as well as a bunch of others, but have not been successful. Any guidance would be gratefully appreciated!
Maybe the previous answer using zip solved it for you, but it works only if the 2 lists have the "index element" in the same order. In case they don't (or if a few indexes are missing), the zip will not work properly.

Try this:

import itertools

[[i[0][0], [i[0][1], i[1][1]]] for i in itertools.product(a, b) if i[1][0] == i[0][0]]

This is basically the same as taking a join on the index element:

[[1, [[1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 0]],
 [2, [[11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 1]],
 [3, [[1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 1]],
 [4, [[11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 0]]]
How to create a column that has the same value per group in Python Pandas?

I currently have a pandas DataFrame with lots of stock tickers in the first column. They are time series, so each ticker appears more than once. In the second column I have a CUSIP code, but this code only appears in the row where the ticker appears first; the following rows do not contain it. I would like to have the same CUSIP code in all the rows that match the same ticker. This is what my dataframe looks like; I want all the NaN values filled with the correct CUSIP so that I get the second dataframe below.

MSFT.OQ     594918104  FY2019  55252000000  United States  USA  1
MSFT.OQ     NaN        FY2018  44501000000  United States  USA  1
MSFT.OQ     NaN        FY2017  42730000000  United States  USA  1
MSFT.OQ     NaN        FY2016  25145000000  United States  USA  1
EFT_pa^E08  449515402  FY2001  6642000      United States  USA  1
EFT_pa^E08  NaN        FY2000  12161000     United States  USA  1
EFT_pa^E08  NaN        FY1999

MSFT.OQ     594918104  FY2019  55252000000  United States  USA  1
MSFT.OQ     594918104  FY2018  44501000000  United States  USA  1
MSFT.OQ     594918104  FY2017  42730000000  United States  USA  1
MSFT.OQ     594918104  FY2016  25145000000  United States  USA  1
EFT_pa^E08  449515402  FY2001  6642000      United States  USA  1
EFT_pa^E08  449515402  FY2000  12161000     United States  USA  1
EFT_pa^E08  449515402  FY1999
Use ffill - to fill NA/NaN values using the forward-fill method:

>>> df.ffill()
            0            1       2             3              4    5    6  7
0     MSFT.OQ  594918104.0  FY2019  5.525200e+10  United States  USA  1.0
1     MSFT.OQ  594918104.0  FY2018  4.450100e+10  United States  USA  1.0
2     MSFT.OQ  594918104.0  FY2017  4.273000e+10  United States  USA  1.0
3     MSFT.OQ  594918104.0  FY2016  2.514500e+10  United States  USA  1.0
4  EFT_pa^E08  449515402.0  FY2001  6.642000e+06  United States  USA  1.0
5  EFT_pa^E08  449515402.0  FY2000  1.216100e+07  United States  USA  1.0
6  EFT_pa^E08  449515402.0  FY1999  1.216100e+07  United States  USA  1.0
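Note that ffill copies whatever value precedes a NaN, so it relies on rows of the same ticker being contiguous. An alternative sketch (my addition; the column names are assumed, since the question's frame shows none) that fills strictly per group:

# take the first non-null CUSIP within each ticker group and broadcast it
df['cusip'] = df.groupby('ticker')['cusip'].transform('first')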
Is there a way to change windows folder thumbnails with Python?

I have hundreds of folders of images on my HDD, and with very few exceptions they each have a cover image that I want to use as their respective folder thumbnails, or at least a memorable first image. Unfortunately, Windows 10 defaults to using two random images in the folder as the thumbnail, and I have to manually select the first image as the thumbnail in the folder properties every single time. Recently Windows automatically wiped the thumbnail cache, and I really don't want to manually reset the thumbnails on these folders.

Is there a way to automate going into a folder's properties, the customize tab, folder pictures, and selecting the first item in the folder every time? Or would I need a hypothetical "Folder.properties.setFolderPicture()" that Windows doesn't have for security reasons? Python is the only language I have any experience with, but if I need another language to do this, I'm willing to try it.
I don't have the reputation to comment, so I pile my answer up here. I feel you are better off using folder icons for this purpose, since nowhere on the Internet could I find a way to programmatically set folder pictures, but I'm sure it's some registry trickery.

import os
from PIL import Image
from configparser import ConfigParser

MAX_SIZE = 256, 256

# name_dot_format / name_dot_ico are placeholders for the source image path
# and the .ico path to write
image = Image.open(name_dot_format)
image.thumbnail(MAX_SIZE)
image.save(name_dot_ico)

ini = ConfigParser()
ini['.ShellClassInfo'] = {'IconResource': f'{name_dot_ico},0'}

try:
    with open('desktop.ini', 'w') as desktop_ini:
        ini.write(desktop_ini)
    os.system("attrib +s +h desktop.ini")
    os.system("attrib +r .")
except PermissionError:
    # Don't mess up the already existing desktop.ini
    os.system("attrib -r .")
    os.system("attrib -s -h desktop.ini")
    with open('desktop.ini', 'a') as desktop_ini:
        ini.write(desktop_ini)
    os.system("attrib +s +h desktop.ini")
    os.system("attrib +r .")
Is it possible to use SQLite on a VPS for a Discord bot?

Is it possible to use SQLite on a VPS as a database? I've been making a Discord bot, and I used SQLite for leveling, warns, changing the prefix, etc.

I don't really want to use JSON as a database, since I'll be making this bot public for everyone's usage, and JSON seems to slow down when the file gets chunky enough. Also, using SQLite seemed easier for me than using JSON. If SQLite doesn't work on a VPS, is there an alternative way of making a database for leveling or other features that require one?
The sqlite3 module is part of the standard Python library, so any standard Ubuntu installation or any VPS with Python installed will not require further installation. If you need to manually install it, use:

sudo apt-get update
sudo apt-get install sqlite3 libsqlite3-dev

Keep in mind that SQLite can only support one writer at a time. The official documentation talks about the pros and cons of SQLite: https://www.sqlite.org/whentouse.html
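For completeness, a minimal sketch (my addition; the table and column names are made up) of the kind of per-guild storage a bot might use with the built-in module:

import sqlite3

conn = sqlite3.connect('bot.db')
conn.execute('CREATE TABLE IF NOT EXISTS prefixes (guild_id INTEGER PRIMARY KEY, prefix TEXT)')
conn.execute('INSERT OR REPLACE INTO prefixes VALUES (?, ?)', (123456789, '!'))
conn.commit()

row = conn.execute('SELECT prefix FROM prefixes WHERE guild_id = ?', (123456789,)).fetchone()
print(row)  # ('!',)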
Python variables format changes inside an IF while the first condition is ok

I am writing a very simple application for calculating a cost, and I want to have a Radiobutton where I can choose the currency. I am wondering what the problem is here, because with the first condition (run in EUR) everything goes well, but with the second condition I get this error:

  File "C:\Users\Iker\AppData\Local\Programs\Python\Python38\lib\tkinter\__init__.py", line 1883, in __call__
    return self.func(*args)
  File "C:\Users\Iker\eclipse-workspace\Prueba2\Prueba2.py", line 128, in <lambda>
    Button_1 = Button(root, text="Calcular", padx=20, pady=10, command=lambda: calcular(Moneda.get()))
  File "C:\Users\Iker\eclipse-workspace\Prueba2\Prueba2.py", line 71, in calcular
    Beneficio_Bruto_EUR = Label(root, width=20, borderwidth=5, text="%.2f€"%Beneficio_Bruto/d)
TypeError: unsupported operand type(s) for /: 'str' and 'float'

I really don't get it, because if there were a problem defining the variables, shouldn't the if and the else behave the same?

def calcular(Moneda):
    a = float(Precio_de_venta.get())
    b = float(Portes.get())
    c = float(Precio_de_compra.get())
    d = float(1.04)

    # Beneficio Bruto
    Beneficio_Bruto = a - c - b - (a*0.1) - ((a*0.029) + 0.35)
    Beneficio_Brutolbl = Label(root, text="Beneficio Bruto")
    Beneficio_Brutolbl.grid(row=3, column=0)

    if Moneda == 0:
        Beneficio_Bruto_EUR = Label(root, width=20, borderwidth=5, text="%.2f€"%Beneficio_Bruto)
        Beneficio_Bruto_EUR.grid(row=3, column=1)
        Beneficio_Bruto_USD = Label(root, width=10, borderwidth=5, text="%.2f USD"%(Beneficio_Bruto*d))
        Beneficio_Bruto_USD.grid(row=3, column=2)
    elif Moneda == 1:
        Beneficio_Bruto_EUR = Label(root, width=20, borderwidth=5, text="%.2f€"%Beneficio_Bruto/d)
        Beneficio_Bruto_EUR.grid(row=3, column=2)
        Beneficio_Bruto_USD = Label(root, width=10, borderwidth=5, text="%.2f USD"%(Beneficio_Bruto))
        Beneficio_Bruto_USD.grid(row=3, column=1)
You are missing parentheses:

Beneficio_Bruto_EUR = Label(root, width=20, borderwidth=5, text="%.2f€" % (Beneficio_Bruto/d))

String formatting is always applied before arithmetic operations:

>>> '%d' % (4 * 2)
'8'
>>> '%d' % 4 * 2
'44'
>>> '%d' % (4 / 2)
'2'
>>> '%d' % 4 / 2
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for /: 'str' and 'int'
Abstract dataclass without abstract methods in Python: prohibit instantiation

Even if a class is inherited from ABC, it can still be instantiated unless it contains abstract methods. Given the code below, what is the best way to prevent an Identifier object from being created, as in Identifier(['get', 'Name'])?

from abc import ABC
from typing import List
from dataclasses import dataclass

@dataclass
class Identifier(ABC):
    sub_tokens: List[str]

    @staticmethod
    def from_sub_tokens(sub_tokens):
        return SimpleIdentifier(sub_tokens) if len(sub_tokens) == 1 else CompoundIdentifier(sub_tokens)

@dataclass
class SimpleIdentifier(Identifier):
    pass

@dataclass
class CompoundIdentifier(Identifier):
    pass
You can create an AbstractDataclass class which guarantees this behaviour, and you can use it every time you have a situation like the one you described.

@dataclass
class AbstractDataclass(ABC):
    def __new__(cls, *args, **kwargs):
        if cls == AbstractDataclass or cls.__bases__[0] == AbstractDataclass:
            raise TypeError("Cannot instantiate abstract class.")
        return super().__new__(cls)

So, if Identifier inherits from AbstractDataclass instead of from ABC directly, modifying __post_init__ will not be needed.

@dataclass
class Identifier(AbstractDataclass):
    sub_tokens: List[str]

    @staticmethod
    def from_sub_tokens(sub_tokens):
        return SimpleIdentifier(sub_tokens) if len(sub_tokens) == 1 else CompoundIdentifier(sub_tokens)

@dataclass
class SimpleIdentifier(Identifier):
    pass

@dataclass
class CompoundIdentifier(Identifier):
    pass

Instantiating Identifier will raise TypeError, but instantiating SimpleIdentifier or CompoundIdentifier will not. And the AbstractDataclass can be re-used in other parts of the code.
only convert to date cells with data

I have a data frame with dates and missing dates:

date
2022-02-02
2022-02-03
-
-

I need to convert to dates only the values different from '-'. I'm using .loc for this, but it is not working:

df.loc[oppty['date'] != '-', 'date'] = pd.to_datetime(df['date'])

in parse raise ParserError("String does not contain a date: %s", timestr)
dateutil.parser._parser.ParserError: String does not contain a date: -
Will this work?

df1 = pd.DataFrame({'date': ['2022-02-02', '2022-02-03', '-', '-']})
pd.to_datetime(df1['date'], errors='coerce')

If you want to keep the '-' values, change 'coerce' to 'ignore'.
Sum possibilities, one loop

Earlier, a lot of wonderful programmers helped me get a function done; however, the instructor wanted it in a single loop, and all the working solutions used multiple loops.

I wrote another program that almost solves the problem. Instead of using a loop to compare all the values, you can use the has_key function to see if a specific key exists. Knowing that rids you of the need to iterate through the dictionary to find matching values, because you can just test whether they match or not.

Again, charCount is just a function that counts the occurrences of each element, enters them into a dictionary, and returns the dictionary.

def sumPair(theList, n):
    for a, b in level5.charCount(theList).iteritems():
        x = n - a
        if level5.charCount(theList).get(a):
            if a == x:
                if b > 1:
                    # this checks that the frequency of the number is greater
                    # than one, so the program wouldn't try to use a single
                    # possibility twice (example: 6+6=12, there could be a
                    # single 6, but it would still return 6+6)
                    return a, x
            else:
                if level5.charCount(theList).get(a) != x:
                    return a, x

print sumPair([6,3,8,3,2,8,3,2], 9)

I need to make this code find the sum without iterating, by seeing if the current element exists in the list of elements.
You can use the collections.Counter function instead of level5.charCount. And I don't know why you need the check if level5.charCount(theList).get(a):. I think it is unneeded, since a is a key you get from level5.charCount(theList).

So I simplified your code:

from collections import Counter

def sumPair(the_list, n):
    for a, b in Counter(the_list).iteritems():
        x = n - a
        if a == x and b > 1:
            return a, x
        if a != x and b != x:
            return a, x

print sumPair([6, 3, 8, 3, 2, 8, 3, 2], 9)

# output
(8, 1)

You can also use a list comprehension, like this:

>>> result = [(a, n-a) for a, b in Counter(the_list).iteritems() if a == n-a and b > 1 or (a != n-a and b != n-a)]
>>> print result
[(8, 1), (2, 7), (3, 6), (6, 3)]
>>> print result[0]  # this is the result you want
(8, 1)
Django 2.2 with 2 domains

I have a Django web app and 2 domains. I want to use these domains for different Django apps. For example:

firstdomain.com -> stuff app
seconddomain.com -> customer app

Is it possible? How should urls.py look?
Django comes with an optional “sites” framework. It’s a hook for associating objects and functionality to particular websites, and it’s a holding place for the domain names and “verbose” names of your Django-powered sites.Use it if your single Django installation powers more than one site and you need to differentiate between those sites in some way.The "sites" framework
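Another common pattern worth mentioning (my own sketch, not from the answer above; the module names are hypothetical) is to pick a URLconf per request host in a small middleware, since Django honors request.urlconf over ROOT_URLCONF:

# project/hosts_middleware.py
class HostURLConfMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        host = request.get_host().partition(':')[0]
        if host == 'seconddomain.com':
            request.urlconf = 'customer.urls'  # URLconf serving the customer app
        else:
            request.urlconf = 'stuff.urls'     # URLconf serving the stuff app
        return self.get_response(request)

Add this middleware to MIDDLEWARE in settings.py, and each domain then gets its own urls.py.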
Django can't call custom django commands with call_command

This is probably a really basic question, but I can't find the answer anywhere for some reason. I created a custom command which I can call from the command line with python manage.py custom_command. I want to run it from elsewhere but don't know how to do so. I have added pages to my INSTALLED_APPS in settings.py. This question: "Django custom command works on command line, but not call_command" is very similar, but I'm not sure what the answer means, and I think it's unrelated. My file structure is:

├── custom_script
│   ├── script.py
│   ├── __init__.py
├── project
│   ├── asgi.py
│   ├── __init__.py
│   ├── settings.py
│   ├── urls.py
│   └── wsgi.py
├── manage.py
├── pages
│   ├── admin.py
│   ├── apps.py
│   ├── forms.py
│   ├── __init__.py
│   ├── management
│   │   ├── commands
│   │   │   ├── __init__.py
│   │   │   └── custom_command.py
│   │   ├── __init__.py
│   ├── migrations
│   │   ├── __init__.py
│   ├── models.py
│   ├── tests.py
│   └── views.py

Content of script.py:

from django.core.management import call_command

call_command('custom_command', 'hi')

Content of custom_command.py:

from django.core.management.base import BaseCommand, CommandError

class Command(BaseCommand):
    def add_arguments(self, parser):
        parser.add_argument('message', type=str)

    def handle(self, *args, **options):
        print('it works')

I want to run python custom_script/script.py, which will call the custom command, but I keep getting: django.core.management.base.CommandError: Unknown command: 'custom_command'. I have isolated the problem to the fact that Django can't see my command: when I run print(management.get_commands()) my custom command is not listed. Additionally, after looking through the Django management code for a while, I noticed the settings.configured variable, which upon checking is False, which means only the default commands are returned when management.get_commands is run. How can I get this to become True? Technically, I could use a subprocess if I really wanted to, but since there is already a call_command feature, I figured I'd try to use it.
Not sure if this will help anyone, but it turns out I was doing this the wrong way. Generally, I don't think my method above will work, because it calls a Django command from outside the Django project, which means the settings will not be configured. My use case was running a Django command in the background on a webserver, using script.py as the file to run the command. If your use case is similar, you should instead call the custom command directly from the command line with python manage.py custom_command; this worked for me, at least.
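For reference, the usual way to make a standalone script configure settings before calling call_command is to point DJANGO_SETTINGS_MODULE at your settings module and call django.setup() first. A sketch of my own (it also assumes the project root is on sys.path, e.g. you run the script from the project directory):

import os
import django
from django.core.management import call_command

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'project.settings')
django.setup()  # populates the app registry so custom commands are discoverable

call_command('custom_command', 'hi')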
i = self.pos[0] is saying TypeError: 'int' object is not subscriptable, line 18 and 19 of my code

I'm trying to build a snake game with pygame by following a video posted by Tech With Tim. I'm at part 3 of the video, and I don't know why it says the object is not subscriptable when it didn't for him.

class cube(object):
    rows = 20
    w = 500

    def __init__(self, start, dirnx=1, dirny=0, color=(255, 0, 0)):
        self.pos = start
        self.dirnx = 1
        self.dirny = 0
        self.color = color

    def move(self, dirnx, dirny):
        self.dirnx = dirnx
        self.dirny = dirny
        self.pos(self.pos[0] + self.dirnx, self.pos[1] + self.dirny)

    def draw(self, surface, eyes=False):
        dis = self.w // self.rows
        i = self.pos[0]
        j = self.pos
        pygame.draw.rect(surface, self.color, (self.pos[0]*dis+1, self.pos[0]*dis+1, dis-2, dis-2))
        if eyes:
            centre = dis // 2
            radius = 3
            circleMiddle = (i*dis+centre-radius, j*dis+8)
            circleMiddle2 = (i*dis+dis-radius*2, j*dis+8)
            pygame.draw.rect(surface, (0, 0, 0), circleMiddle)
            pygame.draw.rect(surface, (0, 0, 0,), circleMiddle2)

This is the class where I'm experiencing the problem. If this information isn't enough, here's the full code I've finished up till now; I sincerely hope someone can help me.

import math
import random
import pygame
import tkinter as tk
from tkinter import messagebox

# (the cube class exactly as shown above)

class snake(object):
    body = []
    turns = {}

    def __init__(self, color, pos):
        self.color = color
        self.head = cube(pos)
        self.body.append(self.head)
        self.dirnx = 0
        self.dirny = 1

    def move(self):
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                pygame.quit()

        keys = pygame.key.get_pressed()

        for key in keys:
            if keys[pygame.K_LEFT]:
                self.dirnx == -1
                self.dirny = 0
                self.turns[self.head.pos[:]] == [self.dirnx, self.dirny]
            elif keys[pygame.K_RIGHT]:
                self.dirnx == 1
                self.dirny = 0
                self.turns[self.head.pos[:]] == [self.dirnx, self.dirny]
            elif keys[pygame.K_UP]:
                self.dirnx == 0
                self.dirny = -1
                self.turns[self.head.pos[:]] == [self.dirnx, self.dirny]
            elif keys[pygame.K_DOWN]:
                self.dirnx == 0
                self.dirny = 1
                self.turns[self.head.pos[:]] == [self.dirnx, self.dirny]

        for i, c in enumerate:
            p = c.pos[:]
            if p in self.turns:
                turn = self.turns[p]
                c.move[turn[1], turn[0]]
                if i == len(self.body) - 1:
                    self.turns.pop(p)
            else:
                if c.dirnx == -1 and c.pos[0] <= 0:
                    c.pos == (c.rows - 1, c.pos[1])
                elif c.dirnx == 1 and c.pos[0] >= c.rows[-1]:
                    c.pos == (0, c.pos[1])
                elif c.dirny == 1 and c.pos[1] >= c.rows[-1]:
                    c.pos == (c.rows[0], c.pos[0])
                elif c.dirny == -1 and c.pos[1] <= 0:
                    c.pos == (c.pos[0], c.rows - 1)
                else:
                    c.move(c.dirnx, c.dirny)

    def reset(self, pos):
        pass

    def addCube(self):
        pass

    def draw(self, surface):
        for i, c in enumerate(self.body):
            if i == 0:
                c.draw(surface, True)
            else:
                c.draw(surface)

def drawGrid(w, rows, surface):
    sizeBtwn = w // rows
    x = 0
    y = 0
    for l in range(rows):
        x = x + sizeBtwn
        y = y + sizeBtwn
        pygame.draw.line(surface, (255, 255, 255), (x, 0), (x, w))
        pygame.draw.line(surface, (255, 255, 255), (0, y), (w, y))

def redrawWindow(surface):
    global rows, width, s
    surface.fill((0, 0, 0))
    s.draw(surface)
    drawGrid(width, rows, surface)
    pygame.display.update()

def randomSnack(rows, item):
    pass

def message_box(subject, content):
    pass

def main():
    global width, rows, s
    width = 500
    rows = 20
    win = pygame.display.set_mode((width, width))
    s = snake((0, 170, 0), 10)
    clock = pygame.time.Clock()

    flag = True
    while flag:
        pygame.event.get()
        pygame.time.delay(50)  # lower this is the faster
        clock.tick(10)  # lower this is the slower
        redrawWindow(win)

main()
You create the snake object as:

s = snake((0, 170, 0), 10)

Inside the snake.__init__ function you create a cube object as:

cube(pos)

where pos is the value 10 you passed to snake.__init__. 10 is indeed an int object, and you can't index into it like a list, tuple or dictionary (it's not subscriptable).
Django: How to compare two querysets and get the difference without including the PK

I don't think the word "difference" is quite correct, because you might think of difference(), but it describes what I am trying to achieve. I apologize if this is a common problem that's already been solved, but I can't find a solution or a dumbed-down explanation of it.

I have two querysets of the same model, as follows:

qs1 = ErrorLog.objects.get(report=original_report).defer('report')  # 272 rows returned
qs2 = ErrorLog.objects.get(report=new_report).defer('report')  # 266 rows returned

I want to compare the first one to the second one and find the 6 rows of qs1 that don't match anything in qs2.

I tried difference() and intersection(), but I keep ending up with the same 272 rows, or with 0 rows. I have a feeling that it sees pk as a unique value, so it never finds matching rows. I tried the following:

# Get the 4 fields I want to compare and exclude
field_1 = [error.field_1 for error in qs2]
field_2 = [error.field_2 for error in qs2]
field_3 = [error.field_3 for error in qs2]
field_4 = [error.field_4 for error in qs2]

# Assuming this would work
qs3 = qs1.exclude(field_1__in=field_1, field_2__in=field_2, field_3__in=field_3, field_4__in=field_4)

But I ended up with 10 rows in qs3, since it doesn't match the fields row by row; it just excludes a row whenever each field value is found anywhere, which isn't ideal since some rows might be duplicated in qs1.

I then thought maybe union() would combine the two and exclude any duplicates between them; then I could just use exclude(pk__in=qs3_union). But I realized that's not how union() works.
qs1 = ErrorLog.objects.filter(report=original_report)  # 272 rows
qs2 = ErrorLog.objects.filter(report=new_report)  # 266 rows

diff_qs = qs1.difference(qs2)  # 6 rows
how to read csv rows and compare it with my list

Suppose we have a list listdata = [23, 511, 62], and we want to check whether this list exists in a csv file and find out the name of the person who matches it.

For example, the csv file:

name,age,height,weight
bob,24,6,82
ash,23,511,62
mary,22,62,55

How can we do so by reading it into memory using csv.DictReader and, if the information matches, printing out the name?

I don't know how to compare the whole listdata variable with the values in the dictionary (csv.DictReader), as the only way I know how to use dictionaries is by accessing them with a key, and a key here is pretty limited: it can't take the whole list and compare it with the other values in the same line/row.
import csv

listdata = [23, 511, 62]

with open('file.csv', newline='') as csvfile:
    reader = list(csv.reader(csvfile, delimiter=',', quotechar='|'))
    # we skip the first row because it contains headers
    for row in reader[1:]:
        # csv values are read as strings, so compare string to string
        if [str(x) for x in listdata] == row[1:]:
            print(row[0])
            break
In python using iloc how would you retrieve the last 12 values of a specific column in a data frame?

The problem I have is that I want to access the data in a dataframe, but only the last twelve numbers in every column. So I have a dataframe:

index   A   B   C
20      1   2   3
21      2   5   6
22      7   8   9
23     10   1   2
24      3   1   2
25      4   9   0
26     10  11  12
27      1   2   3
28      2   1   5
29      6   7   8
30      8   4   5
31      1   3   4
32      1   2   3
33      5   6   7
34      1   3   4

The values inside A, B, C are not important; they are just an example.

Currently I am using df1 = df2.iloc[23:35], but perhaps there is an easier way, because I have to do this for around 20 different dataframes of different sizes. I know that if I use df1 = df2.iloc[-1] it will return the last number, but I don't know how to adapt it for the last twelve numbers. Any help would be appreciated.
You can get the last n rows of a DataFrame by:

df.tail(n)

or

df.iloc[-n:]
Python 3.6 SSL - Uses TLSv1.0 instead of TLSv1.2 cipher - (2 way auth and self-signed cert)

I'm using the ssl library with Python 3.6 and a self-signed ECDSA certificate that I generated with openssl. Server/client code:

# Create a context in TLSv1.2, requiring a certificate (2-way auth)
context = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2)
context.options |= ssl.OP_NO_TLSv1
context.options |= ssl.OP_NO_TLSv1_1
context.verify_mode = ssl.CERT_REQUIRED
context.check_hostname = True  # This line omitted in server code

# Set the list of allowed ciphers to those with key length of at least 128
# TODO Figure out why this isn't working
context.set_ciphers('TLSv1.2+HIGH+SHA256+ECDSA')

# Print some info about the connection
for cipher in context.get_ciphers():
    print(cipher)

Output:

{'id': 50380835, 'name': 'ECDHE-ECDSA-AES128-SHA256', 'protocol': 'TLSv1/SSLv3', 'description': 'ECDHE-ECDSA-AES128-SHA256 TLSv1.2 Kx=ECDH Au=ECDSA Enc=AES(128) Mac=SHA256', 'strength_bits': 128, 'alg_bits': 128}

The current cipher, from connection.cipher():

('ECDHE-ECDSA-AES128-SHA256', 'TLSv1/SSLv3', 128)

My question: why is the selected cipher not TLSv1.2?

Edit: (requested screenshots omitted here). Based on another thread, I tried changing my code to the following, without any success:

# Create a context in TLSv1.2, requiring a certificate (2-way auth)
self.context = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2)
self.context.options |= ssl.OP_NO_SSLv2
self.context.options |= ssl.OP_NO_SSLv3
self.context.options |= ssl.OP_NO_TLSv1
self.context.options |= ssl.OP_NO_TLSv1_1
self.context.verify_mode = ssl.CERT_REQUIRED
# self.context.check_hostname = True

# Set the list of allowed ciphers to those with high key length
# I went with SHA384 because it seemed to have more security
self.context.set_ciphers('TLSv1.2+ECDSA+HIGH')
This cipher is compatible with TLS 1.2; it's an ordinary cipher defined in RFC 5289.

I think we need to interpret Python's documentation somewhat to know what get_ciphers() is returning exactly, as it's not explained. But the documentation for cipher() maybe gives us the answer:

SSLSocket.cipher(): Returns a three-value tuple containing the name of the cipher being used, the version of the SSL protocol that defines its use, and the number of secret bits being used. If no connection has been established, returns None.

In other words, the 'TLSv1/SSLv3' in the output is the protocol family that defines the cipher's use, not the protocol version actually negotiated. A network capture would confirm the TLS protocol version.
Conversion between Cartesian vs. Polar Coordinates. Hoping the result is positive

I have several points that I need to convert from Cartesian to polar coordinates, but for some points the results are negative angles.

For example, the origin (center) of the system is (50, 50), and the point I want to convert is (10, 43). The angle I get from my code is -170.07375449, but I wish the angle were 189.92624551. (I want all of the angles after conversion to be between 0 and 360 degrees.) How do I fix this? Thanks!

import numpy as np

points = np.array([(10, 43), (10, 44), (10, 45), (10, 46), (10, 47)])

# Set the center (origin) at (50, 50), not (0, 0)
def cart_to_pol(coords, center=[50, 50], deg=True):
    complex_format = np.array(coords, dtype=float).view(dtype=np.complex) - \
                     np.array(center, dtype=float).view(dtype=np.complex)
    # return np.abs(complex_format).squeeze(), np.angle(complex_format, deg=deg).squeeze()
    return np.angle(complex_format, deg=deg).squeeze()

print(cart_to_pol(points))
If you need to map an angle from [-180, 180] to [0, 360) you can use this code:

def convert_angle(angle):
    return (angle + 360) % 360
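For arrays of points like the one in the question, the same conversion works vectorized with numpy, so you can apply it to the whole result of cart_to_pol at once:

import numpy as np

angles = np.array([-170.07375449, 45.0, -90.0])
converted = np.mod(angles, 360)   # array([189.92624551,  45., 270.])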
Generate all permutations of n entries in a w x h matrix I'd like to generate all the placements of n entries in a w x h matrix.

Example with a 2x2 matrix and n = 1:

| 1 0 |
| 0 0 |

| 0 1 |
| 0 0 |

| 0 0 |
| 1 0 |

| 0 0 |
| 0 1 |

Example with a 3x3 matrix and n = 2 (partial):

| 0 0 1 |
| 0 0 1 |
| 0 0 0 |

| 1 0 0 |
| 0 0 1 |
| 0 0 0 |

...

I would like to avoid the usage of numpy, so I think itertools is the way to go. I have looked at one-dimensional solutions, but all I got is something slightly different, like itertools.product, which iterates with a fixed number of values, e.g.

itertools.product([0,'n'],repeat=6)
[(0, 0, 0, 0, 0, 0),
....
('n', 'n', 'n', 'n', 'n', 'n')]

Any hint would be gladly appreciated.
There are w * h available positions in which you want to place n 1's and fill the rest with 0's. You can create all possible combinations of positions for the n 1's by using itertools.combinations:

>>> w = 2
>>> h = 2
>>> n = 2
>>> list(itertools.combinations(range(w * h), n))
[(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]

To create the actual matrix (as a list of 1's and 0's) from one of the positions tuples you can for example use a list comprehension:

>>> positions = (1, 3)
>>> [1 if i in positions else 0 for i in range(w * h)]
[0, 1, 0, 1]

For very large n the lookup i in positions becomes inefficient and it would be better to change this to a function like:

def create_matrix(positions):
    matrix = [0] * w * h
    for i in positions:
        matrix[i] = 1
    return matrix

Now you can put everything together (note combinations, not permutations — with permutations each matrix would appear n! times, since the order in which the positions are chosen doesn't matter):

>>> [[1 if i in p else 0 for i in range(w * h)]
...  for p in itertools.combinations(range(w * h), n)]
[[1, 1, 0, 0], [1, 0, 1, 0], [1, 0, 0, 1], [0, 1, 1, 0], [0, 1, 0, 1], [0, 0, 1, 1]]

Or, if you use the create_matrix function:

>>> [create_matrix(p) for p in itertools.combinations(range(w * h), n)]
Make a 2D histogram with HEALPix pixellization using healpy The data are coordinates of objects in the sky, for example as follows:

import pylab as plt
import numpy as np

l = np.random.uniform(-180, 180, 2000)
b = np.random.uniform(-90, 90, 2000)

I want to do a 2D histogram in order to plot a map of the density of some points with (l, b) coordinates in the sky, using HEALPix pixellization on a Mollweide projection. How can I do this using healpy? The tutorial (http://healpy.readthedocs.io/en/v1.9.0/tutorial.html) says how to plot a 1D array, or a fits file, but I don't find how to do a 2D histogram using this pixellization. I also found this function, but it is not working, so I am stuck:

hp.projaxes.MollweideAxes.hist2d(l, b, bins=10)

I can plot these points in a Mollweide projection this way:

l_axis_name ='Latitude l (deg)'
b_axis_name = 'Longitude b (deg)'

fig = plt.figure(figsize=(12,9))
ax = fig.add_subplot(111, projection="mollweide")
ax.grid(True)
ax.scatter(np.array(l)*np.pi/180., np.array(b)*np.pi/180.)
plt.show()

Thank you very much in advance for your help.
Great question! I've written a short function to convert a catalogue into a HEALPix map of number counts:

from astropy.coordinates import SkyCoord
import healpy as hp
import numpy as np

def cat2hpx(lon, lat, nside, radec=True):
    """
    Convert a catalogue to a HEALPix map of number counts per resolution element.

    Parameters
    ----------
    lon, lat : (ndarray, ndarray)
        Coordinates of the sources in degree. If radec=True, assume input is in the icrs
        coordinate system. Otherwise assume input is glon, glat

    nside : int
        HEALPix nside of the target map

    radec : bool
        Switch between R.A./Dec and glon/glat as input coordinate system.

    Return
    ------
    hpx_map : ndarray
        HEALPix map of the catalogue number counts in Galactic coordinates
    """
    npix = hp.nside2npix(nside)

    if radec:
        eq = SkyCoord(lon, lat, frame='icrs', unit='deg')
        l, b = eq.galactic.l.value, eq.galactic.b.value
    else:
        l, b = lon, lat

    # convert to theta, phi
    theta = np.radians(90. - b)
    phi = np.radians(l)

    # convert to HEALPix indices
    indices = hp.ang2pix(nside, theta, phi)

    idx, counts = np.unique(indices, return_counts=True)

    # fill the fullsky map
    hpx_map = np.zeros(npix, dtype=int)
    hpx_map[idx] = counts

    return hpx_map

You can then use that to populate the HEALPix map:

l = np.random.uniform(-180, 180, 20000)
b = np.random.uniform(-90, 90, 20000)

hpx_map = cat2hpx(l, b, nside=32, radec=False)

Here, the nside determines how fine or coarse your pixel grid is.

hp.mollview(np.log10(hpx_map+1))

Also note that by sampling uniformly in Galactic latitude, you'll prefer data points at the Galactic poles. If you want to avoid that, you can scale that down with a cosine.

hp.orthview(np.log10(hpx_map+1), rot=[0, 90])
hp.graticule(color='white')
Pipe unbuffered stdout from subprocess to websocket How would you pipe the stdout from subprocess to the websocket without needing to wait for a newline character? Currently, the code below only sends the stdout on a newline. Code attached for the script being run by the subprocess. Is the output not being flushed properly from there?

send_data.py:

import asyncio
import websockets
import subprocess
import sys
import os

async def foo(websocket, path):
    print ("socket open")
    await websocket.send("successfully connected")
    with subprocess.Popen(['sudo','python3', '-u','inline_print.py'],stdout=subprocess.PIPE, stderr=subprocess.PIPE, bufsize=0, universal_newlines=True) as p:
        for line in p.stdout:
            line = str(line.rstrip())
            await websocket.send(line)
            p.stdout.flush()
        for line in p.stderr:
            line = str(line.rstrip())
            await websocket.send(line)
            p.stdout.flush()

start_server = websockets.serve(foo, "localhost", 8765)
asyncio.get_event_loop().run_until_complete(start_server)
asyncio.get_event_loop().run_forever()

inline_print.py:

from time import sleep
import sys

loading = 'LOADING...LOADING...LOADING...LOADING...LOADING...'
for i in range(50):
    print(loading[i], sep='', end=' ', flush=True)
    sleep(0.1)

if end=' ' is changed to end='\n' then the stdout from send_data.py occurs in realtime.

js client:

var ws = new WebSocket('ws://localhost:8765/');
ws.onmessage = function(event) {
    console.log(event.data);
};

I acknowledge this question is similar to these:

catching-stdout-in-realtime-from-subprocess
how-do-i-get-real-time-information-back-from-a-subprocess-popen-in-python-2-5
intercepting-stdout-of-a-subprocess-while-it-is-running

yet none of the solutions work without a newline character from the subprocess.
If you write

for line in p.stdout:

then you (kind of) implicitly say that you want to wait for a complete line. You have to use read(num_bytes) and not readline(). Below is an example to illustrate:

sub.py: (example subprocess)

import sys, time

for v in range(20):
    print(".", end="")
    sys.stdout.flush()
    if v % 4 == 0:
        print()
    if v % 3 != 0:
        time.sleep(0.5)

rdunbuf.py: (example reading stdout unbuffered)

import contextlib, time, subprocess

def unbuffered(proc, stream='stdout'):
    stream = getattr(proc, stream)
    with contextlib.closing(stream):
        while True:
            last = stream.read(80)  # read up to 80 chars
            # stop when end of stream reached
            if not last:
                if proc.poll() is not None:
                    break
            else:
                yield last

# open subprocess without buffering and without universal_newlines=True
proc = subprocess.Popen(["./sub.py"], stdout=subprocess.PIPE, bufsize=0)

for l in unbuffered(proc):
    print(l)

print("end")

Please note as well that your code might block if the subprocess produces a lot of error messages before producing normal output, because you try first to read all normal output and only then data from stderr. You should read whatever data your subprocess produces, whether on stdout or stderr, before any pipe buffer fills up and blocks. You can use select.select() (https://docs.python.org/3.8/library/select.html#select.select) in order to decide whether to read from stdout or stderr.
Passing a list to a method inside a class from another class in order to modify said list and pass back to the original class in Python I am writing a novel Blackjack program for my online portfolio that creates cards at random. In order not to create duplicate cards in one round, I have created a list that stores the cards that have already been created. The new random card is then checked against the cards contained inside the dealed_cards list; if it is a duplicate, the method is called again and a new card assigned. My dealed_cards list is initiated inside a class that creates the round and is then passed from class to class as a list that can be re-initialized at the beginning of a new round of game play. However, the list is not passing correctly into the method within the class that assigns new card values. Some ways that I have tried to pass the list in are:

(self, dealed_cards) — with this I get: TypeError: deal_card_out() missing 1 required positional argument: 'dealed_cards'

(self, dealed_cards = [], *args) — this at least works but doesn't necessarily pass the list correctly; when I try to print the dealed_cards list from within the method before modifying it, I get an empty list.

(self, *dealed_cards) — this returns the list as a tuple and doesn't pass it correctly.

And finally, (self, dealed_cards = []) — result: still not passing in the list dealed_cards from inside the function.

Here is a test block of code I broke off from the main program in order to test this method:

class deal_card(object):
    def __init__(self):
        pass

    def deal_card_out(self, dealed_cards = []):
        print("This is a test print statement at the beginning of this method to test that dealed_cards was passed in correctly.")
        print(dealed_cards)
        card_one_face_value = 'Seven'
        card_one_suit_value = 'Clubs'
        for _ in dealed_cards:
            if card_one_face_value == [_[0]]:
                print(f"This is a test print statement inside the for loop within deal_card out, it willl print out [_[0]] inside this for loop: {[_[0]]}")
                if card_one_suit_value == [_[1]]:
                    print("test loop successful")
                else:
                    print(f"This is a test print statement inside the for loop within deal_card out, it willl print out [_[0]] inside this for loop: {[_[0]]}")
                    pass
            else:
                print(f"this is a test print statement inside the for loop within deal_card out it will print out dealed_cards[_[1]] to show what is happening inside this loop: {[_[1]]}")
                pass
        dealed_cards.append([card_one_face_value,card_one_suit_value])
        print("This is a test print inside of deal_card_out, it prints list dealed_cards after method modifies the list")
        print(dealed_cards)
        return [dealed_cards,card_one_face_value,card_one_suit_value]

dealed_cards = [['Place','Holder'],['Seven','Clubs']]
print("this is a test print statement outside of the method to test that dealed_cards is being passed in correctly")
print(dealed_cards)
test_run = deal_card.deal_card_out(dealed_cards)
Figured out what was wrong with the method. "self" does not need to be placed in the method definition here — for some reason, placing self in the method definition meant the list dealed_cards didn't get passed correctly when the method was called. Also, dealed_cards can just be passed as dealed_cards, not dealed_cards = []. So the new, correct method definition is def deal_card_out(dealed_cards):

The for loop was also misbehaving: the test if card_one_face_value == [_[0]]: needed to be changed to if card_one_face_value == _[0]:, otherwise you are comparing against a one-element list wrapping the string 'Seven' rather than the string itself.
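For context, a minimal sketch (with a hypothetical DealCard class) of why the original (self, dealed_cards) signature raised the TypeError: calling the method on the class itself, rather than on an instance, makes the list bind to the self parameter:

class DealCard:
    def deal_card_out(self, dealed_cards):
        print(dealed_cards)

cards = [['Place', 'Holder'], ['Seven', 'Clubs']]
# DealCard.deal_card_out(cards)   # TypeError: cards binds to self
DealCard().deal_card_out(cards)   # works: called on an instance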
Print unknown number of lists as columns I am using Python 3.5.2, and I want to create a user-friendly program that outputs a range of numbers in some columns.

#User input
start = 0
until = 50
number_of_columns = 4

#Programmer
#create list of numbers
list_of_stuff = [str(x) for x in range(start,until)]
print("-Created "+str(len(list_of_stuff))+" numbers.")

#calculate the number of numbers per column
stuff_per_column = int(len(list_of_stuff) / number_of_columns)
print("-I must add "+str(stuff_per_column)+" numbers on each column.")

#generate different lists with their numbers
generated_lists = list(zip(*[iter(list_of_stuff)]*stuff_per_column))
print("-Columns are now filled with their numbers.")

Until that, everything is fine, but here I'm stuck:

#print lists together as columns
for x,y,z in zip(generated_lists[0],generated_lists[1],generated_lists[2]):
    print(x,y,z)
print("-Done!")

I tried to use that code and it does what I want, except that it involves hardcoding the number of columns. x,y,z for example would be for 3 columns, but I want to set the number of columns at the user input and remove the need to hardcode it every time. What am I missing? How can I make the print understand how many lists I have?

Desired output: if the user sets the number of columns to 4, for example, the output would be:

1 6  11 16
2 7  12 17
3 8  13 18
4 9  14 19
5 10 15 20

Etc...
Use:

for t in zip(generated_lists[0],generated_lists[1],generated_lists[2]):
    print(' '.join(str(x) for x in t))

or more succinctly:

for t in zip(*generated_lists[:3]):
    print(' '.join(map(str, t)))

So what you need to change is the 3 to however many columns you want.
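Since the number of columns here comes from the user input, you can also drop the slice entirely and unpack every generated list, which removes the hardcoding altogether:

for t in zip(*generated_lists):
    print(' '.join(map(str, t)))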
Lookup values in cells based on values in another column I have a pandas dataframe that looks like:

  Best_val    A     B     C   Value(1 - Best_Val)
      A      0.1   0.29  0.3         0.9
      B      0.33  0.21  0.45        0.79
      A      0.16  0.71  0.56        0.84
      C      0.51  0.26  0.85        0.15

I want to fetch the column value from Best_val for that row, use it as a column name, and subtract the looked-up value from 1, storing the result in Value.
Use DataFrame.lookup for performance.

df['Value'] = 1 - df.lookup(df.index, df.BestVal)
df

  BestVal     A     B     C  Value
0       A  0.10  0.29  0.30   0.90
1       B  0.33  0.21  0.45   0.79
2       A  0.16  0.71  0.56   0.84
3       C  0.51  0.26  0.85   0.15
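Note that DataFrame.lookup was deprecated in pandas 1.2 and removed in 2.0. On newer pandas, plain numpy indexing gives the same result — a sketch on the frame above:

import numpy as np

vals = df[['A', 'B', 'C']]
picked = vals.to_numpy()[np.arange(len(df)), vals.columns.get_indexer(df.BestVal)]
df['Value'] = 1 - picked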
Filter points between polygons I have polygons like this:

MULTIPOLYGON(((3.6531688909 22.2345676543....)))
MULTIPOLYGON(((3.7531688909 22.6543234523....)))
…

And I have data like this (small part):

df =

id_easy   latitude  longitude
e705ac2   22.0171   3.6687
e705ac2   22.0238   3.6709
e705ac2   22.0299   3.6733
e705ac2   22.0319   3.6725
7eb84c8   22.0567   3.6821
3264cc7   22.0754   3.7277
3264cc7   22.0766   3.7208
3264cc7   22.0754   3.7163
3264cc7   22.0753   3.7102

Is it possible to check whether the points start in one blue zone and end in the other blue zone? For example, I need to check if the locations for value e705ac2 start in the left zone and end in the right zone.
What does your polygon data look like? Do you have geometry fields? If so, you could use geopandas contains to check if your blue polygons contain your points.
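If the polygons are available, a sketch of that check with geopandas (assuming a hypothetical zones GeoDataFrame holding the blue polygons; on geopandas older than 0.10 use op= instead of predicate=):

import geopandas as gpd

points = gpd.GeoDataFrame(
    df, geometry=gpd.points_from_xy(df.longitude, df.latitude), crs='EPSG:4326')
# tag each point with the index of the polygon that contains it
tagged = gpd.sjoin(points, zones, how='left', predicate='within')
ends = tagged.groupby('id_easy')['index_right'].agg(['first', 'last'])
crossed = ends[ends['first'] != ends['last']]   # ids starting and ending in different zones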
decode TFRecord fail. Expected image (JPEG, PNG, or GIF), got unknown format starting with '\257\ I encoded some images to TFRecords as an example and then tried to decode them. However, there is a bug during the decode process and I really cannot fix it.

InvalidArgumentError: Expected image (JPEG, PNG, or GIF), got unknown format starting with '\257\222\244\257\222\244\260\223\245\260\223\245\262\225\247\263' [[{{node DecodeJpeg}}]] [Op:IteratorGetNextSync]

encode:

def _bytes_feature(value):
    """Returns a bytes_list from a string / byte."""
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

def _float_feature(value):
    """Returns a float_list from a float / double."""
    return tf.train.Feature(float_list=tf.train.FloatList(value=[value]))

def _int64_feature(value):
    """Returns an int64_list from a bool / enum / int / uint."""
    return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))

src_path = r"E:\data\example"
record_path = r"E:\data\data"
sum_per_file = 4
num = 0
key = 3
for img_name in os.listdir(src_path):
    recordFileName = "trainPrecipitate.tfrecords"
    writer = tf.io.TFRecordWriter(record_path + recordFileName)
    img_path = os.path.join(src_path, img_name)
    img = Image.open(img_path, "r")
    height = np.array(img).shape[0]
    width = np.array(img).shape[1]
    img_raw = img.tobytes()
    example = tf.train.Example(features = tf.train.Features(feature={
        'image/encoded': _bytes_feature(img_raw),
        'image/class/label': _int64_feature(key),
        'image/height': _int64_feature(height),
        'image/width': _int64_feature(width)
    }))
    writer.write(example.SerializeToString())
writer.close()

decode:

import IPython.display as display

train_files = tf.data.Dataset.list_files(r"E:\data\datatrainPrecipitate.tfrecords")
train_files = train_files.interleave(tf.data.TFRecordDataset)

def decode_example(example_proto):
    image_feature_description = {
        'image/height': tf.io.FixedLenFeature([], tf.int64),
        'image/width': tf.io.FixedLenFeature([], tf.int64),
        'image/class/label': tf.io.FixedLenFeature([], tf.int64, default_value=3),
        'image/encoded': tf.io.FixedLenFeature([], tf.string)}
    parsed_features = tf.io.parse_single_example(example_proto, image_feature_description)
    height = tf.cast(parsed_features['image/height'], tf.int32)
    width = tf.cast(parsed_features['image/width'], tf.int32)
    label = tf.cast(parsed_features['image/class/label'], tf.int32)
    image_buffer = parsed_features['image/encoded']
    image = tf.io.decode_jpeg(image_buffer, channels=3)
    image = tf.cast(image, tf.float32)
    return image, label

def processed_dataset(dataset):
    dataset = dataset.repeat()
    dataset = dataset.batch(1)
    dataset = dataset.prefetch(buffer_size=tf.data.experimental.AUTOTUNE)
#    print(dataset)
    return dataset

train_dataset = train_files.map(decode_example)
# train_dataset = processed_dataset(train_dataset)
print(train_dataset)
for (image, label) in train_dataset:
    print(repr(image))

InvalidArgumentError: Expected image (JPEG, PNG, or GIF), got unknown format starting with '\257\222\244\257\222\244\260\223\245\260\223\245\262\225\247\263' [[{{node DecodeJpeg}}]] [Op:IteratorGetNextSync]
I can use tf.io.decode_raw() to decode the TFRecords and then tf.reshape() to get the original image back, though I still don't know when to use tf.io.decode_raw() and when to use tf.io.decode_jpeg().
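The underlying cause is that the writer stored img.tobytes(), which is raw pixel data rather than a JPEG byte stream, so tf.io.decode_jpeg has nothing to decode (it is only for files saved in JPEG format). Inside decode_example, the raw route looks like this — a sketch assuming 3-channel uint8 images:

image = tf.io.decode_raw(image_buffer, tf.uint8)   # raw pixel bytes, not JPEG
image = tf.reshape(image, [height, width, 3])
image = tf.cast(image, tf.float32)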
scrollToTop not working correctly in ScrollPanel with RadioBox I'm having a problem with a wxPython scrolled panel which contains a radiobox. The scroll bar jumps to the top when trying to select an item from the radiobox when changing focus from another panel. You then need to scroll and click again. A minimal example which reproduces the problem:

#!/bin/env python
import wx
import wx.lib.scrolledpanel as SP

class MyFrame(wx.Frame):
    def __init__(self):
        wx.Frame.__init__(self, None, - 1, "Frame", size=(300, 300))
        self.scrolledPanel = ScrollPanel(self, size=(-1, 200))
        self.panel = PlotTypePanel(self)
        hbox = wx.BoxSizer(wx.VERTICAL)
        hbox.Add(self.scrolledPanel, 0, wx.EXPAND | wx.ALL, 0)
        hbox.Add(self.panel, 1, wx.EXPAND | wx.ALL, 0)
        self.SetSizer(hbox)

class PlotTypePanel(wx.Panel):
    def __init__(self, parent, **kwargs):
        wx.Panel.__init__(self, parent,**kwargs)
        self.anotherradiobox = wx.RadioBox(self,label='other',
                                           style=wx.RA_SPECIFY_COLS,
                                           choices=["some", "other", "box"])

class ScrollPanel(SP.ScrolledPanel):
    def __init__(self, parent, **kwargs):
        SP.ScrolledPanel.__init__(self, parent, -1, **kwargs)
        self.parent = parent
        self.SetupScrolling(scroll_x=False, scroll_y=True, scrollToTop=False)
        choices = [l for l in "abcdefghijklmnopqrstuv"]
        self.fieldradiobox = wx.RadioBox(self,label='letters',
                                         style=wx.RA_SPECIFY_ROWS,
                                         choices=choices)
        vbox = wx.BoxSizer(wx.VERTICAL)
        vbox.Add(self.fieldradiobox, 0, wx.EXPAND|wx.ALL, 10)
        self.SetSizer(vbox)
        self.SetupScrolling(scroll_x=False, scrollToTop=False)

if __name__ == '__main__':
    app = wx.App()
    frame = MyFrame()
    frame.Show(True)
    app.MainLoop()

When I click on the other radio panel and then back on the scrolled panel, it jumps to the top and doesn't select the radio button. I've checked, and it seems the EVT_COMBOBOX is not triggered by this first click. I've also tried adding scrollToTop=False, which didn't help. I'm using Python 2.7.3 with wxPython version 3.0.2.0.
OnChildFocus(self, evt)
If the child window that gets the focus is not fully visible, this handler will try to scroll enough to see it.
Parameters: evt – a ChildFocusEvent event to be processed.

Overriding this handler with a no-op apparently works in this case, at least on Linux:

#!/bin/env python
import wx
import wx.lib.scrolledpanel as SP

class MyFrame(wx.Frame):
    def __init__(self):
        wx.Frame.__init__(self, None, - 1, "Frame", size=(300, 300))
        self.scrolledPanel = ScrollPanel(self, size=(-1, 200))
        self.panel = PlotTypePanel(self)
        hbox = wx.BoxSizer(wx.VERTICAL)
        hbox.Add(self.scrolledPanel, 0, wx.EXPAND | wx.ALL, 0)
        hbox.Add(self.panel, 1, wx.EXPAND | wx.ALL, 0)
        self.SetSizer(hbox)

class PlotTypePanel(wx.Panel):
    def __init__(self, parent, **kwargs):
        wx.Panel.__init__(self, parent,**kwargs)
        self.anotherradiobox = wx.RadioBox(self,label='other',
                                           style=wx.RA_SPECIFY_COLS,
                                           choices=["some", "other", "box"])

class ScrollPanel(SP.ScrolledPanel):
    def __init__(self, parent, **kwargs):
        SP.ScrolledPanel.__init__(self, parent, -1, **kwargs)
        self.parent = parent
        self.SetupScrolling(scroll_x=False, scroll_y=True, scrollToTop=False)
        choices = [l for l in "abcdefghijklmnopqrstuv"]
        self.fieldradiobox = wx.RadioBox(self,label='letters',
                                         style=wx.RA_SPECIFY_ROWS,
                                         choices=choices)
        vbox = wx.BoxSizer(wx.VERTICAL)
        vbox.Add(self.fieldradiobox, 0, wx.EXPAND|wx.ALL, 10)
        self.SetSizer(vbox)
        self.Bind(wx.EVT_CHILD_FOCUS, self.on_focus)
        self.SetupScrolling(scroll_x=False, scrollToTop=False)

    def on_focus(self,event):
        pass

if __name__ == '__main__':
    app = wx.App()
    frame = MyFrame()
    frame.Show(True)
    app.MainLoop()

Note: It's not an issue, but you have self.SetupScrolling called twice.
Sending data through broken pipe When I connect a socket to a server socket, and the server socket at a given time shuts down, I get a BrokenPipeError on the client side — but not the next time I try to send something, only the time after that. Here is an SSCCE:

Server:

#! /usr/bin/python3

import socket

s = socket.socket (socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt (socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind ( ('', 10100) )
s.listen (1)
print ('Waiting on client')
client, _ = s.accept ()
print ('Accepted')
data = b''
done = False
while not done:
    data += client.recv (4096)
    msgs = data.split (b'\r')
    for msg in msgs [:-1]:
        print ('received {}'.format (msg) )
        done = msg == b'exit'
    data = msgs [-1]
s.close ()
print ('Server down')

Client:

#! /usr/bin/python3

import socket

s = socket.socket (socket.AF_INET, socket.SOCK_STREAM)
print ('Connecting')
s.connect ( ('localhost', 10100) )
print ('Connected')
for msg in [b'ping', b'pang', b'exit', b'ping', b'pang']:
    print ('Sending {}'.format (msg) )
    sent = s.send (msg + b'\r')
    print ('Sent {}. {} bytes transmitted'.format (msg, sent) )
    input ('>> ')

I start up the server, then the client, and hit enter to step through the messages. The server output is:

Waiting on client
Accepted
received b'ping'
received b'pang'
received b'exit'
Server down

The client output is:

Connecting
Connected
Sending b'ping'
Sent b'ping'. 5 bytes transmitted
>> Sending b'pang'
Sent b'pang'. 5 bytes transmitted
>> Sending b'exit'
Sent b'exit'. 5 bytes transmitted
>> Sending b'ping'
Sent b'ping'. 5 bytes transmitted
>> Sending b'pang'
Traceback (most recent call last):
  File "./client.py", line 10, in <module>
    sent = s.send (msg + b'\r')
BrokenPipeError: [Errno 32] Broken pipe

Why do I get the BrokenPipeError after the last pang and not after the ping? Why does send return 5 when sending the ping after the exit? Why is the pipe not broken immediately after the server is down?

EDIT: After having sent exit, I don't hit enter on the client console unless the server console has already printed Server down.
The send function only ensures that the data has been transferred to the socket buffer. When the server closes, it sends a FIN,ACK packet, to which the client replies only ACK. The socket on the client side will not be closed until the client calls the close method itself too; the connection is then "half-open". When the client then sends data to the closed server socket, the server replies with RST, upon which the client is expected to abort the connection. See http://tools.ietf.org/search/rfc793#page-33 on Half-Open Connections and Other Anomalies. However, that RST is only processed after the send method has returned, so the socket gets closed on the client side afterwards. That's why only the next send crashes with BrokenPipe — by then the connection is closed from the client side too.
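If the client needs to notice the closed connection before wasting a send, one option is to poll the socket for the peer's FIN — a sketch assuming the server never sends application data of its own (an empty recv() result means the peer has closed):

s.setblocking(False)
try:
    if s.recv(1) == b'':
        print('server closed the connection')
except BlockingIOError:
    pass   # no data and no FIN yet
finally:
    s.setblocking(True)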
How to extract time from datetime module and increment it I am trying to increment a time. For that I stripped the time from a datetime and tried to add to it, but it throws an exception. What is wrong here?

st_time = datetime.datetime.strptime(st_time, '%H:%M:%S').time()
en_time = datetime.datetime.strptime(en_time, '%H:%M:%S').time()
while st_time < en_time:
    if str(st_time) in line:
        between = True
        break
    st_time = st_time + datetime.timedelta(seconds=1)

Exception:

TypeError: unsupported operand type(s) for +: 'datetime.time' and 'datetime.timedelta'
You need full datetime objects, not just time. This is a design constraint to forbid wrapping around of time, guaranteeing that

b = a + delta
a == b - delta

which would be violated if delta became bigger than 24h.
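Applied to the snippet in the question, one minimal fix is simply to drop the trailing .time() — strptime already returns a full datetime (on the default date 1900-01-01) — or to attach an explicit date with datetime.combine:

import datetime

today = datetime.date.today()
st_time = datetime.datetime.combine(today, datetime.time(9, 0, 0))
en_time = datetime.datetime.combine(today, datetime.time(9, 0, 5))
while st_time < en_time:
    st_time += datetime.timedelta(seconds=1)   # fine on full datetimes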
Sampling points from multiple Gaussians If I have one Gaussian with center=[x, y] and std=z, I can sample one point using:

np.random.normal(loc=[x, y], scale=std)

But if I'm given two Gaussians with centers=[[x1, y1], [x2, y2]] and stds=[z1, z2], how can I sample points from these Gaussians together (or for n Gaussians)?
You could just loop,

import numpy as np

x1 = 0.; y1=0.; z1 = 1.
x2 = 1.; y2=0.; z2 = 1.

centers=[[x1, y1], [x2, y2]]
stds=[z1, z2]

np.random.seed(1)
smpl = []
for c, std in zip(centers, stds):
    smpl.append(np.random.normal(loc=c, scale=std))
print(smpl)

but passing everything in one call also works and would probably be more efficient; the per-Gaussian stds just need an extra axis so they broadcast against the (n, 2) array of centers:

np.random.seed(1)
smpl = np.random.normal(loc=centers, scale=np.reshape(stds, (-1, 1)))
print(smpl)
How to import the numpy module on AWS lambda? I am a beginner with AWS — I just started looking into it this week. I am doing my Python project and want to use an AWS Lambda function to run my serverless Python program. I have all my resources in AWS S3 buckets. I would like to simply take one of my images from an S3 bucket (let's say source-bucket), turn it grey, and save it back to the other S3 bucket (result-bucket). My question is how do I import the numpy and cv2 modules on AWS Lambda? I followed the guide at https://serverless.com/blog/serverless-python-packaging/; however, it returned an error message:

An error occurred: NumpyLambdaFunction - Function not found: arn:aws:lambda:us-east-1:......:function:numpy-test-dev-numpy (Service: AWSLambdaInternal; Status Code: 404; Error code: ResourceNotFoundException; Request ID: ....).

What can I do to fix this error? Or is there a better method for doing so? (P.S. I am using a Windows computer.)

Thank you very much!
Method 1

Run this command in your project root directory:

pip install --target="." package_name

Zip your project folder and upload it on AWS.

Method 2

Check out this readme
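For the concrete case in the question, Method 1 boils down to something like the following (run inside the project root; the zip name is arbitrary). One caveat: packages with compiled code, such as numpy or cv2, must be installed as Linux-compatible wheels, since Lambda runs on Amazon Linux — binaries installed from a Windows machine won't load there:

pip install --target=. numpy
zip -r function.zip .
# then upload function.zip as the Lambda function code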
Cannot get the js file under the static folder in Flask It all works on my local server, but when others try to deploy what I have done to the server, it fails. The file layout on the server is something like:

SERVER_FOLDER
--homepage
----static
----templates
------404.html
----app.py
----config.py

For example, the server is MY_SERVER, and then in my app.py I use

@app.route('/homepage/')
@app.route('/homepage/index')
def index():
    # TODO

to define the homepage, and @app.errorhandler(404) to redirect all not-found pages to 404.html. So I can get access to my homepage with http://MY_SERVER/homepage/, a little different than my local server. That's one thing I am confused about. What I think is that app.py runs under MY_SERVER rather than MY_SERVER/homepage, right? But in this way, when I render a template, and the HTML template uses a js file under the static folder, the response always shows the js file is not found.

When I use <script src="{{ url_for('static', filename='file.js') }}"></script>, it shows not found at MY_SERVER/static and returns 404.

When I try <script src="../homepage/static/file.js"></script>, same result.

How do I handle this?
Build toward your solution:

Get flask serving image files from static

Put an image in the static directory and call it from your browser: http://yoursite/static/some_image_there.jpg
Plug away until that works.

Get flask serving the js file directly to your browser

Now put your js file into static and do as you did for the image. Plug away until you can call it from the browser: http://yoursite/static/yourfile.js

Get your html to call the js file from static

Now you know that there is no problem actually serving the file, and you know the exact url to it. So it's not a big step to getting the HTML to reference it and your browser to load it.
Python - Pivot Table : Count the Occurrence of Value based on the Last Index Could you help me count the occurrence of the last index in a pivot table?

(Raw data shown as an image in the original post.)

Here is my code — but the last column is returning the grand total, based on the 1st index (A):

df.pivot_table(index=['A','B','C','D','E','F','G'], aggfunc={'G': ['count', len]})

This should be the result (last column) once pivoted (shown as an image in the original post).
To get the expected count for column 'G', I included columns 'A'-'D' as indices and the count of 'G', as follows:

pd.pivot_table(df, index=['A','B','C','D'], values='G', aggfunc={'G': ['count']})

The resulting pivot table (shown as an image in the original answer) contains the expected count. If however we include all columns as indices, the count of 'G' stays at 1; creating a similar pivot table with all columns in Excel shows identical behaviour, with only counts of 1.
Converting Values of series with dictionary values to DataFrame. Not the Series itself I have a series which looks like this:

d1 = {'Class': 'A', 'age':35, 'Name': 'Manoj'}
d2 = {'Class': 'B', 'age':15, 'Name': 'Mot'}
d3 = {'Class': 'B', 'age':25, 'Name': 'Vittoo'}
ser = [d1, d2, d3]
dummy = pd.Series(ser)
dummy

0    {'Class': 'A', 'age': 35, 'Name': 'Manoj'}
1    {'Class': 'B', 'age': 15, 'Name': 'Mot'}
2    {'Class': 'B', 'age': 25, 'Name': 'Vittoo'}

When I use the to_frame function, it does this:

dummy.to_frame()

                                             0
0    {'Class': 'A', 'age': 35, 'Name': 'Manoj'}
1    {'Class': 'B', 'age': 15, 'Name': 'Mot'}
2    {'Class': 'B', 'age': 25, 'Name': 'Vittoo'}

But what I intend to get is this:

  Class    Name  age
0     A   Manoj   35
1     B     Mot   15
2     B  Vittoo   25

I have tried this, which works fine:

df = pd.DataFrame(dummy)
df = df[0].apply(pd.Series)
df

But it feels very inefficient, because I need to convert the Series to a dataframe and then apply the Series function to the complete dataframe. As I'm working with millions of rows, I'd like to know if there is a more efficient solution.
Use the DataFrame constructor instead of the Series constructor:

d1 = {'Class': 'A', 'age':35, 'Name': 'Manoj'}
d2 = {'Class': 'B', 'age':15, 'Name': 'Mot'}
d3 = {'Class': 'B', 'age':25, 'Name': 'Vittoo'}
ser = [d1, d2, d3]

df = pd.DataFrame(ser)
print (df)
  Class    Name  age
0     A   Manoj   35
1     B     Mot   15
2     B  Vittoo   25

If the input data is a Series filled with dictionaries, convert it to a list before the DataFrame constructor; to_frame is not necessary:

dummy = pd.Series(ser)
df = pd.DataFrame(dummy.values.tolist())
print (df)
  Class    Name  age
0     A   Manoj   35
1     B     Mot   15
2     B  Vittoo   25
Add matrices with different labels and different dimensions I have two large square matrices (in two CSV files). The two matrices may have a few different labels and different dimensions. I want to add these two matrices and retain all labels. How do I do this in python?

Example: {a, b, c ... e} are labels.

       a    b    c    d             a    e
  a   1.2  1.3  1.4  1.5       a   9.1  9.2
X=b   2.1  2.2  2.3  2.4    Y= e   8.1  8.2
  c   3.3  3.4  3.5  3.6
  d   4.2  4.3  4.4  4.5

         a        b    c    d    e
    a   1.2+9.1  1.3  1.4  1.5  9.2
X+Y=b   2.1      2.2  2.3  2.4  0
    c   3.3      3.4  3.5  3.6  0
    d   4.2      4.3  4.4  4.5  0
    e   8.1      0    0    0    8.2

If someone wants to see the files (matrices), they are here.

Trying the method suggested by @piRSquared:

import pandas as pd
X= pd.read_csv('30203_Transpose.csv')
Y= pd.read_csv('62599_1999psCSV.csv')
Z= X.add(Y, fill_value=0).fillna(0)
print Z

Z -> 467 rows x 661 columns

The resulting matrix should be square too. This approach also causes the row headers to be lost (they now become 1, 2, 3, ...; they should be 10010, 10071, 10107, 1013, ...):

   10010  10071  10107  1013 ....
0  0      0      0      0.01705 0.0439666659
1  0      0      0      0
2  0      0      0      0.0382000022
3  0.0663666651 0      0      0.0491333343
4  0      0      0      0
5  0.0208000001 0      0      0.1275333315
..

What should I be doing?
Use the add method with the parameter fill_value=0:

X.add(Y, fill_value=0).fillna(0)
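The 467 rows x 661 columns result and the lost row labels in the follow-up suggest the CSVs were read without an index, so the labels became an ordinary data column and add aligned on a plain RangeIndex. Reading the label column as the index makes add align on the labels and return a square frame over the union of all labels — a sketch assuming the labels sit in the first CSV column:

import pandas as pd

X = pd.read_csv('30203_Transpose.csv', index_col=0)
Y = pd.read_csv('62599_1999psCSV.csv', index_col=0)
Z = X.add(Y, fill_value=0).fillna(0)   # square: union of labels on both axes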
Iterate over columns of a NumPy array and elements of another one? I am trying to replicate the behaviour of zip(a, b) in order to loop simultaneously along two NumPy arrays. In particular, I have two arrays a and b:

a.shape=(n,m)
b.shape=(m,)

I would like to get, on every iteration, a column of a and an element of b. So far, I have tried the following:

for a_column, b_element in np.nditer([a, b]):
    print(a_column)

However, this prints the element a[0,0] rather than the column a[:,0], which is what I want. How can I solve this?
You can still use zip on numpy arrays, because they are iterables. In your case, you'd need to transpose a first, to make it an array of shape (m,n), i.e. an iterable of length m:

for a_column, b_element in zip(a.T, b):
    ...
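A tiny runnable example of the same idea — each a_column has shape (n,) and pairs with one element of b:

import numpy as np

a = np.arange(12).reshape(3, 4)   # shape (n, m) = (3, 4)
b = np.array([10, 20, 30, 40])    # shape (m,)
for a_column, b_element in zip(a.T, b):
    print(a_column, b_element)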
Index lookup for calculation This is a follow-up of the following question: Pandas DataFrame Window Function

  analysis  first_pass   fruit  order  second_pass  test units  highest \
0     full        12.1   apple      2         20.1     1     g    True
1     full         7.1   apple      1         12.0     2     g   False
2  partial        14.3   apple      3         13.1     1     g   False
3     full        20.1  orange      2         20.1     1     g    True
4     full        17.1  orange      1         18.5     2     g    True
5  partial        23.4  orange      3         22.7     1     g    True
6     full        23.1   grape      3         14.1     1     g   False
7     full        17.2   grape      2         17.1     2     g   False
8  partial        19.1   grape      1         19.4     1     g   False

     highest_fruit
0  [apple, orange]
1         [orange]
2         [orange]
3  [apple, orange]
4         [orange]
5         [orange]
6  [apple, orange]
7         [orange]
8         [orange]

In the original question, I was guided to the above table, in which the highest fruit(s) for a given analysis and test combination was indicated by doing a transformation on the table (e.g. a full analysis on test 1 resulted in apple and orange fruits having the highest second pass numbers). I'm now trying to use this information to calculate those fruits' performance relative to their first pass. For example, now that I know apple and orange are the highest fruits for a full analysis, test 1, I'd like to know if they improved over their first passes (apple improved with a score of 20.1 on the second pass compared to 12.1 on its first pass; likewise orange improved to 20.1 after scoring 19.1 on its first pass). I'd like a table similar to the one below (1 = improved, 0 = no change, -1 = worse):

  analysis  first_pass   fruit  order  second_pass  test units  highest \
0     full        12.1   apple      2         20.1     1     g    True
1     full         7.1   apple      1         12.0     2     g   False
2  partial        14.3   apple      3         13.1     1     g   False
3     full        20.1  orange      2         20.1     1     g    True
4     full        17.1  orange      1         18.5     2     g    True
5  partial        23.4  orange      3         22.7     1     g    True
6     full        23.1   grape      3         14.1     1     g   False
7     full        17.2   grape      2         17.1     2     g   False
8  partial        19.1   grape      1         19.4     1     g   False

     highest_fruit  score_change_between_passes
0  [apple, orange]  {"apple" : 1, "orange" : 0}
1         [orange]  {"orange" : 1}
2         [orange]  {"orange" : -1}
3  [apple, orange]  {"apple" : 1, "orange" : 0}
4         [orange]  {"orange" : 1}
5         [orange]  {"orange" : -1}
6  [apple, orange]  {"apple" : 1, "orange" : 0}
7         [orange]  {"orange" : 1}
8         [orange]  {"orange" : -1}
You could use np.sign():

second_pass = df.groupby(['test', 'analysis']).apply(
    lambda x: {fruit: int(np.sign(x.loc[x.fruit==fruit, 'second_pass'].iloc[0] -
                                  x.loc[x.fruit==fruit, 'first_pass'].iloc[0]))
               for fruit in x.highest_fruit.iloc[0]}).reset_index()

df = df.merge(second_pass, on=['test', 'analysis'], how='left').rename(columns={0: 'second_pass_comp'})

  analysis  first_pass   fruit  order  second_pass  test units  highest \
0     full        12.1   apple      2         20.1     1     g    True
1     full         7.1   apple      1         12.0     2     g   False
2  partial        14.3   apple      3         13.1     1     g   False
3     full        19.1  orange      2         20.1     1     g    True
4     full        17.1  orange      1         18.5     2     g    True
5  partial        23.4  orange      3         22.7     1     g    True
6     full        23.1   grape      3         14.1     1     g   False
7     full        17.2   grape      2         17.1     2     g   False
8  partial        19.1   grape      1         19.4     1     g   False

     highest_fruit         first_pass_highest_fruit           second_pass_comp
0  [apple, orange]  {'orange': 19.1, 'apple': 12.1}  {'orange': 1, 'apple': 1}
1         [orange]                 {'orange': 17.1}              {'orange': 1}
2         [orange]                 {'orange': 23.4}             {'orange': -1}
3  [apple, orange]  {'orange': 19.1, 'apple': 12.1}  {'orange': 1, 'apple': 1}
4         [orange]                 {'orange': 17.1}              {'orange': 1}
5         [orange]                 {'orange': 23.4}             {'orange': -1}
6  [apple, orange]  {'orange': 19.1, 'apple': 12.1}  {'orange': 1, 'apple': 1}
7         [orange]                 {'orange': 17.1}              {'orange': 1}
8         [orange]                 {'orange': 23.4}             {'orange': -1}
append many strings in a list separated by split() If I write this code:

b=list()
b.append(input())
print(b)

the output will simply be:

["My text"]

But I want output like this:

["My","text"]

So I wrote this code:

b=list()
b.append(input("Enter your text: ").split())
print(b)

But there the output is:

[['My', 'text']]

When I use split(), I get a list inside a list. Why is this, and how can I solve it?
You can simply write

b = input("Enter your text: ").split()
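The nesting happens because split() already returns a list, and append() adds that whole list as a single element. If you do want to grow an existing list word by word, extend() is the alternative:

b = []
b.extend(input("Enter your text: ").split())   # adds each word as its own element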
GDAL installation error "error: command 'x86_64-linux-gnu-gcc' failed with exit status 1" I'm trying to install GDAL with Python, but it fails with an error. The command I use is pip install GDAL.

x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -fno-strict-aliasing -Wdate-time -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -fPIC -I../../port -I../../gcore -I../../alg -I../../ogr/ -I../../ogr/ogrsf_frmts -I../../gnm -I../../apps -I/usr/include/python2.7 -I/usr/local/lib/python2.7/dist-packages/numpy/core/include -I/usr/include -c extensions/gdal_wrap.cpp -o build/temp.linux-x86_64-2.7/extensions/gdal_wrap.o -std=c++11 -I/usr/include/gdal
extensions/gdal_wrap.cpp:3177:27: fatal error: cpl_vsi_error.h: No such file or directory
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1

and

----------------------------------------
Failed building wheel for GDAL
Running setup.py clean for GDAL
Failed to build GDAL
Installing collected packages: GDAL
  Running setup.py install for GDAL ... error
    Complete output from command /usr/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-_spRXy/GDAL/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-NxpUaO-record/install-record.txt --single-version-externally-managed --compile:
    running install
    running build
    running build_py
    creating build
    creating build/lib.linux-x86_64-2.7
    copying gdal.py -> build/lib.linux-x86_64-2.7
    ...
    creating build/temp.linux-x86_64-2.7/extensions
    x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -fno-strict-aliasing -Wdate-time -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -fPIC -I../../port -I../../gcore -I../../alg -I../../ogr/ -I../../ogr/ogrsf_frmts -I../../gnm -I../../apps -I/usr/include/python2.7 -I/usr/local/lib/python2.7/dist-packages/numpy/core/include -I/usr/include -c extensions/gdal_wrap.cpp -o build/temp.linux-x86_64-2.7/extensions/gdal_wrap.o -std=c++11 -I/usr/include/gdal
    extensions/gdal_wrap.cpp:3177:27: fatal error: cpl_vsi_error.h: No such file or directory
    compilation terminated.
    error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
    ----------------------------------------
Command "/usr/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-_spRXy/GDAL/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-NxpUaO-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-_spRXy/GDAL/

I've already tried sudo apt-get install build-essential, but still the same error occurs.
Here is the answer I found that worked: "you might have to change the gdal version to the version installed on your host. So I had to do this since I have gdal==1.11.2 on my host:"

pip install gdal==1.11.2 --global-option=build_ext --global-option="-I/usr/include/gdal/"

Where 1.11.2 should be updated to your gdal version, which can be found in the # define GDAL_RELEASE_NAME line of the /usr/include/gdal/gdal_version.h file (at least on my system running Kubuntu).

Link to original github page with this answer from Basaks, mentioned in the comment above by Craicerjack.
How can I send data to a database from a view in Django? I created a form in my Django project, and I would now like to have this form interact with a database. Basically, when the user inputs some data, it must be sent to a database. Note: I already have a database in my Django project — I defined it in my settings.py — but I must not send the data to that DB; it has to go to a different database, since that database will interact with another Python script.

Now, what I don't know is: how can I use another database in Django? Where should I define the whole second database configuration? This is what my basic view looks like at the moment:

def input(request):
    # if this is a POST request we need to process the form data
    if request.method == 'POST':
        # create a form instance and populate it with data from the request:
        form = InputForm(request.POST)
        # check whether it's valid:
        if form.is_valid():
            # process the data in form.cleaned_data as required
            # ...
            # redirect to a new URL:
            messages.success(request, f"Success")
    # if a GET (or any other method) we'll create a blank form
    else:
        form = InputForm()
    return render(request, "main/data.html", context={"form":form})
You need to define the second database in settings, see: https://docs.djangoproject.com/fr/2.2/topics/db/multi-db/

Then you will just save the form in a particular database like this:

form.save(using='database_name')

Or, if you're using it for a particular model in your project, you can override that model's save method so it is always stored in the other DB:

class SomeModel(models.Model):
    foo = models.CharField(max_length=100)

    def save(self, *args, **kwargs):
        kwargs['using'] = 'database_name'
        super(SomeModel, self).save(*args, **kwargs)
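A minimal sketch of the corresponding DATABASES setting ('database_name' and the file names are placeholders for your real second database):

# settings.py
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
    },
    'database_name': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': os.path.join(BASE_DIR, 'other.sqlite3'),
    },
}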
Split Multiple Values into New Rows I have a dataframe where a few columns may have multiple values in a single observation. Each observation in these rows has a "/" at the end of the observation, regardless of whether or not there are multiple. This means that some of the values look like 'OneThing/' and others like 'OneThing/AnotherThing/'.

I need to take the values where there is more than one value in an observation and split them into individual rows. This is a general example of what the dataframe looks like before:

ID  Date   Name  ColA    ColB    Col_of_Int                         ColC    ColD
1   09/12  Ann   String  String  OneThing/                          String  String
2   09/13  Pete  String  String  OneThing/AnotherThing              String  String
3   09/13  Ann   String  String  OneThing/AnotherThing/ThirdThing/  String  String
4   09/12  Pete  String  String  OneThing/                          String  String

What I want the output to be:

ID  Date   Name  ColA    ColB    Col_of_Int     ColC    ColD
1   09/12  Ann   String  String  OneThing      String  String
2   09/13  Pete  String  String  OneThing      String  String
2   09/13  Pete  String  String  Another Thing String  String
3   09/13  Ann   String  String  OneThing      String  String
3   09/13  Ann   String  String  AnotherThing  String  String
3   09/13  Ann   String  String  ThirdThing    String  String
4   09/12  Pete  String  String  OneThing/     String  String

I've tried the following:

df = df[df['Column1'].str.contains('/')]
df_split = df[df['Column1'].str.contains('/')]
df1 = df_split.copy()
df2 = df_split.copy()
split_cols = ['Column1']
for c in split_cols:
    df1[c] = df1[c].apply(lambda x: x.split('/')[0])
    df2[c] = df2[c].apply(lambda x: x.split('/')[1])
new_rows = df1.append(df2)
df.drop(df_split.index, inplace=True)
df = df.append(new_rows, ignore_index=True)

This works, but I think it is creating new rows after every '/', which means that one new row is being created for every observation with only one value (where I want zero new rows), and two new rows are being created for every observation with two values (only need one), etc. This is particularly frustrating where there are three or more values in an observation, because I am getting several unnecessary rows. Is there any way to fix this so that only observations with more than one value get added to new rows?
Your method would work (I think) if you use df['column_of_interest'] = df['column_of_interest'].str.rstrip('/'), as it would get rid of that annoying / at the end of your observations. However, the loop is inefficient, and the way you have it requires that you know how many observations you maximally have in your column. Here is another way, which I think achieves what you need.

Take this example df:

df = pd.DataFrame({'column_of_interest':['onething/', 'onething/twothings/', 'onething/twothings/threethings/'], 'values1': [1,2,3], 'values2': [5,6,7]})

>>> df
                column_of_interest  values1  values2
0                        onething/        1        5
1              onething/twothings/        2        6
2  onething/twothings/threethings/        3        7

This gets a bit messy because you want to presumably keep the data that is in the columns outside column_of_interest. So, you can temporarily find those and cast them aside, using:

value_columns = [i for i in df.columns if i != 'column_of_interest']

And put them in the index for the following manipulation (which restores them at the end):

new_df = (df.set_index(value_columns)
          .column_of_interest.str.rstrip('/')
          .str.split('/')
          .apply(pd.Series)
          .stack()
          .rename('new_column_of_interest')
          .reset_index(value_columns))

And your new_df then looks like:

>>> new_df
   values1  values2 new_column_of_interest
0        1        5               onething
0        2        6               onething
1        2        6              twothings
0        3        7               onething
1        3        7              twothings
2        3        7            threethings

Or alternatively, using merge:

new_df = (df[value_columns].merge(df.column_of_interest
                                  .str.rstrip('/')
                                  .str.split('/')
                                  .apply(pd.Series)
                                  .stack()
                                  .reset_index(1, drop=True)
                                  .to_frame('new_column_of_interest'),
                                  left_index=True, right_index=True))

EDIT: On the dataframe you posted, this results in:

   ID   Date  Name    ColA    ColB    ColC    ColD new_column_of_interest
0   1  09/12   Ann  String  String  String  String               OneThing
0   2  09/13  Pete  String  String  String  String               OneThing
1   2  09/13  Pete  String  String  String  String           AnotherThing
0   3  09/13   Ann  String  String  String  String               OneThing
1   3  09/13   Ann  String  String  String  String           AnotherThing
2   3  09/13   Ann  String  String  String  String             ThirdThing
0   4  09/12  Pete  String  String  String  String               OneThing
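On pandas 0.25+, the same reshaping is a one-liner with DataFrame.explode, which avoids the apply(pd.Series)/stack dance — a sketch on the example frame above:

out = (df.assign(new_column_of_interest=df.column_of_interest
                   .str.rstrip('/')
                   .str.split('/'))
         .explode('new_column_of_interest')
         .drop(columns='column_of_interest'))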
Python condition to append json items I have no experience with python; I just started looking into it this week:

messages = []
msg_list = ticket.message
for message in msg_list:
    for item in msg_list:
        item_json = json.loads(message.body)
        tmp_item.date = item_json['date']
        tmp_item.time = item_json['time']
        tmp_item.author = item_json['author']
        tmp_item.location = item_json['location']
        tmp_item.message = item_json['message']
        msg_list.append(tmp_item)
return {"payload": msg_list}

Is there a way I can check whether item_json (from message.body) is missing any of the props "date", "time", "author", "location" and "message", and simply skip it, not appending it to msg_list? So basically I just want to append if it meets that criterion; an example would be:

if item_json['date'] and item_json['time'] and item_json['author'].... :
What you refer to as json data (after parsing) is actually a dict in python. To check whether a key exists in a dictionary, the most common way is to use the in operator:

if 'key' in dictionary:
    print(dictionary['key'])  # if key exists
else:
    print("Key doesn't exist")

See Check if a given key already exists in a dictionary
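Applied to the snippet in the question, checking all the required keys at once keeps the condition short:

required = ('date', 'time', 'author', 'location', 'message')
if all(key in item_json for key in required):
    msg_list.append(tmp_item)   # append only when every field is present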
AttributeError: can't set attribute when connecting to sqlite database with flask-sqlalchemy I've been learning the Flask web application framework and feel quite comfortable with it. I've previously built a simple to-do app that worked perfectly. I was working on the same project, but trying to implement it using TDD. I've encountered an error with the database that I've never seen before and don't know how to fix. When I examine my code, I can't see any issue. It also looks identical to the code of the working project, so I really don't know what I am doing wrong.

Here are the errors:

(env) PS C:\coding-projects\task-master-tdd> flask shell
Python 3.8.5 (tags/v3.8.5:580fbb0, Jul 20 2020, 15:43:08) [MSC v.1926 32 bit (Intel)] on win32
App: project [development]
Instance: C:\coding-projects\task-master-tdd\instance
>>> from project import db
>>> db
Traceback (most recent call last):
  File "<console>", line 1, in <module>
  File "c:\coding-projects\task-master-tdd\env\lib\site-packages\flask_sqlalchemy\__init__.py", line 1060, in __repr__
    self.engine.url if self.app or current_app else None
  File "c:\coding-projects\task-master-tdd\env\lib\site-packages\flask_sqlalchemy\__init__.py", line 943, in engine
    return self.get_engine()
  File "c:\coding-projects\task-master-tdd\env\lib\site-packages\flask_sqlalchemy\__init__.py", line 962, in get_engine
    return connector.get_engine()
  File "c:\coding-projects\task-master-tdd\env\lib\site-packages\flask_sqlalchemy\__init__.py", line 555, in get_engine
    options = self.get_options(sa_url, echo)
  File "c:\coding-projects\task-master-tdd\env\lib\site-packages\flask_sqlalchemy\__init__.py", line 570, in get_options
    self._sa.apply_driver_hacks(self._app, sa_url, options)
  File "c:\coding-projects\task-master-tdd\env\lib\site-packages\flask_sqlalchemy\__init__.py", line 914, in apply_driver_hacks
    sa_url.database = os.path.join(app.root_path, sa_url.database)
AttributeError: can't set attribute
>>>

my config.py file:

import os

# load the environment variables from the .env file
from dotenv import load_dotenv
load_dotenv()

# Determine the folder of the top-level directory of this project
BASEDIR = os.path.abspath(os.path.dirname(__file__))

class Config:
    FLASK_ENV = 'development'
    TESTING = False
    DEBUG = False
    SECRET_KEY = os.getenv('SECRET_KEY', default='A very terrible secret key.')
    SQLALCHEMY_DATABASE_URI = os.getenv('DATABASE_URL', default=f"sqlite:///{os.path.join(BASEDIR, 'instance', 'app.db')}")
    SQLALCHEMY_TRACK_MODIFICATIONS = False

class DevelopmentConfig(Config):
    DEBUG = True

class TestingConfig(Config):
    TESTING = True
    SQLALCHEMY_DATABASE_URI = os.getenv('DATABASE_URL', default=f"sqlite:///{os.path.join(BASEDIR, 'instance', 'test.db')}")

class ProductionConfig(Config):
    FLASK_ENV = 'production'

my user model:

from project import db, login_manager
from flask_login import UserMixin
from werkzeug.security import generate_password_hash, check_password_hash

class User(db.Model, UserMixin):
    __tablename__ = 'users'
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String, unique=True)
    hashed_password = db.Column(db.String)

    def __init__(self, username, password):
        self.username = username
        self.hashed_password = generate_password_hash(password)

    def is_password_valid(self, password):
        return check_password_hash(self.hashed_password, password)

    def __repr__(self):
        return '<User {}>'.format(self.id)

@login_manager.user_loader
def load_user(user_id):
    return User.query.get(int(user_id))
Edit: If you're experiencing this, upgrading Flask-SQLAlchemy to >= 2.5 should resolve the issue per https://github.com/pallets/flask-sqlalchemy/issues/910#issuecomment-802098285. Pinning SQLAlchemy to ~1.3 should no longer be necessary.

I ran into this issue a little earlier, but think I've figured out what's going on. SQLAlchemy is automatically installed as a dependency for Flask-SQLAlchemy, and its latest release (1.4.0) introduces the following breaking change:

The URL object is now an immutable named tuple. To modify a URL object, use the URL.set() method to produce a new URL object.

I was able to fix this issue by simply installing the previous version of SQLAlchemy (1.3.23).
Pick a Random Image and Do PIL for Watermark I get an error when I pick a random image in a folder and want to edit it with PIL. My code is:

import os
import random
from PIL import Image
from PIL import ImageDraw
from PIL import ImageFont

def watermark_text(input_image_path, output_image_path, text, pos):
    photo = Image.open(input_image_path)

    # make the image editable
    drawing = ImageDraw.Draw(photo)
    black = (255, 255, 255)
    font = ImageFont.truetype("font.ttf", 40)
    drawing.text(pos, text, fill=black, font=font)
    photo.show()
    photo.save(output_image_path)

if __name__ == '__main__':
    path="./bg"
    files=os.listdir(path)
    d=random.choice(files)
    img = d
    watermark_text(img, '1.jpg', text='Risna Fadillah', pos=(0, 0))

and the error shows like this:

Traceback (most recent call last):
  File "quotes.py", line 26, in
    watermark_text(img, '1.jpg',
  File "quotes.py", line 10, in watermark_text
    photo = Image.open(input_image_path)
  File "/data/data/com.termux/files/usr/lib/python3.8/site-packages/PIL/Image.py", line 2878, in open
    fp = builtins.open(filename, "rb")
FileNotFoundError: [Errno 2] No such file or directory: '1qqq.jpg'

How do I fix it?
I was careless about this — I should have written it like this:

import os
import random
from PIL import Image
from PIL import ImageDraw
from PIL import ImageFont

def watermark_text(input_image_path, output_image_path, text, pos):
    photo = Image.open(input_image_path)

    # make the image editable
    drawing = ImageDraw.Draw(photo)
    black = (255, 255, 255)
    font = ImageFont.truetype("font.ttf", 40)
    drawing.text(pos, text, fill=black, font=font)
    photo.show()
    photo.save(output_image_path)

if __name__ == '__main__':
    path="./bg/"
    files=os.listdir(path)
    d=random.choice(files)
    img = path + d  # os.path.join(path, d) would also work and avoids the trailing-slash issue
    watermark_text(img, '1.jpg', text='Risna Fadillah', pos=(0, 0))

os.listdir returns bare file names, so the directory has to be prepended before opening. Sorry.
Deploying static files for a Wagtail application on Divio I'm struggling to understand how I can serve my static files live. This is my first project I'm trying to deploy, so it's possible I've missed something, and I'm finding it hard to understand which documentation is best to follow here - Wagtail, Divio or Django?

I can view my website with the localhost fine; the static files are read. But when deploying to Divio's test servers, no longer — just Bootstrap stylings. Am I meant to set debug to False somewhere, and if so, where do I set it? The Dockerfile in the Divio project contains this command, which I sense is related to deploying live:

# <STATIC>
RUN DJANGO_MODE=build python manage.py collectstatic --noinput
# </STATIC>

What are the steps needed to transition from operating on the localhost and viewing my static correctly, to having it display in test/live deployments?

I thought I could link them with the settings.py file, but when I try to do this I experience a problem related to the following step:

Step 7/7 : RUN DJANGO_MODE=build python manage.py collectstatic --noinput

It seems to hang almost indefinitely, failing after a long time - the following are the last few lines of my logs:

Copying '/virtualenv/lib/python3.5/site-packages/wagtail/admin/static/wagtailadmin/fonts/opensans-regular.woff'
Copying '/virtualenv/lib/python3.5/site-packages/wagtail/admin/static/wagtailadmin/fonts/wagtail.svg'
Copying '/virtualenv/lib/python3.5/site-packages/wagtail/admin/static/wagtailadmin/fonts/robotoslab-regular.woff'
Copying '/virtualenv/lib/python3.5/site-packages/wagtail/admin/static/wagtailadmin/fonts/opensans-semibold.woff'

Thanks all in advance for your time and help!
In a Divio Cloud project, the settings for things like static file handling and DEBUG are managed automatically according to the server environment (Live, Test or Local). See the table in How to run a local project in live configuration. You can override these manually if you need to, but there is no need whatsoever in normal use.

If you have added settings related to static file handling to your settings.py, try commenting them out - almost certainly, it will just work.
How to access a matrix's elements and pass the matrix as a function argument? My program is supposed to simulate a Bingo game. It receives as input a 5x5 matrix (the Bingo card), the number of elements (which are integers) it should check for on the card, and the series of elements, one by one. The goal is to verify whether or not each element is in the matrix: if so, the program should replace the corresponding element with "XX". The program should proceed in the aforementioned fashion until all elements are verified. If all the elements of any row, column or either diagonal are replaced by "XX", the program is to print the final scenario (the final stage of the matrix), with the correct elements replaced by "XX", and the word BINGO!; otherwise just the final scenario. The first line of the matrix contains the letters B I N G O, identifying each matrix column by its corresponding label letter — "B" for the first, "I" for the second, and so on — so that the input is of the form label_letter-XY, where X and Y represent the numerals.

I've already managed to correctly print the Bingo card, but I'm still not able to iterate over the matrix's lines and columns, verify whether or not the candidate numbers are in those columns, and replace them by "XX". I'm not actually sure what my program is doing, since it only prints the original bingo card, which makes me conclude that I'm not correctly accessing the matrix. If anyone could give me some insight on what I'm doing wrong, I'll be extremely grateful!

m=5 #lines
n=5 #columns/rows
mat=[]
data=[]
for i in range(m):
    col=input().split()
    mat.append(col)
num=int(input())
blank=''

def printbingocard(mat):
    print("+", end=blank)
    print((16)*"-" + "+")
    print("| ", end=blank)
    print("B ", end=blank)
    print("I ", end=blank)
    print("N ", end=blank)
    print("G ", end=blank)
    print("O ", end=blank)
    print("|")
    print("+" + (16)*"=" + "+")
    for i in range(m):
        print("| ", end=blank)
        for j in range(n):
            print(mat[i][j] + " ", end='')
        print("|")
    print("+" + (16)*"-" + "+")

printbingocard(mat)

for i in range(num):
    input=str(input()).split("-")
    input_data.append(input)
    for j in range(n):
        if input_data[i][0]=="B":
            if mat[0][j]==input_data[i][1]:
                mat[0][j]="XX"
                printbingocard(mat)
        if input_data[i][0]=="I":
            if mat[1][j]==input_data[i][1]:
                mat[1][j]="XX"
                printbingocard(mat)
        if input_data[i][0]=="N":
            if mat[2][j]==input_data[i][1]:
                mat[2][j]="XX"
                printbingocard(mat)
        if input_data[i][0]=="G":
            if mat[3][j]==input_data[i][1]:
                mat[3][j]="XX"
                printbingocard(mat)
        if input_data[i][0]=="O":
            if mat[4][j]==input_data[i][1]:
                mat[4][j]="XX"
                printbingocard(mat)

for i in range(m):
    for j in range(n):
        if mat[i][j]== "XX":
            bol=True
        else:
            bol=False
            break

for j in range(n):
    for i in range(m):
        if mat[i][j]== "XX":
            bol=True
        else:
            bol=False
            break

printbingocard(mat)
if bol==True:
    print("BINGO!")

for j in range(n):
    for i in range(m):
        if mat[j][j]=="XX" or mat[i][i]=="XX":
            bol=True
        else:
            bol=False
            break

printbingocard(mat)
if bol==True:
    print("BINGO!")

for j in range(4,n,-1):
    for i in range(1,m,1):
        if mat[i][j]=="XX":
            bol=True
        else:
            bol=False
            break

printbingocard(mat)
if bol==True:
    print("BINGO!")
My take on it: I use the Atom text editor for my Python programming, so I don't have input() functions available and had to randomize my bingo array.

import random
import numpy as np

m=5 #lines
n=5 #columns/rows
mat=[]
branco = ''  # added: the blank separator used by the prints below (missing from the original post)
bingo_numbers = np.linspace(1,n*m,n*m,dtype=int)
remaining_numbers = bingo_numbers # I need this later on to know what numbers are left
random.shuffle(bingo_numbers)
print(bingo_numbers)
completed_lines = 0

for i in range(m):
    col=bingo_numbers[i*5:(i+1)*5]
    mat.append(list(col))

def imprimecartela(mat, completed_lines): # Function to print the bingo card
    print("+", end=branco)
    print((16)*"-" + "+")
    print("| ", end=branco)
    if (completed_lines == 0):
        print(5*"_ ", end=branco)
    elif(completed_lines == 1):
        print("B ", end=branco)
        print(4*"_ ", end=branco)
    elif(completed_lines == 2):
        print("B ", end=branco)
        print("I ", end=branco)
        print(3*"_ ", end=branco)
    elif(completed_lines == 3):
        print("B ", end=branco)
        print("I ", end=branco)
        print("N ", end=branco)
        print(2*"_ ", end=branco)
    elif(completed_lines == 4):
        print("B ", end=branco)
        print("I ", end=branco)
        print("N ", end=branco)
        print("G ", end=branco)
        print("_ ", end=branco)
    else:
        print("B ", end=branco)
        print("I ", end=branco)
        print("N ", end=branco)
        print("G ", end=branco)
        print("O ", end=branco)
    print("|")
    print("+" + (16)*"=" + "+")
    for i in range(m):
        print("| ", end=branco)
        for j in range(n):
            if mat[i][j] != 0: # Check values of <mat>: if non zero print number with 2 digits, if zero print 'XX'
                print(str(mat[i][j]).zfill(2) + " ", end='')
            else:
                print("XX" + " ", end='')
        print("|")
    print("+" + (16)*"-" + "+")

def check_completed_lines(mat):
    completed_lines = 0
    for i in range(m):
        temp = [x[i] for x in mat]
        if (temp == [0,0,0,0,0]):
            completed_lines += 1
    for x in mat:
        if x==[0,0,0,0,0]:
            completed_lines += 1
    if (mat[0][0] == 0 and mat[1][1] == 0 and mat[2][2] == 0 and mat[3][3] == 0 and mat[4][4] == 0):
        completed_lines += 1
    if (mat[0][4] == 0 and mat[1][3] == 0 and mat[2][2] == 0 and mat[3][1] == 0 and mat[4][0] == 0):
        completed_lines += 1
    return completed_lines

imprimecartela(mat,completed_lines)

while (len(remaining_numbers) != 0): # Looping through turns
    call_number = random.choice(remaining_numbers) # <-- Next number
    print("next number is : ", call_number)
    remaining_numbers = np.delete(remaining_numbers, np.where(remaining_numbers==call_number)) # Remove the number so it doesn't occur again
    for i in mat:
        if call_number in i:
            i[i.index(call_number)] = 0 # Change the value of the current round number to 0 in <mat>
    completed_lines = check_completed_lines(mat) # This function checks rows, columns and diagonals for completeness; every completed line adds a letter to "BINGO" on the card
    imprimecartela(mat, completed_lines)
    if completed_lines == 5:
        break # When 5 lines are completed, you win, break

I added comments inside the code to explain the process, but basically you don't need to change the matrix to include 'XX': just change the value to 0, since zero is already not a bingo number, and use your print function to print 'XX' if the value is 0. Cheers.
Scrape table from email and write to CSV (Removing \r\n) - Python I'm trying to scrape the table from an email and remove any special characters (\r\n etc.) before writing to a csv file. I've managed to scrape the data, however the columns are wrapped in '\r\n' which I cannot remove (I'm new to this).

Table attempting to scrape: Table - Image

Python Code:

for emailid in items:
    # getting the mail content
    resp, data = m.fetch(emailid, '(UID BODY[TEXT])')
    text = str(data[0][1])
    tree = BeautifulSoup(text, "lxml")
    table_tag = tree.select("table")[0]
    tab_data = [[item.text for item in row_data.select("td")] for row_data in table_tag.select("tr")]
    print(table_tag)
    for data in tab_data:
        writer.writerow(data)
        print(' '.join(data))

Results:

\r\nQuick No.\r\n \r\nOrder No=\r\n\r\n \r\nPart Number\r\n \r\nDescription\r\n \r\nUOM=\r\n\r\n \r\nOrder Qty\r\n \r\nQty Received\r\n \r\nReceived Date\r\n(dd/mm/yyyy)\r\n \r\nAdditional Information\r\n\r\nE03B1A\r\n \r\nE0015130\r\n \r\nYK71114105=\r\np>\r\n \r\nCOLOUR TOP ASSY (R)=\r\n\r\n \r\nPIECE\r\n \r\n1\r\n \r\n1\r\n \r\n06/10/2020=\r\np>\r\n \r\n\r\nE03B1E\r\n \r\nE0015134\r\n \r\nYK78804497=\r\np>\r\n \r\nDIE BUTTON=\r\np>\r\n \r\nPIECE\r\n \r\n4\r\n \r\n4\r\n \r\n06/10/2020=\r\np>\r\n \r\n

Expected Result:

Quick No.   Order No    Part Number
E03B1A      E0015130    YK71114105
E03B1E      E0015134    YK78804497

Thanks in advance (This is my first post so please be gentle)
to remove those, you'd want to use .strip() on those strings. So try:

tab_data = [[item.text.strip() for item in row_data.select("td")] for row_data in table_tag.select("tr")]

But could I suggest, just let pandas parse the table from the html:

import pandas as pd

for emailid in items:
    # getting the mail content
    resp, data = m.fetch(emailid, '(UID BODY[TEXT])')
    text = str(data[0][1])
    table = pd.read_html(text)[0]
    df_obj = table.select_dtypes(['object'])
    table[df_obj.columns] = df_obj.apply(lambda x: x.str.strip())
    print(table)
    table.to_csv('file.csv', index=False)
What does a , operator do when used in the right hand side of a conditional?

a = 10
b = 20
c = 30
if(a > b,c):
    print('In if')
else:
    print('In else')

Someone posted the above piece of code, and asked why the above always results in 'In if' being printed, regardless of the values of b and c. Although this seems like poor programming style, I am curious what the , operator is doing. I have not been able to find an answer in the documentation so far. Can anyone provide an explanation?
a > b, c is the tuple ((a > b), c).So if a=10, b=20, c=30, then we're asking if the tuple (False, 30) is truish. All non-empty tuples are truish, so this would always trigger the same path through the conditional.
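To make the tuple behaviour visible, here is a quick REPL-style check (a minimal sketch):

a, b, c = 10, 20, 30
expr = (a > b, c)
print(expr)        # (False, 30) - exactly what the condition evaluates to
print(bool(expr))  # True - any non-empty tuple is truthy
print(bool(()))    # False - only the empty tuple is falsy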
Python: understanding lambda operations in a function Suppose I have a function designed to find the largest Y value in a list of dictionaries.s1 = [ {'x':10, 'y':8.04}, {'x':8, 'y':6.95}, {'x':13, 'y':7.58}, {'x':9, 'y':8.81}, {'x':11, 'y':8.33}, {'x':14, 'y':9.96}, {'x':6, 'y':7.24}, {'x':4, 'y':4.26}, {'x':12, 'y':10.84}, {'x':7, 'y':4.82}, {'x':5, 'y':5.68}, ]def range_y(list_of_dicts): y = lambda dict: dict['y'] return y(min(list_of_dicts, key=y)), y(max(list_of_dicts, key=y))range_y(s1)This works and gives the intended result.What I don't understand is the y before the (min(list_of_dicts, key=y). I know I can find the min and max with min(list_of_dicts, key=lambda d: d['y'])['y'] where the y parameter goes at the end (obviously swapping min for max).Can someone explain to me what is happening in y(min(list_of_dicts, key=y)) with the y and the parenthetical?
y is a function, where the function is defined by the lambda statement. The function accepts a dictionary as an argument, and returns the value at key 'y' in the dictionary.min(list_of_dicts, key=y) returns the dictionary from the list with the smallest value under key 'y'so putting it together, you get the value at key 'y' in the dictionary from the list with the smallest value under key 'y' of all dictionaries in the list
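To see the two spellings side by side, a minimal sketch reusing the s1 list from the question:

y = lambda d: d['y']     # pull out the 'y' value of one dict
lowest = min(s1, key=y)  # the dict whose 'y' is smallest
print(y(lowest))         # 4.26 - identical result to:
print(min(s1, key=lambda d: d['y'])['y'])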
ElasticSearch ImportError: cannot import name 'Mapping' from 'elasticsearch.compat' I get this import error when trying to run

from elasticsearch_dsl import Search, A

Full traceback:

ImportError: cannot import name 'Mapping' from 'elasticsearch.compat' (C:\Users\SANA\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\LocalCache\local-packages\Python37\site-packages\elasticsearch\compat.py)

elasticsearch version: 7.13.3
elasticsearch-dsl version: 7.4.0

I have tried:

from collections.abc import Mapping

and can't seem to google my way to an answer
You must have installed elasticsearch_dsl. Install elasticsearch-dsl. Try doing:

pip uninstall elasticsearch_dsl
pip install elasticsearch-dsl

This should work.
psycopg2 out of shared memory and hints of increase max_pred_locks_per_transaction While inserting a lot of data into postgresql 9.1 using a Python script, we are getting the following error on this query:

X: psycopg2.ProgrammingError in /home/hosting/apps/XX_psycopg.py:162 in : Execute 'execute' ( SELECT * FROM xml_fifo.fifo WHERE type_id IN (1,2) ORDER BY type_id, timestamp LIMIT 10 ): out of shared memory
HINT: You might need to increase max_pred_locks_per_transaction

We increased this number but we still get an out of shared memory error (max_pred_locks_per_transaction = 192). Every time we start the script again it runs for some time, then gives this error message. On Postgres 8.1 we did not have this problem. Here is a piece of the postgresql log file:

2012-06-28 02:55:43 CEST HINT: Use the escape string syntax for backslashes, e.g., E'\\'.
2012-06-28 02:55:43 CEST WARNING: nonstandard use of \\ in a string literal at character 271
2012-06-28 02:55:43 CEST HINT: Use the escape string syntax for backslashes, e.g., E'\\'.
2012-06-28 02:55:43 CEST WARNING: nonstandard use of \\ in a string literal at character 271
2012-06-28 02:55:43 CEST HINT: Use the escape string syntax for backslashes, e.g., E'\\'.
2012-06-28 02:56:11 CEST WARNING: there is already a transaction in progress
2012-06-28 02:57:01 CEST WARNING: there is already a transaction in progress
2012-06-28 02:57:01 CEST ERROR: out of shared memory
2012-06-28 02:57:01 CEST HINT: You might need to increase max_pred_locks_per_transaction.
2012-06-28 02:57:01 CEST STATEMENT: SELECT * FROM xml_fifo.fifo WHERE type_id IN (1,2) ORDER BY type_id ASC, timestamp LIMIT 10
2012-06-28 02:57:01 CEST ERROR: out of shared memory
2012-06-28 02:57:01 CEST HINT: You might need to increase max_pred_locks_per_transaction.
2012-06-28 02:57:01 CEST STATEMENT: SELECT * FROM xml_fifo.fifo WHERE type_id IN (1,2) ORDER BY type_id ASC, timestamp LIMIT 10

What would be the problem?
PostgreSQL added new functionality to SERIALIZABLE transactions in version 9.1, to avoid some serialization anomalies which were previously possible at that isolation level. The error you are seeing is only possible when using these new serializable transactions. Some workloads have run into the issue you describe when using serializable transactions in 9.1.One solution would be to use the REPEATABLE READ transaction isolation level instead of SERIALIZABLE. This will give you exactly the same behavior that SERIALIZABLE transactions did in PostgreSQL versions before 9.1. Before deciding to do that, you might want to read up on the differences, so that you know whether it is likely to be worthwhile to instead reconfigure your environment to avoid the issue at the SERIALIZABLE isolation level:http://www.postgresql.org/docs/9.1/interactive/transaction-iso.htmlhttp://wiki.postgresql.org/wiki/SSIIf increasing max_pred_locks_per_transaction doesn't fix it (and you could try going significantly higher without chewing up too much RAM), you could try increasing max_connections (without increasing actual connections used).I worked on the Serializable Snapshot Isolation feature for 9.1, along with Dan R.K. Ports of MIT. The cause of this problem is that the heuristic for combining multiple fine-grained predicate locks into a single coarser-grained lock is really simple in this initial version. I'm sure it can be improved, but any information you could give me on the circumstances under which it is hitting this problem would be valuable in terms of designing a better heuristic. If you could tell me a little bit about the number of CPUs you are using, the number of active database connections, and a bit about the workload where you hit this, I would really appreciate it.Thanks for any info, and my apologies for the problem.
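For completeness, switching a psycopg2 connection to REPEATABLE READ is a one-liner; a minimal sketch (the connection string here is illustrative):

import psycopg2
import psycopg2.extensions

conn = psycopg2.connect("dbname=mydb user=me")  # illustrative connection string
# behaves like pre-9.1 SERIALIZABLE, without the new predicate locks
conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_REPEATABLE_READ)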
How to host Django 1.3.1 in Apache 2.2? I am using Python 2.7.2, Django 1.3.1 and Apache 2.2.22 on Windows XP (win32). Following the documentation I found here, I went step by step, but when the directory section is given as

Alias /media/ C:/Programs/TestDjango/mysite/media/
<Directory C:/Programs/TestDjango/mysite/media/>
    Order deny,allow
    Allow from all
</Directory>

WSGIScriptAlias / C:/Programs/TestDjango/mysite/apache/django.wsgi
<Directory C:/Programs/TestDjango/mysite/apache>
    Order deny,allow
    Allow from all
</Directory>

and Apache is restarted, opening localhost/mysite gives a Microsoft Visual C++ Library runtime error, and the Apache error log shows "Caught ImproperlyConfigured while rendering: Error loading pyodbc module: DLL load failed: A dynamic link library (DLL) initialization routine failed."

My Django app runs in WAMP, but I wish to know where I went wrong using Apache 2.2.22 alone. I followed many Django documentation pages but still get the same result. Please help me find where I went wrong. Thanks. (indentation was fixed by guettli)
I got it solved; it was a version problem. Working with Apache 2.2.21 instead of Apache 2.2.22, it works. I followed the steps in this link:

1. Install Python 2.7.2, Django 1.3.1 and Apache 2.2.21.
2. Install the mod_wsgi module. The module file will be named something like mod_wsgi-win32-ap22py26-2.6.so; get mod_wsgi.
3. Copy it to the modules directory of the Apache installation, e.g. C:/Program Files/Apache Software Foundation/Apache2.2/modules.
4. Rename it to mod_wsgi.so. Right click --> Properties, click Unblock and Apply.
5. Open Apache's http.conf file.
6. Add the line LoadModule wsgi_module modules/mod_wsgi.so before all the other LoadModule entries.
7. Configure Apache for your Django project by adding the following to the end of http.conf:

# Static content
Alias /media/ C:/Programs/TestDjango/mysite/media/
<Directory C:/Programs/TestDjango/mysite/media/>
    Order deny,allow
    Allow from all
</Directory>

# Django dynamic content
WSGIScriptAlias / C:/Programs/TestDjango/mysite/apache/django.wsgi
<Directory C:/Programs/TestDjango/mysite/apache>
    Order deny,allow
    Allow from all
</Directory>

Where icardtest is the Django project root. The paths below icardtest will be specific to your project. This configuration serves all static media via the URL space /media/ and all the rest via WSGI and Django.

8. Create a file django.wsgi and add the following to it:

import os
import sys
sys.path.append('C:/Programs/TestDjango')
os.environ['DJANGO_SETTINGS_MODULE'] = 'mysite.settings'
import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()

9. Restart Apache.
Where to Put Python Utils Folder? I've got a whole bunch of scripts organized like this:

root
    group1
        script1.py
        script2.py
    group2
        script1.py
        script2.py
    group3
        script1.py
        script2.py
    utils
        utils1.py
        utils2.py

All the script*.py files use functions inside the utils folder. At the moment, I append the utils path in the scripts in order to import utils. However, this seems to be bad practice (or "not Pythonic"). In addition, the groups are in actuality not as flat as this, and there are more util folders than listed above. Hence, the append-path solution is getting messier and messier. How can I organize this differently?
Make all your directories importable first, i.e. use __init__.py. Then have a top level script that accepts arguments and invokes scripts based on that. For the long term, what Keith has mentioned about distutils holds true. Otherwise here is a simpler (surely not the best) solution.

Organization:

runscript.py
group1
    __init__.py
    script1.py
utils
    __init__.py
    utils1.py

Invocation:

python runscript -g grp1 -s script1

runscript.py:

import utils

def main():
    script_to_exec = process_args()
    import script_to_exec as script  # pseudocode: Python cannot literally import a variable
    script.main()

main()

Maybe your script can have a main function which is then invoked by runscript. I suggest that you have a script at the top level which imports the script.
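Since the import line above is pseudocode, here is one way it could actually be written with importlib; a sketch, assuming module paths like group1.script1 and that each script exposes a main():

import argparse
import importlib

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('-g', '--group')
    parser.add_argument('-s', '--script')
    args = parser.parse_args()
    # resolve e.g. "group1.script1" as a dotted module path and run its main()
    module = importlib.import_module('%s.%s' % (args.group, args.script))
    module.main()

if __name__ == '__main__':
    main()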
python file operation slowing down on massive text files This python code is slowing down the longer it runs. Can anyone please tell me why? I hope it is not reindexing for every line I query and counting from the start again; I thought it would be some kind of file stream?! From 10k to 20k it takes 2 sec; from 300k to 310k it takes about 5 min, and it keeps getting worse. The code is only running in the ELSE part up to that point, and 'listoflines' is constant at that point (850000 lines in the list) and of type 'list[ ]', as is 'offset', which is just a constant 'int' at that point. The source file has millions of lines, up to over 20 million lines. 'dummyline not in listoflines' should take the same time every time.

with open(filename, "rt") as source:
    for dummyline in source:
        if (len(dummyline) > 1) and (dummyline not in listoflines):
            # RUN compute
            # this part is not reached where I have the problem
        else:
            if dummyalreadycheckedcounter % 10000 == 0:
                print ("%d/%d: %s already checked or not valid " % (dummyalreadycheckedcounter, offset, dummyline) )
            dummyalreadycheckedcounter = dummyalreadycheckedcounter + 1
Actually, the in operation for a list does not take the same time every time; in fact it is O(n), so it gets slower and slower as the list grows. You want to use a set. See here https://wiki.python.org/moin/TimeComplexity

You didn't ask for this, but I'd suggest turning this into a processing pipeline, so your compute part would not be mixed with the dedup logic:

def dedupped_stream(filename):
    seen = set()
    with open(filename, "rt") as source:
        for each_line in source:
            # note: len(each_line), not len(line) as in the original post
            if len(each_line) > 1 and each_line not in seen:
                seen.add(each_line)
                yield each_line

then you can do just

for line in dedupped_stream(...):
    ...

You would not need to worry about deduplication here at all.
Request form flask is empty in GET request I'm trying to make a search form to get some data from my API, but the request form always returns empty. I've read the other posts with a similar problem but I didn't find the answer. I just want to make a search from the main page and display the second page if the button of the form was pressed with some content:

@app.route('/', methods=['GET', 'POST'])
def home():
    print(request.form)
    if 'btn_search' in request.form:
        search = request.args.get('search')
        print(search)
        r = requests.get('http://127.0.0.1:8000/estacionesbykm/?ciudad='+search)
        render_template('estacion_por_ciudad.html', estaciones=json.loads(r.text), ciudad=search)
    else:
        r = requests.get('http://127.0.0.1:8000/estaciones/')
        return render_template('principal.html', estaciones=json.loads(r.text))

And the template of the homepage is also simple:

<h2>Search</h2>
<form method="GET">
    <input name="search" type="text" placeholder="Search city">
    <button name="btn_search" type="submit">Search</button>
</form>
</body>
</html>

Is there a special reason why the form is returning empty all the time? Help would be appreciated.
I assume that you want the data from the form. As you are using a GET request, in Flask you should access the fields with this code:

request.args['key']

You can use request.form[] when you are handling a POST request.
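A minimal sketch of the corrected view (route and field names reused from the question):

from flask import Flask, request

app = Flask(__name__)

@app.route('/')
def home():
    # with <form method="GET">, both the text field and the submit
    # button arrive in the query string, i.e. in request.args
    if 'btn_search' in request.args:
        search = request.args.get('search')
        return 'searching for %s' % search
    return 'no search submitted'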
Dataframe Boxplot in Python displays incorrect whiskers In this simple example it gives the wrong min and max whiskers.

df = pd.DataFrame(np.array([1,2,3, 4, 5]), columns=['a'])
df.boxplot()

Outcome: Following the regular formula (Q3 + 1.5 * IQR) it should be 7 and -1, but as seen in the picture it's 5 and 1. It looks like the formula uses 0.5 instead of 1.5. How can I change back to standard?

Q1 = df['a'].quantile(0.25)
Q2 = df['a'].quantile(0.50)
Q3 = df['a'].quantile(0.75)
print(Q1, Q2, Q3)
IQR = Q3 - Q1
MaxO = (Q3 + 1.5 * IQR)
MinO = (Q1 - 1.5 * IQR)
print("IQR:", IQR, "Max:", MaxO, "Min:", MinO)

Outcome:

2.0 3.0 4.0
IQR: 2.0 Max: 7.0 Min: -1.0

(Q1, Q2, Q3 and IQR are correct, but not Min or Max)
Source: From above the upper quartile, a distance of 1.5 times the IQR is measured out and a whisker is drawn up to the largest observed point from the dataset that falls within this distance. Similarly, a distance of 1.5 times the IQR is measured out below the lower quartile and a whisker is drawn down to the smallest observed point from the dataset that falls within this distance. All other observed points are plotted as outliers.
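The same rule can be checked by hand on the question's data; a minimal sketch:

import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3, 4, 5]})
q1, q3 = df['a'].quantile(0.25), df['a'].quantile(0.75)
iqr = q3 - q1
# the fences are only limits; each whisker stops at the most extreme
# observation that still falls inside its fence
upper_whisker = df['a'][df['a'] <= q3 + 1.5 * iqr].max()  # 5, not 7
lower_whisker = df['a'][df['a'] >= q1 - 1.5 * iqr].min()  # 1, not -1
print(lower_whisker, upper_whisker)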
Scipy curve_fit confusion using bounds and initial parameters on simple data while I've gotten great fits for other datasets, for some reason the following code is not working for a relatively simple set of points. I've tried both a decaying exponential and power, along with initial parameters and bounds. I believe this is exposing my deeper misunderstanding; I appreciate any advice.

snr = [1e10, 5, 1, .5, .1, .05]
tau = [1, 8, 10, 14, 35, 80]
fig1, ax1 = plt.subplots()

def fit(x, a, b, c):
    #c: asymptote
    #return a * np.exp(b * x) + 1.
    return np.power(x,a)*b + c

xlist = np.arange(0,len(snr),1)
p0 = [-1., 1., 1.]
params = curve_fit(fit, xlist, tau, p0)#, bounds=([-np.inf, 0., 0.], [0., np.inf, np.inf]))
a, b, c = params[0]
print(a,b,c)
ax1.plot(xlist, fit(xlist, a, b, c), c='b', label='Fit')
#ax1.plot(snr, tau, zorder=-1, c='k', alpha=.25)
ax1.scatter(snr, tau)
ax1.set_xscale('log')
#ax1.set_xlim(.02, 15)
plt.show()

Update 1: reference figure, following Eric M's code. Will comment in the post below.

Fix for Update 1: xlist = np.arange(0.01,10000,1)/1000+0.01
This worked for me. There were a couple issues. Including my comment. There is also a 'divide by zero' error in your xlist, so I avoided that by adding 0.01 to xlist, and increasing the density of points so the curve is rounded.import numpy as npimport matplotlib.pyplot as pltfrom scipy.optimize import curve_fitsnr = [1e10, 5, 1, .5, .1, .05]tau = [1, 8, 10, 14, 35, 80]fig1, ax1 = plt.subplots()def fit(x, a, b, c): return np.power(x, a)*b + cxlist = np.arange(0.01,10000,1)/1000+0.01xlist = np.append(xlist, 1e10)p0 = [-10, 10., 1.]params = curve_fit(fit, snr, tau, p0)print('Fitting parameters: {}'.format(params[0]))ax1.plot(xlist, fit(xlist, *params[0]), c='b', label='Fit')ax1.scatter(snr, tau)ax1.set_xscale('log') plt.show()
pandas cumsum replace the result calculated by cumsum with the content at the specified position data:

data = [
    {"content": "1", "title": "app sotre", "info": "", "time": 1578877014},
    {"content": "2", "title": "app", "info": "", "time": 1579877014},
    {"content": "3", "title": "pandas", "info": "", "time": 1582877014},
    {"content": "12", "title": "a", "info": "", "time": 1582876014},
    {"content": "33", "title": "apple", "info": "", "time": 1581877014},
    {"content": "16", "title": "banana", "info": "", "time": 1561877014},
    {"content": "aa", "title": "banana", "info": "", "time": 1582876014},
]

my code:

def cumsum(is_test=False):
    cdata = pd.to_numeric(s.str.get('content'), errors='coerce').cumsum()
    if is_test:
        print(list(cdata))
        return list(cdata)
    for i, v in enumerate(s):
        s.iloc[i]['content'] = str(cdata[i])
    return list(s)

assert cumsum(is_test=True)==[1.0, 3.0, 6.0, 18.0, 51.0, 67.0, 'nan']

The result is not right; how do I solve this? And finally, how can I write my code in a Pythonic way? I want to replace the result calculated by cumsum with the content at the specified position. I hope the data ends up as:

[{'content': '1.0', 'title': 'app sotre', 'info': '', 'time': 1578877014}, {'content': '3.0', 'title': 'app', 'info': '', 'time': 1579877014}, {'content': '6.0', 'title': 'pandas', 'info': '', 'time': 1582877014}, {'content': '18.0', 'title': 'a', 'info': '', 'time': 1582876014}, {'content': '51.0', 'title': 'apple', 'info': '', 'time': 1581877014}, {'content': '67.0', 'title': 'banana', 'info': '', 'time': 1561877014}, {'content': 'nan', 'title': 'banana', 'info': '', 'time': 1582876014}]
I suggest looping over the original list data zipped with the Series cdata and then setting the new values:

cdata = pd.to_numeric(s.str.get('content'), errors='coerce').cumsum()
print (cdata)
0     1.0
1     3.0
2     6.0
3    18.0
4    51.0
5    67.0
6     NaN
dtype: float64

for old, new in zip(data, cdata):
    old['content'] = str(new)

print (data)
[{'content': '1.0', 'title': 'app sotre', 'info': '', 'time': 1578877014}, {'content': '3.0', 'title': 'app', 'info': '', 'time': 1579877014}, {'content': '6.0', 'title': 'pandas', 'info': '', 'time': 1582877014}, {'content': '18.0', 'title': 'a', 'info': '', 'time': 1582876014}, {'content': '51.0', 'title': 'apple', 'info': '', 'time': 1581877014}, {'content': '67.0', 'title': 'banana', 'info': '', 'time': 1561877014}, {'content': 'nan', 'title': 'banana', 'info': '', 'time': 1582876014}]
Creating a new DataFrame out of 2 existing Dataframes with Values coming from Dataframe 1? I have 2 DataFrames.

DF1:

   movieId  title                                genres
0  1        Toy Story (1995)                     Adventure|Animation|Children|Comedy|Fantasy
1  2        Jumanji (1995)                       Adventure|Children|Fantasy
2  3        Grumpier Old Men (1995)              Comedy|Romance
3  4        Waiting to Exhale (1995)             Comedy|Drama|Romance
4  5        Father of the Bride Part II (1995)   Comedy

DF2:

   userId  movieId  rating  timestamp
0  1       1        4.0     964982703
1  1       3        4.0     964981247
2  1       6        4.0     964982224
3  1       47       5.0     964983815
4  1       50       5.0     964982931

My new DataFrame should look like this.

DF_new:

userId  Toy Story  Jumanji  Grumpier Old Men  Waiting to Exhale  Father of the Bride Part II
1       4.0
2
3
4

The values will be the ratings of the individual user for each movie. The movie titles are now the columns. The userIds are now the rows. I think it will work by joining via the movieId, but I'm not sure how to do this exactly so that I still have the movie names attached to the movieId. Anybody has an idea?
The problem consists of essentially 2 parts:How to transpose df2, the sole table where user ratings comes from, to the desired format. pd.DataFrame.pivot_table is the standard way to go.The rest is about mapping the movieIDs to their names. This can be easily done by direct substitution on df.columns.In addition, if movies receiving no ratings were to be listed as well, just insert the missing movieIDs directly before name substitution mentioned previously.Codeimport pandas as pdimport numpy as npdf1 = pd.DataFrame( data={ "movieId": [1,2,3,4,5], "title": ["toy story (1995)", "Jumanji (1995)", "Grumpier 33 (1995)", # shortened for printing "Waiting 44 (1995)", "Father 55 (1995)"], })# to better demonstrate the correctness, 2 distinct user ids were used.df2 = pd.DataFrame( data={ "userId": [1,1,1,2,2], "movieId": [1,2,2,3,5], "rating": [4,5,4,5,4] })# 1. Produce the main tabledf_new = df2.pivot_table(index=["userId"], columns=["movieId"], values="rating")print(df_new) # already pretty closeOut[17]: movieId 1 2 3 5userId 1 4.0 4.5 NaN NaN2 NaN NaN 5.0 4.0# 2. map movie ID's to titles# name lookup datasetdf_names = df1[["movieId", "title"]].set_index("movieId")# strip the last 7 characters containing year# (assume consistent formatting in df1)df_names["title"] = df_names["title"].apply(lambda s: s[:-7])# (optional) fill unrated columns and sortfor movie_id in df_names.index.values: if movie_id not in df_new.columns.values: df_new[movie_id] = np.nanelse: df_new = df_new[df_names.index.values]# replace IDs with titlesdf_new.columns = df_names.loc[df_new.columns, "title"].valuesResultdf_newOut[16]: toy story Jumanji Grumpier 33 Waiting 44 Father 55userId 1 4.0 4.5 NaN NaN NaN2 NaN NaN 5.0 NaN 4.0
how restarting game on pygame works How can I restart the game with user input? I searched all over the place and I wasn't able to discover how I can restart my game. I just want to press ESC and have my game restart; after knowing how to do that I will implement a button. But how can I restart my game? This is my main loop:

while True:
    pygame.time.Clock().tick(fps)
    for event in pygame.event.get():
        if event.type == QUIT:
            pygame.quit()
        #input
        elif event.type == pygame.KEYDOWN:
            #snake
            if event.key == ord('w'):
                change_to = 'UP'
            if event.key == ord('s'):
                change_to = 'DOWN'
            if event.key == ord('a'):
                change_to = 'LEFT'
            if event.key == ord('d'):
                change_to = 'RIGHT'
            #snake2
            if event.key == pygame.K_UP:
                change2_to = 'UP'
            if event.key == pygame.K_DOWN:
                change2_to = 'DOWN'
            if event.key == pygame.K_LEFT:
                change2_to = 'LEFT'
            if event.key == pygame.K_RIGHT:
                change2_to = 'RIGHT'
            #quit game
            if event.key == pygame.K_ESCAPE:
                #here is where it was supposed to restart the game

I cut the major part so it wouldn't be so long, but I don't know how to restart my game.
I think it should be something like this# importsrestart = Falsewhile running: if restart: # create instance of all objects used # score = 0 # Default values should be initialised # Keyboard events go here # code for restarting if event.key == pygame.K_ESCAPE: restart = True
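A slightly fuller sketch of the same idea: put all mutable state in one reset function and call it both at startup and on ESC. Names here are illustrative, not taken from the question's full game:

import pygame

pygame.init()
screen = pygame.display.set_mode((400, 300))

def reset_game():
    # every piece of mutable game state goes back to its starting value
    global change_to, change2_to, score
    change_to, change2_to = 'RIGHT', 'LEFT'
    score = 0

reset_game()  # initial setup reuses the same code path as a restart
running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
        elif event.type == pygame.KEYDOWN and event.key == pygame.K_ESCAPE:
            reset_game()  # ESC restarts instead of quitting
pygame.quit()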
Python does deepcopy of an object duplicate its static variables? I'm used to code in C and Java and I just got into Python.I have a Class Obj that has 2 static class variables a and b and has 2 instance variables x and y. I have an instance of Obj obj. During the program I need to make copies of obj (i.e. obj2) so that obj.x and obj2.x are not the same object, but obj.a and obj2.a are the same object (same pointer).If I make something like obj.a = foo, then obj2.a == foo should be true.I'm creating obj2 by making obj2 = copy.deepcopy(obj), but they are not sharing the pointer, it's creating another instance obj2.a and using more memory then needed.I need them to work exactly like static variables in Java. How can I do this?
Python has a specific way of working with static fields of classes. If you change the static field of a class by accessing it through an object, you will change the value only for that object.

obj.a = foo  # changes the field a only for obj

But if you change the field by accessing it through the class, it will change the value for all instances of this class.

Obj.a = foo  # changes the field a for all instances

Also, if you want to compare references you should use the is keyword:

class Dog:
    type = "Dog"

a = Dog()
from copy import deepcopy
b = deepcopy(a)
a.type is b.type
>> True
a.type == b.type
>> True
a.type = "Cat"
a.type is b.type
>> False
b.type is Dog.type
>> True
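Applying this to the exact scenario in the question, a minimal sketch: shared class attribute, deep-copied instance data.

import copy

class Obj:
    a = None            # class-level: reachable through every instance
    def __init__(self, x):
        self.x = x      # instance-level: duplicated by deepcopy

obj = Obj(x=[1, 2])
obj2 = copy.deepcopy(obj)

Obj.a = 'foo'             # set via the class, never via an instance
print(obj.a, obj2.a)      # foo foo - both read the shared class attribute
print(obj.x is obj2.x)    # False - instance data was deep-copied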
Is `asyncio.open_connection(host, port)` blocking? I am new to the asyncio library and am struggling with the behavior of asyncio.open_connection. I have created a task that has await asyncio.open_connection(host, port) within it. I want the call to open_connection to be blocking, that is, not to yield to the event loop until the connection is established. However, my experience suggests that it is not blocking and yields to the loop. So here I have two questions:

I want to make sure: does await asyncio.open_connection really yield to the event loop?
And if yes, what is the best way to avoid this?
Yes, it yields to event loop.In asyncio's source code:async def open_connection(host=None, port=None, *, limit=_DEFAULT_LIMIT, **kwds): """A wrapper for create_connection() returning a (reader, writer) pair. The reader returned is a StreamReader instance; the writer is a StreamWriter instance. The arguments are all the usual arguments to create_connection() except protocol_factory; most common are positional host and port, with various optional keyword arguments following. Additional optional keyword arguments are loop (to set the event loop instance to use) and limit (to set the buffer limit passed to the StreamReader). (If you want to customize the StreamReader and/or StreamReaderProtocol classes, just copy the code -- there's really nothing special here except some convenience.) """ loop = events.get_running_loop() reader = StreamReader(limit=limit, loop=loop) protocol = StreamReaderProtocol(reader, loop=loop) transport, _ = await loop.create_connection( lambda: protocol, host, port, **kwds) writer = StreamWriter(transport, protocol, reader, loop) return reader, writerit calls loop.create_connection which is also await call, and about loop.create_connection:This method will try to establish the connection in the background. When successful, it returns a (transport, protocol) pair.Says the Docs. So it is then yielding control to event loop and let other coroutines run, while previous coroutine is waiting in await for connection to be established.If you absolutely sure you want to block the thread that is running the event loop then you can just use low-level socket. Honestly I really am against this idea because there's not many reason to do so.Just a minor addition, I saw you wasn't accepting answers on your previous questions. People write answers spending their own time and effort to help other peoples in help. If answer solved your questions then marking them as answer is a way to thank their efforts! Please refer Stackoverflow tour for more tips!
Show django-debug-toolbar to specific users I have seen this question over the issue of DjDT. However, when I implement it, it gives an error: 'WSGIRequest' object has no attribute 'user'

This is my code:

def show_toolbar(request):
    return not request.is_ajax() and request.user and request.user.username == 'ptar'

DEBUG_TOOLBAR_CONFIG = {
    'SHOW_TOOLBAR_CALLBACK': 'libert.settings.show_toolbar',
}

MIDDLEWARES:

MIDDLEWARE = [
    'whitenoise.middleware.WhiteNoiseMiddleware',
    'django.middleware.security.SecurityMiddleware',
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.common.CommonMiddleware',
    "debug_toolbar.middleware.DebugToolbarMiddleware",
    #'django.middleware.csrf.CsrfViewMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'session_security.middleware.SessionSecurityMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',
    'django.middleware.clickjacking.XFrameOptionsMiddleware',
    'online_users.middleware.OnlineNowMiddleware',
    #'user_visit.middleware.UserVisitMiddleware',
    #'django_htmx.middleware.HtmxMiddleware',
]
I got this working by putting the DebugToolbarMiddleware after the AuthenticationMiddleware. Thank you @Flimm for taking me in that direction.
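Concretely, with the middleware list from the question, the reordering would look like this (a sketch; only the toolbar line moves):

MIDDLEWARE = [
    'whitenoise.middleware.WhiteNoiseMiddleware',
    'django.middleware.security.SecurityMiddleware',
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.common.CommonMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    # moved below AuthenticationMiddleware so request.user exists
    # by the time SHOW_TOOLBAR_CALLBACK runs
    'debug_toolbar.middleware.DebugToolbarMiddleware',
    'session_security.middleware.SessionSecurityMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',
    'django.middleware.clickjacking.XFrameOptionsMiddleware',
    'online_users.middleware.OnlineNowMiddleware',
]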
Transform a HTML in CSV Using Python and Javascript I have a question: I'm working on a program that needs to take data from a website, but that site doesn't have any API. So I'm thinking of combining JavaScript and Python. I'm using JavaScript to transform the HTML into this data:

<html xmlns="http://www.w3.org/1999/xhtml"><head></head><body>BLUE - Amil Ltda14/07/2020;;102636;Name censured;213113;10101039;1;Única;20/09/2020;102636;HCRIANÇASJ;83,00; <br>BLUE18 - Amil Ltda21/07/2020;;102636;Name Censured Again;213029;10101039;1;Única;20/09/2020;102636;HCRI;83,00;

But Python interprets it as one string, and I need to convert it into a CSV- or JSON-like format. I'm trying to use .replace("<br>", "\n") but it didn't work. Plus, I need to delete the following section:

<html xmlns="http://www.w3.org/1999/xhtml"><head></head><body>BLUE - Amil Ltda14/07/2020
const str = `<html xmlns="http://www.w3.org/1999/xhtml"><head></head><body>BLUE - Amil Ltda14/07/2020;;102636;Name censured;213113;10101039;1;Única;20/09/2020;102636;HCRIANÇASJ;83,00; <br>BLUE18 - Amil Ltda21/07/2020;;102636;Name Censured Again;213029;10101039;1;Única;20/09/2020;102636;HCRI;83,00;`;const lines = str.split(/<br>/gs);for (let i = 0; i < lines.length; i++) { lines[i] = lines[i].replace(/(.*)BLUE\d*\s-\sAmil\sLtda\d+\/\d+\/\d+;;/, '');}console.log(lines);
Can Plotly timeline be used / reproduced in Jupyter Notebook Widget? The plotly plotly.express.timeline is marvelous, but creates it's own figure. It seems like I need to embed this visual in a FigureWidget to get it to play nice with the layout in a Jupyter Notebook. So I am trying to re-create the plot using the plotly.graph_objects.Bar() that px.timeline() is built upon.Unfortunately, I can't figure out how to accomplish this. It appears that the values for the bars are added to the 'base' vector (as a relative value) not used as absolute positions. Plotly does not appear to understand datetime.timedelta() objects. Printing the timeline() figure version shows the values asan array of floating point values which it isn't clear how they are computed. I've tried simply copying them, but this ends up with plotly thinking the x axis isn't a datetime axis.Any clue would be most welcome. Either how to use the Box() to draw the appropriate figure, or how to embed/animate/layout the px.timeline() figure in a notebook.import pandas as pdimport plotly.express as pximport plotly.graph_objects as gofrom datetime import datetime# the data:df = pd.DataFrame([ dict(Task="one", Start=datetime(2009,1,1), Finish=datetime(2009,4,28)), dict(Task="two", Start=datetime(2009,5,5), Finish=datetime(2009,7,15)), dict(Task="three", Start=datetime(2009,7,20), Finish=datetime(2009,9,30))])# working plotly express figure:pxfig = px.timeline(df, x_start="Start", x_end="Finish", y="Task")pxfig.show() # looks great# Broken bar figure:plainfig = go.Figure()plainfig.add_bar(base=df['Start'],# x=pxfig.data[0].x, # this breaks the axis as they are not of type datetime.# x=df['Finish']-df['Start'], # this doesn't produce the right plot x=df['Finish'], # these appear to be relative to base, not absolute y=df['Task'], orientation='h')plainfig.show()# looking at the two shows interesting differences in the way the x data is storedprint(pxfig)print(plainfig)Figure({ 'data': [{'alignmentgroup': 'True', 'base': array([datetime.datetime(2009, 1, 1, 0, 0), datetime.datetime(2009, 5, 5, 0, 0), datetime.datetime(2009, 7, 20, 0, 0)], dtype=object), 'x': array([1.01088e+10, 6.13440e+09, 6.22080e+09]), 'xaxis': 'x', 'y': array(['one', 'two', 'three'], dtype=object), 'yaxis': 'y'}], 'layout': {'barmode': 'overlay', 'legend': {'tracegroupgap': 0}, 'margin': {'t': 60}, 'template': '...', 'xaxis': {'anchor': 'y', 'domain': [0.0, 1.0], 'type': 'date'}, 'yaxis': {'anchor': 'x', 'domain': [0.0, 1.0], 'title': {'text': 'Task'}}}})Figure({ 'data': [{'base': array([datetime.datetime(2009, 1, 1, 0, 0), datetime.datetime(2009, 5, 5, 0, 0), datetime.datetime(2009, 7, 20, 0, 0)], dtype=object), 'orientation': 'h', 'type': 'bar', 'x': array([datetime.datetime(2009, 4, 28, 0, 0), datetime.datetime(2009, 7, 15, 0, 0), datetime.datetime(2009, 9, 30, 0, 0)], dtype=object), 'y': array(['one', 'two', 'three'], dtype=object)}], 'layout': {'template': '...'}})
I can't answer how to embed the timeline in a FigureWidget, but I think I have the answer to your original problem of getting the timeline to play nicely with the jupyter notebook layout. I'm guessing you want to be able to update the timeline interactively?I have gotten around this problem by embedding the figure produced by px.timeline in an output widget. Then whenever I need the figure to be updated (from a button callback, for example) I just clear the output in the output widget, create a new timeline figure and display that new figure. It's not the most elegant way of doing things but it gets the job done.import ipywidgets as widgetsfrom IPython.display import display, clear_outputimport pandas as pdimport plotly.express as pxfrom datetime import datetimeoutput = widgets.Output()df = pd.DataFrame([ dict(Task="one", Start=datetime(2009,1,1), Finish=datetime(2009,4,28)), dict(Task="two", Start=datetime(2009,5,5), Finish=datetime(2009,7,15)), dict(Task="three", Start=datetime(2009,7,20), Finish=datetime(2009,9,30))])updated_df = pd.DataFrame([ dict(Task="one", Start=datetime(2009,1,1), Finish=datetime(2009,4,28)), dict(Task="two", Start=datetime(2009,5,5), Finish=datetime(2009,7,15)), dict(Task="three", Start=datetime(2009,7,20), Finish=datetime(2009,9,30)), dict(Task="four", Start=datetime(2009,10,5), Finish=datetime(2009,10,10))])# display the original timeline figurepxfig = px.timeline(df, x_start="Start", x_end="Finish", y="Task")with output: display(pxfig)# create a button which when pressed will update the timeline figurebutton = widgets.Button(description='update figure')def on_click(button): with output: clear_output() new_pxfig = px.timeline(updated_df, x_start="Start", x_end="Finish", y="Task") display(new_pxfig)button.on_click(on_click)display(button)
Cannot convert int string into chr string I am very new to Python coding and am currently taking courses on Grok Learning.There is a specific question I am stuck on, I have tried everything I can think of. It is probably obvious as hell but I am completely braindead with this one. Here is my code and error message:values = int(input("Codes: "))separated_values = values.split()for value in separated_values: print(chr(value))error:Traceback (most recent call last): File "program.py", line 1, in <module> values = int.split(" ")AttributeError: type object 'int' has no attribute 'split'
You are converting the inputted str into an int. You need to keep it as a str in order to split it, so remove the "int(...)" from line 1. You need to convert each individual value into an int in the for loop instead. So:values = input("Codes: ")separated_values = values.split()for value in separated_values: print(chr(int(value)))
How do I get "@app.before_request" to run only once? I have a flask web app and I wanted a function to be called every time the page loads. I got it to work using "@app.before_request", my only problem is, I have 4 requests that are being made on every page load.Here's my logs in my console127.0.0.1 - - [14/Jun/2022 17:54:47] "GET / HTTP/1.1" 200 -127.0.0.1 - - [14/Jun/2022 17:54:48] "GET /static/style.css HTTP/1.1" 304 -127.0.0.1 - - [14/Jun/2022 17:54:48] "GET /static/profile.jpg HTTP/1.1" 304 -127.0.0.1 - - [14/Jun/2022 17:54:48] "GET /favicon.ico HTTP/1.1" 404 -Obviously it's running before every single request, including requests just for loading my html and css files. I want to limit it so that it only runs once and not 4 times. I'm having it add +1 to a count on a database and since 4 requests are being run, it's running 4 times and adding 4 every time.Here's my before_request class and function, that I want to somehow limit to running only once (maybe the initial request) and not all [email protected]_requestdef before_request(): dbcounter = handler() print(dbcounter)@app.route('/')def home(): count = handler() return render_template("index.html") if __name__ == "__main__": app.run(host='0.0.0.0', port=80)
@app.before_first_request is what solved it!
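For reference, a minimal sketch of that hook. Note that it fires once per worker process, before the first request, rather than once per page load (and the hook was deprecated in later Flask releases); if strict per-page counting is needed, filtering request.path inside before_request is another option:

@app.before_first_request
def run_once():
    dbcounter = handler()
    print(dbcounter)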
Converting dataframe to list of tuples changes datetime.datetime to int I have some code I wrote using Pandas which does the exact processing I want, but unfortunately is slow. In an effort to speed up processing times, I have gone down the path of converting the dataframe to a list of tuples, where each tuple is a row in the dataframe.I have found that the datetime.datetime objects are converted to long ints, 1622623719000000000 for example.I need to calculate the time difference between each row, so my thought was 'ok, I'm not great at python/pandas, but I know I can do datetime.fromtimestamp(1622623719000000000) to get a datetime object back.Unfortunately, datetime.fromtimestamp(1622623719000000000) throws OSError: [Errno 22] Invalid argument.So, off to Google/SO to find a solution. I find this example which shows dividing the long int by 1e3. I try that, but still get 'invalid argument.'I play around with the division of the long int, and dividing by 1e9 gets me the closest to the original datetime.datetime value, but not quite.How do I successfully convert the long int back to the correct datetime value?Code to convert string format to datetime:df.start_time = pd.to_datetime(df.report_date + " " + df.start_time)Info on dataframe:<class 'pandas.core.frame.DataFrame'>RangeIndex: 46 entries, 0 to 45Data columns (total 19 columns):report_date 46 non-null object.........start_time 46 non-null datetime64[ns].........dtypes: datetime64[ns](1), float64(7), int64(1), object(10)memory usage: 6.9+ KBNoneMy test code:print("DF start time", df.start_time[5], "is type", type(df.start_time[5]))print("list start time", tup_list[5][7], "is type", type(tup_list[5][7]),"\n")print("Convert long int in row tuple to datetime")print(datetime.fromtimestamp(int(1622623719000000000/1e9)))Output:DF start time 2021-06-02 08:16:33 is type <class 'pandas._libs.tslibs.timestamps.Timestamp'>list start time 1622623719000000000 is type <class 'int'> Convert int in row tuple to datetime2021-06-02 03:48:39
Change the dtype of your column start_time to convert Timestamp to an integer (nanoseconds):df = pd.DataFrame({'start_time': ['2021-06-02 08:16:33']}) \ .astype({'start_time': 'datetime64'})>>> df start_time0 2021-06-02 08:16:33>>> df['start_time'].astype(int)0 1622621793000000000 # NOT 1622623719000000000Name: start_time, dtype: int64>>> pd.to_datetime(1622621793000000000) # RightTimestamp('2021-06-02 08:16:33')>>> pd.to_datetime(1622623719000000000) # WrongTimestamp('2021-06-02 08:48:39')
Python : making instances of a class with required parameters dynamically I have seen other links similar to my problem, but I have another problem. In a part of my code, in a function, I need to pass the class, and in that function I want to make instances of that class dynamically. For example, here is the class and calling the function:

class obj:
    def __init__(self, id:int, param1:float, param2:int):
        self.id = id
        self.param1 = param1
        self.param2 = param2

data = db.get_db('test.txt',obj)

and in the function I have the class parameter names and types and some data which I want to cast into a list of instances of the above class. Here is some of the code:

data = []
for line in raw_data:
    line_data = line.split(',')
    for i, c in enumerate(_db_info.column_names):
        casted_data = _db_info.column_types[c](line_data[i])
        setattr(_class, c, casted_data)
    _instance = copy.deepcopy(_class)
    data.append(_instance)
return data

And there is one problem: every time I insert data into _class, despite using deepcopy, the _instance stays the same object as the _class, and the resulting list is a list of the last object, because the last object is altered at the end and _class is the last object at the end. I think it's because the passed object is the class itself, and it should be mutable for the deepcopy to change the id. I also tried first making an instance and then setting the parameters, but the parameters are required, so I couldn't do that. I think the solution is something that makes an instance and sets the parameters of a class dynamically at the same time.
Sounds like you want something like this:def convert_data(obj_cls, raw_data, column_names, column_types): """ Parse raw_data (an iterable of comma-separated strings) into obj_cls objects. :param raw_data: Raw data of strings. :param column_names: Column names (obj_cls arguments). :param column_types: Column types (used to cast each column). :return: Iterable of obj_cls objects. """ for line in raw_data: line_data = line.split(",") # Split data # Cast and name each column named_data = { name: cast(value) for (name, cast, value) in zip(column_names, column_types, line_data) } yield obj_cls(**named_data) # Create instance using kwargs# ---class MyObject: def __init__(self, id: int, param1: float, param2: int): self.id = id self.param1 = param1 self.param2 = param2 def __repr__(self): return f"<obj id={self.id} param1={self.param1} param2={self.param2}>"raw_data = [ "12,34.5,8", "13,45.6,9", "14,56.7,10",]converted_data = list( convert_data( MyObject, raw_data, column_names=["id", "param1", "param2"], column_types=[int, float, int], ))print(converted_data)The output is[ <obj id=12 param1=34.5 param2=8>, <obj id=13 param1=45.6 param2=9>, <obj id=14 param1=56.7 param2=10>]
Initialize Vaex Dataframe Column to a value I want to initialize a column of my vaex dataframe to the int value 0. I have the following:

right_csv = "animal_data.csv"
vaex_df = vaex.open(right_csv, dtype='object', convert=True)
vaex_df["initial_color"] = 0

But this will throw an error for line 3, complaining that vaex was expecting a str Expression and got an integer instead. How do I make a vaex expression set every row of a column to a single value?
Good question, the most memory efficient way now (vaex-core v2.0.2, vaex v3) is:df['test'] = vaex.vrange(0, len(df)) # add a 'virtual range' column, which takes no memorydf['test'] = df['test']* 0 + 111 # multiply by zero, and add the initial valueWe should probably have a more convenient way to do this, I opened https://github.com/vaexio/vaex/issues/802 for this.
Gcc error, No such file or directory "Python.h" -- installing pyAudio on centOS7 I have python 3.6.8 installed on CentOS 7 and I'm trying to install pyaudio with sudo python3.6 -m pip install pyaudio. This format worked to install a number of other things right beforehand, but if I try to use it here I get the following error:

src/_portaudiomodule.c:28:10: fatal error: Python.h: No such file or directory
 #include "Python.h"
          ^~~~~~~~~~
compilation terminated.
error: command 'gcc' failed with exit status 1
----------------------------------------

pip install pyaudio yields the same results. I have read the question and answer here but I still cannot figure it out. Any advice on installation? Thank you in advance!
fatal error: Python.h: No such file or directory

It looks like pyaudio is compiling some C code that requires Python.h. To fix your issue check this answer https://stackoverflow.com/a/21530768/9799292 (also, "pip install pyaudio" prints "bash: pip: command not found"). To fix this, try to install pip by running this command:

sudo yum install python3-pip
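On CentOS the missing header usually comes from the Python development package; the exact package name here is an assumption, since it depends on how Python 3.6 was installed (distro package vs. IUS/SCL builds):

sudo yum install python3-devel     # provides Python.h
sudo yum install python3-pip       # provides the pip command
sudo python3.6 -m pip install pyaudio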
Python - Delete a file with a certain character in it I have a lot of duplicate files in a folder and I would like to delete the duplicates. As of now I have FileA.jpg and FileA(1).jpg. I would like to make a short script that opens a directory, finds any file name that has a ( in it, and then deletes it. How would I do this?
You can use the os package:

import os

directory = "/path/to/dir"
for filePath in os.listdir(directory):
    if "(" in filePath:
        # join with the directory: os.listdir returns bare file names,
        # so removing filePath alone only works from inside that directory
        os.remove(os.path.join(directory, filePath))
Why does VS-Code Autopep8 format 2 white lines?

print("Hello")

def world():
    print("Hello")

world()

Gets corrected to:

print("Hello")


def world():
    print("Hello")


world()

I have tried to:
Reinstall Visual Studio Code
Reinstall Python 3.8
Computer Reboot
Using other formatters like Black and yapf, but got the same result
Because autopep8 follows PEP8 which suggests 2 blank lines around top-level functions. Surround top-level function and class definitions with two blank lines.
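If single blank lines are wanted anyway, autopep8 can be told to skip those checks (E302/E303 are the pycodestyle codes for blank-line spacing); in VS Code's settings.json this would look roughly like the following, assuming the classic Python extension's setting key:

"python.formatting.autopep8Args": ["--ignore", "E302,E303"]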
Debugging Python: Why don't my variables update? I'm using PyCharm 2019.2 Professional, Win 10 x64, Python 3.7, and IPython 7.11.1. When running a script in debug mode and hitting a breakpoint, I can execute statements in the IPython prompt. However, I (sometimes?) cannot change the variables' values. For example, I have a dataframe and check some condition to get a bool Series, which I then sum up to check the number of truths in the dataframe, and if there are none, I append another dataframe (note that this is a simplified example; I'm not looking to improve this specific snippet):

a_lim = 0.01
my_mask = df_old['A'] < a_lim
if sum(my_mask) == 0:
    df_new = generate_new_df()
    df_old = df_old.append(df_new, ignore_index=True) \
        .sort_values(by=['A'], ascending=True) \
        .reset_index(drop=True)
pass

Let's assume df_new contains one row of data that evaluates as True. If I set a breakpoint at the if-statement, sum(my_mask) would be 0. If I set my breakpoint at pass, I can check df_old and see the added rows from df_new. At that point, sum(my_mask) is still 0. That is fine.

My Problem: Stopped at pass, I evaluate my_mask = df_old['A'] < a_lim. Then, I check sum(my_mask) and it still returns 0. However, if I evaluate sum(df_old['A'] < a_lim), I will get 1 (the expected result). What is happening behind the scenes, that Python/IPython seems to update variables selectively? Thanks!
I have encountered this bug as well. I have always assumed it is a PyCharm bug not Python. Might be worth raising a bug with JetBrains, think you can do that here:https://youtrack.jetbrains.com/issues/PY
Is it possible to limit Flask POST data size on a per-route basis? I am aware it is possible to set an overall limit on request size in Flask with:app.config['MAX_CONTENT_LENGTH'] = 16 * 1024 * 1024BUT I want to ensure that one specific route will not accept POST data over a certain size.Is this possible?
You'll need to check this for the specific route itself; you can always test the content length; request.content_length is either None or an integer value:cl = request.content_lengthif cl is not None and cl > 3 * 1024 * 1024: abort(413)Do this before accessing form or file data on the request.You can make this into a decorator for your views:from functools import wrapsfrom flask import request, abortdef limit_content_length(max_length): def decorator(f): @wraps(f) def wrapper(*args, **kwargs): cl = request.content_length if cl is not None and cl > max_length: abort(413) return f(*args, **kwargs) return wrapper return decoratorthen use this as:@app.route('/...')@limit_content_length(3 * 1024 * 1024)def your_view(): # ...This is essentially what Flask does; when you try to access request data, the Content-Length header is checked first before attempting to parse the request body. With the decorator or manual check you just made the same test, but a little earlier in the view lifecycle.
Win7 query hardware keyboard Caps Lock current state I'm writing a Tkinter application with Python 2.7 on Windows 7. I want to query the current state of the hardware keyboard Caps Lock without capturing keyboard events, sending them anywhere, or toggling it.

Does the OS keyboard interrupt handler take on a modal state when the user physically presses the hardware keyboard Caps Lock key, or is Caps Lock an internal logic state within the keyboard itself? Is there a Python means to query the current state of Caps Lock?

I've been searching for YEARS, read thousands of posts relating to keyboards, and all I find is keyboard event capturing and toggling.
GetKeyState is the Windows API that you would use to find out the current state of the capslock key in C/C++, so using ctypes you could do something like this:import ctypesVK_CAPITAL = 0x14if ctypes.windll.user32.GetKeyState(VK_CAPITAL) & 1: print "Caps Lock On"else: print "Caps Lock Off"And no, the capslock functionality isn't implemented in the keyboard itself. The keyboard just tells the computer when the Caps Lock key is pressed. Windows then keeps track of capslock state itself. It even has to tell the keyboard when to turn the capslock indicator on or off. The keyboard won't do this on its own.
How do I get values from a dictionary run them through an equation and return the key with the greatest value So my assignment has been easy up to this point. Using Python 3.

GetSale - Finds the maximum expected value of selling a stock. The expected sale value of a stock is the current profit minus the future value of the stock:

Expected Sale Value = ((Current Price - Buy Price) - Risk * Current Price) * Shares

The GetSale function should calculate this value for each stock in the portfolio, and return the stock symbol with the highest expected sale value. We are using 3 separate dictionaries: Names, Prices and Exposure. For GetSale I know I need to call the Prices and Exposure dictionaries to get the values for the equation; however, I have no idea how to get those values and run them. So far this is the code:

Names = {}
Prices = {}
Exposure = {}

def AddName():
    name = input('Please enter the company name: ')
    stock_symbol = input('Please enter the company stock symbol: ')
    Names[name] = stock_symbol

def AddPrices():
    stock_symbol = input('Please enter the company stock symbol: ')
    Buy_Price = float(input('Please enter the buy price: '))
    Current_Price = float(input('Please enter the current price: '))
    Prices[stock_symbol] = 'Buy_Price:', [Buy_Price], 'Current Price', [Current_Price]

def AddExposure():
    stock_symbol = input('Please enter the company stock symbol: ')
    Risk = float(input('Please enter the risk of the stock: '))
    Shares = float(input('Please enter the shares of the stock: '))
    Exposure[stock_symbol] = 'Risk:', [Risk], 'Shares:', [Shares]

def AddStock():
    AddName()
    AddPrices()
    AddExposure()

I know that it must somehow be done with a loop, since it will have to run the equation over and over to find the greatest Expected Sale Value and then return the Stock Symbol of the greatest one.

def GetSale():
    for stock_symbol, Buy_Price, Current_Price in Prices.items():

P.S. I'm sorry if it isn't very clear and specific. I tried to make it to the point, so please forgive me; it's only my second post.
How do I get values from a dictionaryd.values() run them through an equation(equation(value) for value in d.values()) and return the key with the greatest valueHere's where it gets interesting. You need the keys and values together for that. So let's start over. How do I get keys and values from a dictionaryd.items() run the values through an equation((equation(v), k) for k, v in d.items()) and return the key with the greatest valuemax((equation(v), k) for k, v in d.items()) no, the key, not the value and keymax((equation(v), k) for k, v in d.items())[1]
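Putting the pieces together for this assignment, a minimal sketch. It assumes Prices[symbol] stores (buy_price, current_price) and Exposure[symbol] stores (risk, shares) as plain tuples, which is simpler than the mixed tuples built by the Add functions above:

def GetSale():
    def expected_sale(symbol):
        buy_price, current_price = Prices[symbol]
        risk, shares = Exposure[symbol]
        return ((current_price - buy_price) - risk * current_price) * shares
    # pair each value with its symbol, take the max, return just the symbol
    return max((expected_sale(sym), sym) for sym in Prices)[1]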
Can't load audio on pygame - Pygame error when loading audio: Failed loading libvorbisfile-3.dll: The specified module could not be found I've been using pygame 2.0.1 consistently for months. Today, after I upgraded to the latest version (2.1.2), I started getting this error when trying to load an audio file:'pygame.error: Failed loading libvorbisfile-3.dll: The specified module could not be found'.Things I have tried so far:Downloading the dll and copying it to /site-packages/pygame (it was already there).Downloading the dll and copying to the folder of the script being runRestarting the IDERestarting WindowsReinstalling pygameDowngrading to pygame 2.0.1I'm using Windows 10, Python 3.9.10 and running a virtualenv through PyCharm.
I solved the issue by uninstalling Python, installing the latest version (3.10.2), creating a new virtual environment, upgrading pip to the latest version (21.2.4) and then installing pygame via pip.
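Roughly, the same steps as shell commands on Windows (paths and versions illustrative):

py -3.10 -m venv venv
venv\Scripts\activate
python -m pip install --upgrade pip
pip install pygame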
PyQt5.uic.exceptions.NoSuchWidgetError: Unknown Qt widget: KPIM.AddresseeLineEdit

Problem
I am loading a .ui file with PyQt5 inside a Python 3 file. In other projects my code worked fine, but now I am receiving PyQt5.uic.exceptions.NoSuchWidgetError: Unknown Qt widget: KPIM.AddresseeLineEdit

My code:

import sqlite3
from PyQt5.uic import *
from PyQt5.QtWidgets import *
from os import path
import sys

conn = sqlite3.connect('database.sqlite')
cur = conn.cursor()

FORM_CLASS, _ = loadUiType(path.join(path.dirname(__file__), "login.ui"))

class Main(QMainWindow, FORM_CLASS):
    def __init__(self, parent=None):
        super(Main, self).__init__(parent)
        self.setupUi(self)

def main():
    app = QApplication(sys.argv)
    window = Main()
    window.show()
    app.exec_()

if __name__ == '__main__':
    main()

Output:

Traceback (most recent call last):
  File "/usr/lib/python3.9/idlelib/run.py", line 559, in runcode
    exec(code, self.locals)
  File "/home/lixt/Desktop/Zamaio/ui/remade/register/zamaio.py", line 8, in <module>
    FORM_CLASS,_ = loadUiType(path.join(path.dirname(__file__), "login.ui"))
  File "/usr/local/lib/python3.9/dist-packages/PyQt5/uic/__init__.py", line 200, in loadUiType
    winfo = compiler.UICompiler().compileUi(uifile, code_string, from_imports,
  File "/usr/local/lib/python3.9/dist-packages/PyQt5/uic/Compiler/compiler.py", line 111, in compileUi
    w = self.parse(input_stream, resource_suffix)
  File "/usr/local/lib/python3.9/dist-packages/PyQt5/uic/uiparser.py", line 1037, in parse
    actor(elem)
  File "/usr/local/lib/python3.9/dist-packages/PyQt5/uic/uiparser.py", line 828, in createUserInterface
    self.traverseWidgetTree(elem)
  File "/usr/local/lib/python3.9/dist-packages/PyQt5/uic/uiparser.py", line 806, in traverseWidgetTree
    handler(self, child)
  File "/usr/local/lib/python3.9/dist-packages/PyQt5/uic/uiparser.py", line 264, in createWidget
    self.stack.push(self.setupObject(widget_class, parent, elem))
  File "/usr/local/lib/python3.9/dist-packages/PyQt5/uic/uiparser.py", line 228, in setupObject
    obj = self.factory.createQObject(clsname, name, args, is_attribute)
  File "/usr/local/lib/python3.9/dist-packages/PyQt5/uic/objcreator.py", line 116, in createQObject
    raise NoSuchWidgetError(classname)
PyQt5.uic.exceptions.NoSuchWidgetError: Unknown Qt widget: KPIM.AddresseeLineEdit

My ui file:

<?xml version="1.0" encoding="UTF-8"?>
<ui version="4.0">
 <class>Form</class>
 <widget class="QWidget" name="Form">
  <property name="geometry">
   <rect>
    <x>0</x>
    <y>0</y>
    <width>524</width>
    <height>555</height>
   </rect>
  </property>
  <property name="windowTitle">
   <string>Form</string>
  </property>
  <property name="styleSheet">
   <string notr="true">background-color: rgb(29, 68, 9);</string>
  </property>
  <widget class="QLabel" name="title">
   <property name="geometry">
    <rect>
     <x>0</x>
     <y>10</y>
     <width>521</width>
     <height>41</height>
    </rect>
   </property>
   <property name="statusTip">
    <string>Zamaio is a new social platform!</string>
   </property>
   <property name="whatsThis">
    <string>Zamaio is a new social platform!</string>
   </property>
   <property name="styleSheet">
    <string notr="true">color: rgb(25, 255, 0);</string>
   </property>
   <property name="text">
    <string>&lt;html&gt;&lt;head/&gt;&lt;body&gt;&lt;p align="center"&gt;&lt;span style=" font-size:26pt; font-weight:600;"&gt;Zamaio&lt;/span&gt;&lt;/p&gt;&lt;/body&gt;&lt;/html&gt;</string>
   </property>
  </widget>
  <widget class="QLabel" name="username_text">
   <property name="geometry">
    <rect>
     <x>20</x>
     <y>120</y>
     <width>121</width>
     <height>31</height>
    </rect>
   </property>
   <property name="font">
    <font>
     <family>Sans Serif</family>
     <weight>50</weight>
     <italic>false</italic>
     <bold>false</bold>
    </font>
   </property>
   <property name="statusTip">
    <string>Zamaio is a new social platform!</string>
   </property>
   <property name="whatsThis">
    <string>Zamaio is a new social platform!</string>
   </property>
   <property name="styleSheet">
    <string notr="true">color: rgb(181, 195, 130);</string>
   </property>
   <property name="text">
    <string>&lt;html&gt;&lt;head/&gt;&lt;body&gt;&lt;p align="center"&gt;&lt;span style=" font-size:11pt; font-weight:600;"&gt;Username:&lt;/span&gt;&lt;/p&gt;&lt;/body&gt;&lt;/html&gt;</string>
   </property>
  </widget>
  <widget class="KPIM::AddresseeLineEdit" name="username">
   <property name="geometry">
    <rect>
     <x>130</x>
     <y>120</y>
     <width>221</width>
     <height>28</height>
    </rect>
   </property>
   <property name="styleSheet">
    <string notr="true">background-color: rgb(44, 97, 4);
color: rgb(240, 255, 233);
border-width: 1px;
border-style: solid;
border-color: black black black black;</string>
   </property>
   <property name="maxLength">
    <number>29</number>
   </property>
  </widget>
  <widget class="QLabel" name="password_text">
   <property name="geometry">
    <rect>
     <x>20</x>
     <y>170</y>
     <width>121</width>
     <height>31</height>
    </rect>
   </property>
   <property name="font">
    <font>
     <family>Sans Serif</family>
     <weight>50</weight>
     <italic>false</italic>
     <bold>false</bold>
    </font>
   </property>
   <property name="statusTip">
    <string>Zamaio is a new social platform!</string>
   </property>
   <property name="whatsThis">
    <string>Zamaio is a new social platform!</string>
   </property>
   <property name="styleSheet">
    <string notr="true">color: rgb(181, 195, 130);</string>
   </property>
   <property name="text">
    <string>&lt;html&gt;&lt;head/&gt;&lt;body&gt;&lt;p align="center"&gt;&lt;span style=" font-size:11pt; font-weight:600;"&gt;Password:&lt;/span&gt;&lt;/p&gt;&lt;/body&gt;&lt;/html&gt;</string>
   </property>
  </widget>
  <widget class="KPIM::AddresseeLineEdit" name="password">
   <property name="geometry">
    <rect>
     <x>130</x>
     <y>170</y>
     <width>221</width>
     <height>28</height>
    </rect>
   </property>
   <property name="styleSheet">
    <string notr="true">background-color: rgb(44, 97, 4);
color: rgb(240, 255, 233);
border-width: 1px;
border-style: solid;
border-color: black black black black;
password::hover {
    border-color: black black white white;
}</string>
   </property>
   <property name="maxLength">
    <number>32</number>
   </property>
   <property name="readOnly">
    <bool>false</bool>
   </property>
   <property name="urlDropsEnabled">
    <bool>false</bool>
   </property>
   <property name="trapEnterKeyEvent" stdset="0">
    <bool>false</bool>
   </property>
   <property name="squeezedTextEnabled">
    <bool>false</bool>
   </property>
   <property name="passwordMode">
    <bool>true</bool>
   </property>
  </widget>
  <widget class="QLabel" name="Age_text">
   <property name="geometry">
    <rect>
     <x>20</x>
     <y>370</y>
     <width>121</width>
     <height>41</height>
    </rect>
   </property>
   <property name="font">
    <font>
     <family>Sans Serif</family>
     <weight>50</weight>
     <italic>false</italic>
     <bold>false</bold>
    </font>
   </property>
   <property name="statusTip">
    <string>Zamaio is a new social platform!</string>
   </property>
   <property name="whatsThis">
    <string>Zamaio is a new social platform!</string>
   </property>
   <property name="styleSheet">
    <string notr="true">color: rgb(181, 195, 130);</string>
   </property>
   <property name="text">
    <string>&lt;html&gt;&lt;head/&gt;&lt;body&gt;&lt;p align="center"&gt;&lt;span style=" font-size:11pt; font-weight:600;"&gt;Age:&lt;/span&gt;&lt;/p&gt;&lt;/body&gt;&lt;/html&gt;</string>
   </property>
  </widget>
  <widget class="QLabel" name="confirm_password_texr">
   <property name="geometry">
    <rect>
     <x>10</x>
     <y>220</y>
     <width>161</width>
     <height>31</height>
    </rect>
   </property>
   <property name="font">
    <font>
     <family>Sans Serif</family>
     <weight>50</weight>
     <italic>false</italic>
     <bold>false</bold>
    </font>
   </property>
   <property name="statusTip">
    <string>Zamaio is a new social platform!</string>
   </property>
   <property name="whatsThis">
    <string>Zamaio is a new social platform!</string>
   </property>
   <property name="styleSheet">
    <string notr="true">color: rgb(181, 195, 130);</string>
   </property>
   <property name="text">
    <string>&lt;html&gt;&lt;head/&gt;&lt;body&gt;&lt;p align="center"&gt;&lt;span style=" font-size:11pt; font-weight:600;"&gt;Confirm Password:&lt;/span&gt;&lt;/p&gt;&lt;/body&gt;&lt;/html&gt;</string>
   </property>
  </widget>
  <widget class="KPIM::AddresseeLineEdit" name="confirm_password">
   <property name="geometry">
    <rect>
     <x>180</x>
     <y>220</y>
     <width>171</width>
     <height>28</height>
    </rect>
   </property>
   <property name="styleSheet">
    <string notr="true">background-color: rgb(44, 97, 4);
color: rgb(240, 255, 233);
border-width: 1px;
border-style: solid;
border-color: black black black black;
password::hover {
    border-color: black black white white;
}</string>
   </property>
   <property name="maxLength">
    <number>32</number>
   </property>
   <property name="readOnly">
    <bool>false</bool>
   </property>
   <property name="urlDropsEnabled">
    <bool>false</bool>
   </property>
   <property name="trapEnterKeyEvent" stdset="0">
    <bool>false</bool>
   </property>
   <property name="squeezedTextEnabled">
    <bool>false</bool>
   </property>
   <property name="passwordMode">
    <bool>true</bool>
   </property>
  </widget>
  <widget class="QSlider" name="age_slider">
   <property name="geometry">
    <rect>
     <x>110</x>
     <y>380</y>
     <width>261</width>
     <height>17</height>
    </rect>
   </property>
   <property name="orientation">
    <enum>Qt::Horizontal</enum>
   </property>
  </widget>
  <widget class="QLabel" name="Age_counter">
   <property name="geometry">
    <rect>
     <x>370</x>
     <y>370</y>
     <width>151</width>
     <height>41</height>
    </rect>
   </property>
   <property name="font">
    <font>
     <family>Sans Serif</family>
     <weight>50</weight>
     <italic>false</italic>
     <bold>false</bold>
    </font>
   </property>
   <property name="statusTip">
    <string>Zamaio is a new social platform!</string>
   </property>
   <property name="whatsThis">
    <string>Zamaio is a new social platform!</string>
   </property>
   <property name="styleSheet">
    <string notr="true">color: rgb(181, 195, 130);</string>
   </property>
   <property name="text">
    <string>&lt;html&gt;&lt;head/&gt;&lt;body&gt;&lt;p align="center"/&gt;&lt;/body&gt;&lt;/html&gt;</string>
   </property>
  </widget>
  <widget class="QPushButton" name="pushButton">
   <property name="geometry">
    <rect>
     <x>380</x>
     <y>470</y>
     <width>90</width>
     <height>28</height>
    </rect>
   </property>
   <property name="styleSheet">
    <string notr="true">QPushButton {
    color: #FFFFFF;
    background-color: rgb(17, 176, 27);
    border-style: outset;
    padding: 2px;
    font: bold 20px;
    border-width: 2px;
    border-radius: 4px;
    border-color: rgb(19, 197, 0);
}
QPushButton:hover {
    background-color: rgb(47, 105, 37);
    border-color: rgb(33, 163, 30);
}
QPushButton::clicked {
    background-color: red;
}</string>
   </property>
   <property name="text">
    <string>Create</string>
   </property>
  </widget>
  <widget class="QPushButton" name="ViewPass1">
   <property name="geometry">
    <rect>
     <x>360</x>
     <y>170</y>
     <width>31</width>
     <height>28</height>
    </rect>
   </property>
   <property name="text">
    <string/>
   </property>
  </widget>
  <widget class="QPushButton" name="ViewPass1_2">
   <property name="geometry">
    <rect>
     <x>360</x>
     <y>220</y>
     <width>31</width>
     <height>28</height>
    </rect>
   </property>
   <property name="text">
    <string/>
   </property>
  </widget>
 </widget>
 <customwidgets>
  <customwidget>
   <class>KPIM::AddresseeLineEdit</class>
   <extends>QLineEdit</extends>
   <header>LibkdepimAkonadi/AddresseeLineEdit</header>
  </customwidget>
 </customwidgets>
 <resources/>
 <connections/>
</ui>

Is there any kind of bug?

Solution
Replacing KPIM::AddresseeLineEdit with QTextEdit in the .ui file solves the problem.
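Concretely, the fix is an edit to the .ui file itself: change the class attribute of each affected widget element (three of them here) and drop the now-unused customwidget declaration. For example, for the username field:

<!-- before -->
<widget class="KPIM::AddresseeLineEdit" name="username">

<!-- after -->
<widget class="QTextEdit" name="username">

Note that properties only the KDE widget understands (for example passwordMode) are not standard QTextEdit properties and may need to be removed as well.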
It seems I was using widgets that PyQt5 does not support: KPIM::AddresseeLineEdit is a KDE PIM widget, not one of Qt's own classes, so uic cannot create it. Replacing KPIM::AddresseeLineEdit with QTextEdit in the .ui file solves the problem.
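If you are unsure which non-Qt widget classes a .ui file pulls in, the declarations are plain XML, so you can list them with the standard library before trying to load the file. A minimal sketch, assuming the file is named login.ui as in the question:

import xml.etree.ElementTree as ET

# Every non-Qt widget a Designer file uses must appear in <customwidgets>
tree = ET.parse("login.ui")
for cw in tree.iter("customwidget"):
    print(cw.findtext("class"), "extends", cw.findtext("extends"))
# -> KPIM::AddresseeLineEdit extends QLineEdit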
add count column in time series plot

I want to plot the mean by month and year. My data has two columns (count, mean) and the date as index. Shown here is a plot similar to mine, where x is years and y is mean. Here is my code:

import matplotlib.pyplot as plt

diet = df[['mean']]
diet.plot(figsize=(20, 10), linewidth=5, fontsize=20, marker='<')
plt.xlabel('Month/Year', fontsize=20)
plt.ylabel('mean')

Is there any way to add the count values at each point of the line, like this, so the count number for each month is visible?
You can iterate over the rows of the DataFrame and use ax.annotate to write each month's count next to its point on the line:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Toy data: a monthly DatetimeIndex with 'mean' and 'count' columns
idx = pd.date_range(start='1901-01-01', end='1903-12-31', freq='1M')
df = pd.DataFrame({"mean": np.random.random(size=(idx.size,)),
                   "count": np.random.randint(0, 10, size=(idx.size,))},
                  index=idx)

plt.figure()
ax = df['mean'].plot(figsize=(8, 4))

# Label every point of the line with the corresponding 'count' value
for d, row in df.iterrows():
    ax.annotate('{:.0f}'.format(row['count']), xy=(d, row['mean']), ha='center')
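If the numbers sit directly on top of the line, one optional refinement (not part of the original answer) is to nudge each label a few points upward using annotate's offset support:

# shift the label 5 points above the data point
ax.annotate('{:.0f}'.format(row['count']), xy=(d, row['mean']),
            xytext=(0, 5), textcoords='offset points', ha='center')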
Change|Assign parent for the Model instance on Google App Engine Datastore

Is it possible to change or assign a new parent to a Model instance that is already in the datastore? For example, I need something like this:

task = db.get(db.Key(task_key))
project = db.get(db.Key(project_key))
task.parent = project
task.put()

but it doesn't work this way, because task.parent is a built-in method. I was thinking about creating a new Key instance for the task, but there is no way to change the key either. Any thoughts?
According to the docs, no:

    The parent of an entity is defined when the entity is created, and cannot be changed later. ... The complete key of an entity, including the path, the kind and the name or numeric ID, is unique and specific to that entity. The complete key is assigned when the entity is created in the datastore, and none of its parts can change.

Setting a parent entity is useful when you need to manipulate the parent and child in the same transaction. Otherwise, you're just limiting performance by forcing them both to be in the same entity group, and restricting your ability to update the relationship after the entity has been created. Use a ReferenceProperty instead.
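A minimal sketch of the ReferenceProperty approach (the Project/Task models here are hypothetical, invented for illustration):

from google.appengine.ext import db

class Project(db.Model):
    name = db.StringProperty()

class Task(db.Model):
    title = db.StringProperty()
    # A plain reference instead of a parent: it does not force Task and
    # Project into the same entity group, and it can be reassigned later.
    project = db.ReferenceProperty(Project, collection_name='tasks')

# Re-pointing a task at a different project is just a property update:
task = db.get(db.Key(task_key))
task.project = db.get(db.Key(project_key))
task.put()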