questions | answers
---|---
Numpy minimum in (row, column) format How can I know the (row, column) index of the minimum of a numpy array/matrix?For example, if A = array([[1, 2], [3, 0]]), I want to get (1, 1)Thanks! | Use unravel_index:numpy.unravel_index(A.argmin(), A.shape) |
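For reference, a minimal runnable version of that answer:

```python
import numpy as np

A = np.array([[1, 2], [3, 0]])
# argmin gives the index into the flattened array; unravel_index maps it back
# to (row, column) coordinates for the original shape.
print(np.unravel_index(A.argmin(), A.shape))  # -> (1, 1)
```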
How to send multiple recipient sendgrid V3 api Python Anyone please help, I am using sendgrid v3 api. But I cannot find any way to send an email to multiple recipients. Thank in advance. import sendgrid from sendgrid.helpers.mail import * sg = sendgrid.SendGridAPIClient(apikey="SG.xxxxxxxx") from_email = Email("FROM EMAIL ADDRESS") to_email = Email("TO EMAIL ADDRESS") subject = "Sending with SendGrid is Fun" content = Content("text/plain", "and easy to do anywhere, even with Python") mail = Mail(from_email, subject, to_email, content) response = sg.client.mail.send.post(request_body=mail.get()) print(response.status_code) print(response.body) print(response.headers)I want to send email to multiple recipient. Like to_mail = " [email protected], [email protected]". | Note that with the code of the other answers here, the recipients of the email will see each others emails address in the TO field. To avoid this one has to use a separate Personalization object for every email address:def SendEmail(): sg = sendgrid.SendGridAPIClient(api_key="YOUR KEY") from_email = Email ("FROM EMAIL ADDRESS") person1 = Personalization() person1.add_to(Email ("EMAIL ADDRESS 1")) person2 = Personalization() person2.add_to(Email ("EMAIL ADDRESS 2")) subject = "EMAIL SUBJECT" content = Content ("text/plain", "EMAIL BODY") mail = Mail (from_email, subject, None, content) mail.add_personalization(person1) mail.add_personalization(person2) response = sg.client.mail.send.post (request_body=mail.get()) return response.status_code == 202 |
Django Celery periodic task example I need a minimum example to do periodic task (run some function after every 5 minutes, or run something at 12:00:00 etc.).In my myapp/tasks.py, I have,from celery.task.schedules import crontabfrom celery.decorators import periodic_taskfrom celery import task@periodic_task(run_every=(crontab(hour="*", minute=1)), name="run_every_1_minutes", ignore_result=True)def return_5(): return 5@taskdef test(): return "test"When I run celery workers it does show the tasks (given below) but does not return any values (in either terminal or flower).[tasks] . mathematica.core.tasks.test . run_every_1_minutesPlease provide a minimum example or hints to achieve the desired results.Background:I have a config/celery.py which contains the following:import osfrom celery import Celeryos.environ.setdefault("DJANGO_SETTINGS_MODULE", "config.settings.local")app = Celery('config')app.config_from_object('django.conf:settings', namespace='CELERY')app.autodiscover_tasks()And in my config/__init__.py, I havefrom .celery import app as celery_app__all__ = ['celery_app']I added a function something like below in myapp/tasks.pyfrom celery import task@taskdef test(): return "test"When I run test.delay() from shell, it runs successfully and also shows the task information in flower | To run periodic task you should run celery beat also. You can run it with this command:celery -A proj beatOr if you are using one worker:celery -A proj worker -B |
ScrollBar with Text widget not showing in grid I am creating a grid view with different rows and columns, the main problem is that my scrollbar with text widget does not show up the entire GUI. My code is as: # -----Zero Row----lbl = Label(window, font=('Calibri',32), text='Title',bg = '#f0f0f0',bd =10,anchor='w').grid(row=0,columnspan=4)#---- First Row ---Label(window, text='Account Number').grid(row =1 , column = 0, sticky='nsew' )Label(window, text='Balance').grid(row =1 , column = 1, sticky='nsew' )btnLogOut = Button(window, text='Log Out', command = save_and_logout).grid(row= 1, column= 2, columnspan = 2, sticky='nsew')#----Second Row----Label(window, text='ON').grid(row =2 , column = 0, sticky='nsew')Entry(window, textvariable = account_number_input).grid(row =2 , column = 1, sticky='nsew')Button(window, text='OF', command = a).grid(row= 2, column= 2, sticky='nsew')Button(window, text='SWITCH', command = a).grid(row= 2, column= 3, sticky='nsew')#---Third Row----text_scrollbar = Scrollbar ( window )text_scrollbar.pack( side = RIGHT, fill = Y )transaction_text_widget = Text(window, wrap = NONE, yscrollcommand = text_scrollbar.set)# state = NORMAL for editing transaction_text_widget.config(state=NORMAL)transaction_text_widget.insert("1.0", "text")transaction_text_widget.insert("1232.30", "text")transaction_text_widget.insert("132223.0", "text")# state = DISABLED so that it cannot be edited once written transaction_text_widget.config(state=DISABLED)transaction_text_widget.pack(side="left")#Configure the scrollbarstext_scrollbar.config(command=transaction_text_widget.yview)transaction_text_widget.grid(row = 3 , column = 1,sticky='nsw')#-----------setting the column and row weights-----------window.columnconfigure(0, weight=1)window.columnconfigure(1, weight=1)window.columnconfigure(2, weight=1)window.columnconfigure(3, weight=1)window.rowconfigure(0, weight =4)window.rowconfigure(1, weight =2)window.rowconfigure(2, weight =3)window.rowconfigure(3, weight =3)What might be the reason for this? If I remove the Third Row section the GUI is seen for other rows. | I fixed it by just adding text_scrollbar.grid(row = 3 , columnspan = 4,sticky='nsew')Thank you for the answer though. |
why do I get ZeroDivisionError? I am trying to write python program that reads a .csv file. I have in the input file 4 columns/fields and I want to only have 2 in the output with one being a new one.This is the input I am using:MID,REP,NAME,NEPTUN0,,"people's front of judea",GM6MRT17,,Steve Jobs,NC3J0K,0,Brian,RQQCFE19,9,Pontius Pilate,BQ6IAJ1,,N. Jesus,QDMXVF18,,Bill Gates,D1CXLO0,,"knights who say NI",CZN5JA ,1,"Robin, the brave",BWQ5AU17,19,"Gelehed, the pure",BY9B8Gthen the output should be something like this(not full output):NEPTUN,GROWTHBQ6IAJ,-0.5263157894736842BWQ5AU,infBY9B8G,0.11764705882352941RQQCFE,0The new field called GROWTH is calculated by (REP-MID)/MID.So, I am using two lists to do that:import csvL = []s =[]with open('input.csv', 'rb') as R: reader = csv.DictReader(R) for x in reader: if x['MID'] != '' or '0' and x['REP'] == '': Growth = -float(x['MID'])/float(x['MID']) L.append(x) s.append(Growth) elif x['MID'] != '' or '0': Growth = (float(x['REP']))-float(x['MID'])/float(x['MID']) L.append(x) s.append(Growth) elif x['MID'] and x['REP'] == '' or '0' : Growth = 0 L.append(x) s.append(Growth) else: Growth = float("inf") L.append(x) s.append(Growth) for i in range(len(s)): L[i]['GROWTH'] = iR.close()with open('output.csv', 'wb') as output: fields = ['NEPTUN', 'GROWTH'] writer = csv.DictWriter(output, fieldnames=fields, extrasaction='ignore') writer.writeheader() writer.writerows(L)output.close() Now, I am not even sure if the code is correct or does what I aim it for, because I am stuck at a ZeroDivisionError: float division by zero at the first ifcondition and I tried many ways to avoid it but I get the same error.I thought the problem is that when there are no values for MID field, I think the dictionary gives it `` value and that can't be transformed to 0 by float(). But it seems that is not the problem, but honestly I have no idea now, so that is why I am asking here.the full error:Growth = -float(x['MID'])/float(x['MID'])ZeroDivisionError: float division by zeroAny hints about this are greatly valued. | if x['MID'] != '' or '0' and x['REP'] == ''This does not mean what you think it means. It is interpreted asif ((x['Mid'] != '') or ('0')) and (x['REP'] == '')which shortens toif True and x['REP'] == ''which in turn becomesif x['REP'] == ''What you mean isif x['Mid'] not in ('', '0') and (x['REP'] == ''):You need to do the same for your other if statements |
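A small illustration of the precedence issue, using a hypothetical row:

```python
row = {'MID': '0', 'REP': ''}

# `!=` binds tighter than `or`, so the test is (row['MID'] != '') or '0';
# '0' is a truthy string, so this condition never rejects a zero MID.
print(row['MID'] != '' or '0')        # True
print('' != '' or '0')                # '0' (still truthy)

# What was intended:
print(row['MID'] not in ('', '0'))    # False
```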
Single inheritance causes TypeError: metaclass conflict: the metaclass of a derived class must be I'm trying to make subclasses of a Command class but I keep getting the error: metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its basesclass Command(object): def __init__(self, sock, str): self.sock = sock self.str = str def execute(self): passfrom src import Commandclass BroadcastCommand(object, Command): def __init__(self, sock, str): super(Command, self).__init__() def execute(self): self.broadcast() def broadcast(self): print(str)My Command.py file and BroadcastCommand.py file are currently in the same package directory. | If Command inherits from object then there's no use making BroadcastCommand inheriting from both object and Command - it's enough that it inherits from Command -, and it indeed raises the TypeError you get. Solution: make BroadcastCommand inherit from Command only.As a side note, your super call should besuper(BroadcastCommand, self).__init__(sock, str)and naming your param str is possibly not a good idea. |
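A minimal sketch of the corrected hierarchy, shown in one module for brevity and with the parameter renamed from str to msg as the answer suggests:

```python
class Command(object):
    def __init__(self, sock, msg):
        self.sock = sock
        self.msg = msg

    def execute(self):
        pass


class BroadcastCommand(Command):  # inherit from Command only, not (object, Command)
    def __init__(self, sock, msg):
        super(BroadcastCommand, self).__init__(sock, msg)

    def execute(self):
        self.broadcast()

    def broadcast(self):
        print(self.msg)
```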
Django periodic task celery I'm a newbie in Django and Celery.Help me please, I can't understand, how it work. I want to see in my console "Hello world" every 1 min.tasks.pyfrom celery import Celeryfrom celery.schedules import crontabfrom celery.task import periodic_taskapp = Celery('tasks', broker='pyamqp://guest@localhost//')@periodic_task(run_every=(crontab(hour="*", minute=1)), ignore_result=True)def hello_world(): return "Hello World"celery.pyfrom __future__ import absolute_importimport osfrom celery import Celeryos.environ.setdefault("DJANGO_SETTINGS_MODULE", "test.settings.local")app = Celery('test')app.config_from_object('celeryconfig')app.autodiscover_tasks()@app.task(bind=True)def debug_task(self): print('Request: {0!r}'.format(self.request))init.pyfrom __future__ import absolute_import# This will make sure the app is always imported when# Django starts so that shared_task will use this app.from .celery import app as celery_appceleryconfig.pybroker_url = 'redis://localhost:6379'result_backend = 'rpc://'task_serializer = 'json'result_serializer = 'json'accept_content = ['json']timezone = 'Europe/Oslo'enable_utc = TrueIt's a simple celery settings and code, but doesn't worked =\celery -A tasks worker -BAnd nothing happens. Tell me what am I doing wrong? Thank you! | You need to setup beat_schedule in your celeryconfig.pyfrom celery.schedules import crontabbeat_schedule = { 'send_each_minute': { 'task': 'your.module.path.function', 'schedule': crontab(), 'args': (), },} |
python blackjack aces break my code calculations I am coding a blackjack game. I append all cards including royals and aces and 2 through 10. What I'm having trouble with is the aces. My problem is being able to calculate all cards before calculating A, to clarify this, I mean I want to calculate ALL cards before ANY aces in a list so I can see if the total is greater then 11, if so then just add 1 for every ace if not, add 11. My code thus far:import randomdef dealhand(): #This function appends 2 cards to the deck and converts royals and aces to letters. player_hand = [] for card in range(0,2): card = random.randint(1,13) if card == 1: card = "A" if card == 11: card = "J" if card == 12: card = "Q" if card == 13: card = "K" player_hand.append(card) return player_hand#this function sums the given cards to the userdef sumHand(list): total = 0 for card in list: card = str(card) if card == "J" or card == "Q" or card== "K": total+=10 continue elif card == "2" or card == "3" or card == "4" or card == "5" or card == "6" or card == "7" or card == "8" or card == "9" or card == "10": total += int(card) elif card == "A": if total<11: total+=11 else: total+=1 return total | I would suggest total += 1, aces += 1 then at the end, add 10 if needed for each ace.A few pointers on asking a question: don't post the dealhand function, as that is completely irrelevant. Post the input, output and expected outputdef sumHand(hand): ...hand = ['A', 'K', 'Q']expected 21actual 31Here is my suggested fix (minimal change for this particular issue)def sumHand(hand): total = 0 aces = 0 for card in hand: card = str(card) if card == "J" or card == "Q" or card== "K": total+=10 continue elif card == "2" or card == "3" or card == "4" or card == "5" or card == "6" or card == "7" or card == "8" or card == "9" or card == "10": total += int(card) elif card == "A": total += 1 aces += 1 for _ in range(aces): if total <= 11: total += 10 return totalI changed "list" to "hand" because that's hiding a builtin class's name, but otherwise didn't mess with it. I would suggest adding a (unit tested) function to get a card's value. Maybe a dict which serves as a name-value map. You could simplify some of the conditions with the "in" operator. It's weird that handle ints by converting them to string and then back to int. But none of that directly relates to the issue of counting aces. |
subset df by masking between specific rows I'm trying to subset a pandas df by removing rows that fall between specific values. The problem is these values can be at different rows so I can't select fixed rows. Specifically, I want to remove rows that fall between ABC xxx and the integer 5. These values could fall anywhere in the df and be of unequal length. Note: The string ABC will be followed by different values.I thought about returning all the indexes that contain these two values. But mask could work better if I could return all rows between these two values?df = pd.DataFrame({ 'Val' : ['None','ABC','None',1,2,3,4,5,'X',1,2,'ABC',1,4,5,'Y',1,2], })mask = (df['Val'].str.contains(r'ABC(?!$)')) & (df['Val'] == 5) Intended Output: Val0 None8 X9 110 215 Y16 117 2 | a = df.index[df['Val'].str.contains('ABC')==True][0]b = df.index[df['Val']==5][0]+1c = np.array(range (a,b))bad_df = df.index.isin(c)df[~bad_df]Output Val0 None8 X9 110 2If there are more than one 'ABC' and 5, then you the below version.With this you get the df other than the first ABC & the last 5a = (df['Val'].str.contains('ABC')==True).idxmax()b = df['Val'].where(df['Val']==5).last_valid_index()+1c = np.array(range (a,b))bad_df = df.index.isin(c)df[~bad_df] |
unable to parse h1 element following execute_script() I'm trying to scrape the h1 element after the click of a JS link. As I'm new to python, selenium, and beautifulsoup, I'm not sure if what followed the JS execution changes the way parsing works, or if I'm just grabbing the new url improperly. Everything I've tried has returned something different, from Incompleteread, Nonetype object is not callable, [-1, None, -1, None], to a simple None. I'm just not sure where to go after the containers variable, which I left the way it is just to pull the html.All I'm wanting to pull from this is the name<div class="name"> <h1 itemprop="name"> Nicolette Shea </h1> star_button = driver.find_element_by_css_selector("a[href*='/pornstar/']")click = driver.execute_script('arguments[0].click();', star_button)wait = WebDriverWait(driver, 5)try: wait.until(EC.url_contains('-'))except TimeOutException: print("Unable to load")new_url = driver.current_urlpage = pUrl(new_url)p_read = page.read()page.close()p_parse = soup(p_read, 'html.parser')containers = p_parse.find('div', {'class' : 'name'})print(containers) | Why not after your wait simply load driver.page_source into BeautifulSoup?#try:#except: ....your code soup = BeautifulSoup(driver.page_source, 'lxml')names = [item.text for item in soup.select('div.name')] |
Iterate through the list The transaction csv looks like this and I add them to list as shown below.Bread MilkBread Diapers Beer Eggs Beer[{'Bread': 1, 'Milk': 1, '': 7}, {'Bread': 1, 'Diapers': 1, 'Beer': 6, 'Eggs': 1}, {'Milk': 1, 'Diapers': 1, 'Beer': 6, 'Cola': 1}, {'Bread': 1, 'Milk': 1, 'Diapers': 1, 'Beer': 6}, {'Bread': 1, 'Milk': 1, 'Diapers': 2, 'Cola': 1, 'Chips': 2, 'Beer': 1, '': 1}, {'Bread': 1, 'Milk': 1, '': 7}, {'Bread': 1, 'Cola': 1, 'Beer': 3, 'Milk': 1, 'Chips': 1, 'Diapers': 3, '': 1}, {'Milk': 1, 'Bread': 1, 'Beer': 4, 'Cola': 1, 'Diapers': 1, 'Chips': 1}, {'Bread': 1, 'Milk': 2, 'Diapers': 2, 'Beer': 2, 'Chips': 2}, {'Bread': 2, 'Beer': 3, 'Diapers': 3, 'Milk': 1}]I would like to consider only the list which contains the count 3 Diapers.I would expect the transactions to return only as shown below:{'Bread': 2, 'Beer': 3, 'Diapers': 3, 'Milk': 1}{'Bread': 1, 'Cola': 1, 'Beer': 3, 'Milk': 1, 'Chips': 1, 'Diapers': 3, '': 1}{'Bread', 'Beer', 'Diapers', 'Milk'}{'Bread', 'Cola', 'Beer', 'Milk', 'Chips', 'Diapers', ''}The code i have is:def M(): li = [] # Open the csv file with open('transaction.csv') as fp: DataCaptured = csv.reader(fp, delimiter=',') # Iterate through each word in csv and add it's counter to the row for row in DataCaptured: li.append(dict(Counter(row))) if li['Diaper']==3: ---> I am missing this logic not sure how to get it. # Return the list of counters return liprint(M()) | li=[{'Bread': 1, 'Milk': 1, '': 7}, {'Bread': 1, 'Diapers': 1, 'Beer': 6, 'Eggs': 1}, {'Milk': 1, 'Diapers': 1, 'Beer': 6, 'Cola': 1}, {'Bread': 1, 'Milk': 1, 'Diapers': 1, 'Beer': 6}, {'Bread': 1, 'Milk': 1, 'Diapers': 2, 'Cola': 1, 'Chips': 2, 'Beer': 1, '': 1}, {'Bread': 1, 'Milk': 1, '': 7}, {'Bread': 1, 'Cola': 1, 'Beer': 3, 'Milk': 1, 'Chips': 1, 'Diapers': 3, '': 1}, {'Milk': 1, 'Bread': 1, 'Beer': 4, 'Cola': 1, 'Diapers': 1, 'Chips': 1}, {'Bread': 1, 'Milk': 2, 'Diapers': 2, 'Beer': 2, 'Chips': 2}, {'Bread': 2, 'Beer': 3, 'Diapers': 3, 'Milk': 1}] for d in li: if 'Diapers' in d and d['Diapers']==3: print(d)OUTPUT:{'Bread': 1, 'Cola': 1, 'Beer': 3, 'Milk': 1, 'Chips': 1, 'Diapers': 3, '': 1}{'Bread': 2, 'Beer': 3, 'Diapers': 3, 'Milk': 1} |
how to add subject for sendmail I am using the following to send an email using smtp..however subject is missing in the email,how to add subject?from email.mime.text import MIMETextfrom smtplib import SMTPdef email (body,subject): msg = MIMEText("%s" % body) msg['Content-Type'] = "text/html; charset=UTF8" s = SMTP('localhost',25) s.sendmail('[email protected]', ['[email protected]'],msg=msg.as_string())def main (): # open gerrit.txt and read the content into body with open('gerrit.txt', 'r') as f: body = f.read() subject = "test email" email(body) print "Done"if __name__ == '__main__': main() | As always, the subject is just another header.msg['Subject'] = subject |
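Putting it together, a minimal sketch of the question's function with the header added; the addresses are placeholders, and the charset is passed to the MIMEText constructor instead of overwriting the Content-Type header afterwards:

```python
from email.mime.text import MIMEText
from smtplib import SMTP

def email(body, subject):
    msg = MIMEText(body, "html", "utf-8")
    msg['Subject'] = subject                 # the missing header
    msg['From'] = 'sender@example.com'       # placeholder address
    msg['To'] = 'recipient@example.com'      # placeholder address
    s = SMTP('localhost', 25)
    s.sendmail(msg['From'], [msg['To']], msg.as_string())
    s.quit()
```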
How to avoid multiple queries in one execute call I've just realized that psycopg2 allows multiple queries in one execute call.For instance, this code will actually insert two rows in my_table:>>> import psycopg2>>> connection = psycopg2.connection(database='testing')>>> cursor = connection.cursor()>>> sql = ('INSERT INTO my_table VALUES (1, 2);'... 'INSERT INTO my_table VALUES (3, 4)')>>> cursor.execute(sql)>>> connection.commit()Does psycopg2 have some way of disabling this functionality? Or is there some other way to prevent this from happening?What I've come so far is to search if the query has any semicolon (;) on it:if ';' in sql: # Multiple queries not allowed!But this solution is not perfect, because it wouldn't allow some valid queries like:SELECT * FROM my_table WHERE name LIKE '%;'EDIT: SQL injection attacks are not an issue here. I do want to give to the user full access of the database (he can even delete the whole database if he wants). | If you want a general solution to this kind of problem, the answer is always going to be "parse format X, or at least parse it well enough to handle your needs".In this case, it's probably pretty simple. PostgreSQL doesn't allow semicolons in the middle of column or table names, etc.; the only places they can appear are inside strings, or as statement terminators. So, you don't need a full parser, just one that can handle strings.Unfortunately, even that isn't completely trivial, because you have to know the rules for what counts as a string literal in PostgreSQL. For example, is "abc\"def" a string abc"def?But once you write or find a parser that can identify strings in PostgreSQL, it's easy: skip all the strings, then see if there are any semicolons left over.For example (this is probably not the correct logic,* and it's also written in a verbose and inefficient way, just to show you the idea):def skip_quotes(sql): in_1, in_2 = False, False for c in sql: if in_1: if c == "'": in_1 = False elif in_2: if c == '"': in_2 = False else: if c == "'": in_1 = True elif c == '"': in_2 = True else: yield cThen you can just write:if ';' in skip_quotes(sql): # Multiple queries not allowed!If you can't find a pre-made parser, the first things to consider are:If it's so trivial that simple string operations like find will work, do that.If it's a simple, regular language, use re.If the logic can be explained descriptively (e.g., via a BNF grammar), use a parsing library or parser-generator library like pyparsing or pybison.Otherwise, you will probably need to write a state machine, or even explicit iterative code (like my example above). But this is very rarely the best answer for anything but teaching purposes.* This is correct for a dialect that accepts either single- or double-quoted strings, does not escape one quote type within the other, and escapes quotes by doubling them (we will incorrectly treat 'abc''def' as two strings abc and def, rather than one string abc'def, but since all we're doing is skipping the strings anyway, we get the right result), but does not have C-style backslash escapes or anything else. I believe this matches sqlite3 as it actually works, although not sqlite3 as it's documented, and I have no idea whether it matches PostgreSQL. |
How to set my QTreeWidget expanded by default? I have this QTreeWidget that I would like to be expanded by default.I have read this same question many times but the solutions aren't working for me. I tried the commands for the root of my tree:.ExpandAll() and .itemsExpandable()and for the children .setExpanded(True) with no success.Here is the code of my test application:import sysfrom PyQt5.QtCore import Qtfrom PyQt5.QtWidgets import ( QApplication, QMainWindow, QWidget, QTreeWidget, QTreeWidgetItem, QVBoxLayout )# ----------------------------------------------------------------unsorted_data = [ ['att_0', 'a', 2020], ['att_0', 'a', 2015], ['att_2', 'b', 5300], ['att_0', 'a', 2100], ['att_1', 'b', 5013], ['att_1', 'c', 6500],]# Sort datalist_att = []for elem in range(len(unsorted_data)) : att_ = unsorted_data[elem][0] if att_ not in list_att: list_att.append(att_)list_att.sort()n_att = len(list_att)data = ['']*n_atttree = ['']*n_attlist_a_number = []list_b_number = []list_c_number = []class MainWindow(QMainWindow): def __init__(self): super().__init__() self.setWindowTitle("My App") widget = QWidget() layout = QVBoxLayout() widget.setLayout(layout) # QTreeWidget main_tree = QTreeWidget() main_tree.setHeaderLabel('Test') # main_tree.itemsExpandable() # NOT WORKING # main_tree.expandAll() # NOT WORKING sublevel_1 = [] for i, att in enumerate(list_att) : list_a_number.clear() list_b_number.clear() list_c_number.clear() # Create a dictionary for elem in range(len(unsorted_data)) : if unsorted_data[elem][0] == att : if unsorted_data[elem][1]== 'a' : list_a_number.append(str(unsorted_data[elem][2])) if unsorted_data[elem][1] == 'b' : list_b_number.append(str(unsorted_data[elem][2])) if unsorted_data[elem][1] == 'c' : list_c_number.append(str(unsorted_data[elem][2])) data[i] = {'a' : list_a_number, 'b' : list_b_number, 'c' : list_c_number} # Fill the Tree items = [] att_id = list_att[i].split('\\')[-1] tree[i] = QTreeWidgetItem([att_id]) tree[i].setExpanded(True) # NOT WORKING sublevel_1.append(tree[i]) for key, values in data[i].items(): item = QTreeWidgetItem([key]) item.setCheckState(0, Qt.Checked) tree[i].addChild(item) for value in values : child = QTreeWidgetItem([value]) child.setExpanded(True) # NOT WORKING child.setCheckState(0, Qt.Checked) item.addChild(child) items.append(item) main_tree.insertTopLevelItems(0, sublevel_1) layout.addWidget(main_tree) self.setCentralWidget(widget)# ------------------------------------------------------------------app = QApplication(sys.argv)window = MainWindow()window.show()sys.exit(app.exec()) | You have to use expandAll after placing all the items in the QTreeWidget:main_tree.insertTopLevelItems(0, sublevel_1)main_tree.expandAll()layout.addWidget(main_tree)Note: One of the errors in your case is that you invoke setExpanded before the item is added to the QTreeWidget. remove useless setExpanded |
beautifulsoup select method returns traceback well im still learning beautifulsoup module and im replcating this from the book automate the boring stuff with python i tried replcating the get amazon price script but i get a traceback on the .select() method the error 'TypeError: 'NoneType' object is not callable' its getiing devastated with this error as i couldnt find much about it import bs4import requestsheader = {'User-Agent': "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36"}def site(url): x = requests.get(url, headers=header) x.raise_for_status() soup = bs4.BeautifulSoup(x.text, "html.parser") p = soup.Select('#buyNewSection > a > h5 > div > div.a-column.a-span8.a-text-right.a-span-last > div > span.a-size-medium.a-color-price.offer-price.a-text-normal') abc = p[0].text.strip() return abcprice = site('https://www.amazon.com/Automate-Boring-Stuff-Python-Programming/dp/1593275994')print('price is' + str(price))it must return a list value containing the price but im stuck with this error | If you use soup.select as opposed to soup.Select, your code does work, it just returns an empty list.The reason can see if we inspect the function you are using:help(soup.Select)Out[1]:Help on NoneType object:class NoneType(object) | Methods defined here: | | __bool__(self, /) | self != 0 | | __repr__(self, /) | Return repr(self). | | ---------------------------------------------------------------------- | Static methods defined here: | | __new__(*args, **kwargs) from builtins.type | Create and return a new object. See help(type) for accurate signature.Compared to:help(soup.select)Out[2]:Help on method select in module bs4.element:select(selector, namespaces=None, limit=None, **kwargs) method of bs4.BeautifulSoup instance Perform a CSS selection operation on the current element. This uses the SoupSieve library. :param selector: A string containing a CSS selector. :param namespaces: A dictionary mapping namespace prefixes used in the CSS selector to namespace URIs. By default, Beautiful Soup will use the prefixes it encountered while parsing the document. :param limit: After finding this number of results, stop looking. :param kwargs: Any extra arguments you'd like to pass in to soupsieve.select().Having that said, it seems that the page structure is actually different than the one you are trying to get, missing the <a> tag.<div id="buyNewSection" class="rbbHeader dp-accordion-row"> <h5> <div class="a-row"> <div class="a-column a-span4 a-text-left a-nowrap"> <span class="a-text-bold">Buy New</span> </div> <div class="a-column a-span8 a-text-right a-span-last"> <div class="inlineBlock-display"> <span class="a-letter-space"></span> <span class="a-size-medium a-color-price offer-price a-text-normal">$16.83</span> </div> </div> </div> </h5></div>So this should work:p = soup.select('#buyNewSection > h5 > div > div.a-column.a-span8.a-text-right.a-span-last > div.inlineBlock-display > span.a-size-medium.a-color-price.offer-price.a-text-normal')abc = p[0].text.strip()abcOut[2]:'$16.83'Additionally, you could consider using a more granular approach that let's you debug your code better. For instance:buySection = soup.find('div', attrs={'id':'buyNewSection'})buySpan = buySection.find('span', attrs={'class': 'a-size-medium a-color-price offer-price a-text-normal'})print (buyScan)Out[1]:'$16.83' |
Creating a Search Bar with cascading functionality with Python I have been wondering if it possible to create a search bar with cascading functionality using an entry widget in tkinter or if there is another widgets that can be used to achieve this aim, through out my time in desktop application development i've only been able to create one where you will have to type in the full name of what you want to search, then you'd write a query that gets the entry and gets what ever information you want from the database, this is very important for me because it limits me, especially when i want to create an application for a store where there a a lot of items you could just type the first letter of an item and it automatically shows you the items with that first letter. please i'd really appreciate if there is an answer to this... | All you need to do is bind a function on <Any-KeyRelease> to filter the data as the user types. When the bound function is called, get the value of the entry widget then use that to get a filtered list of values.Here's an example that uses a fixed set of data and a listbox to show the data, but of course you can just as easily do a database query and display the values however you wish.import tkinter as tk# A list of all tkinter widget class namesVALUES = [cls.__name__ for cls in tk.Widget.__subclasses__()]class Example(): def __init__(self): self.root = tk.Tk() self.entry = tk.Entry(self.root) self.listbox = tk.Listbox(self.root) self.vsb = tk.Scrollbar(self.root, command=self.listbox.yview) self.listbox.configure(yscrollcommand=self.vsb.set) self.entry.pack(side="top", fill="x") self.vsb.pack(side="right", fill="y") self.listbox.pack(side="bottom", fill="both", expand=True) self.entry.bind("<Any-KeyRelease>", self.filter) self.filter() def filter(self, event=None): pattern = self.entry.get().lower() self.listbox.delete(0, "end") filtered = [value for value in VALUES if value.lower().startswith(pattern)] self.listbox.insert("end", *filtered)example = Example()tk.mainloop() |
Average values for same key rows excluding certain columns in pandas I have a table that has a certain subset of columns as a record-key.Record keys might have duplicates e.g several rows might have same key, but different values. I want to average values for such same-key row into one row. But some columns have numbers that represent categories and I want to exclude them from averaging and rather pick a random value. As an example consider this table with keys k1 and k2, numerical value v1 and categorical-int value idk1 | k2 | v1 | id1 | 2 | 4 | 1001 | 3 | 2 | 2001 | 2 | 8 | 3001 | 2 | 2 | 400I want the output to be k1 | k2 | v1 | id1 | 2 |14/3| 100 (or 300 or 400)1 | 3 | 2 | 200Currently I have a code to average values accross same-key columns:g = table.groupby(primary_keys)s = g.sum()table = s.div(g.count(), axis=0)but I do not know to extend it to exclude categorical columns (say I know what they are) and pick random value for categoricals | Here is one way df.groupby(['k1','k2']).agg({'v1':'mean','id':lambda x : x.sample(1)}) v1 idk1 k2 1 2 4.666667 100 3 2.000000 200 |
How To Make a Generator, That's Composed of Another Generator I want to make a generator. And that generator should take an iterable. This is basically so that I can plug the generator into an existing framework. This is the code I've got so far.class Iter1(object): def __init__(self, iterable=None): self.iterable = iterable def __iter__(self): if self.iterable is None: self.iterable = Iter2() return self.iterable def next(self): for thing in self.iterable: yield thingclass Iter2(object): DEFAULT_PATH = r"/Users/Documents/stuff.txt" def __init__(self, path=None): self.path = path or self.DEFAULT_PATH def __iter__(self): return self def next(self): with open(self.path, 'r') as f: for line in f: yield lineif __name__ == "__main__": iterable = Iter1() for thing in iterable: print(thing)There are two problems that I have with this code. The first is that what gets returned (yielded) isn't one of the lines from the file, it's another generator object. The second is that it doesn't return the number of lines that are in the file, it just returns an infinite number of lines. I get that that's because each time I call next in Iter2 I'm opening the file again, but then I don't understand how to yield each line without loading the whole file into memory. | PEP 234 -- Iterators: Iterator objects returned by either form of iter() have a next() method. This method either returns the next value in the iteration, or raises StopIteration (or a derived exception class) to signal the end of the iteration. Any other exception should be considered to signify an error and should be propagated normally, not taken to mean the end of the iteration.You are returning an iterator from next(), which is why it's not working as expected. Instead you should return a single value each time next() is invoked.Also, having __iter__() return self is a bit odd. It is generally assumed that invoking iter(sequence) multiple times will return multiple new iterators, each starting at the beginning of the sequence, but this isn't the case with your code. |
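A minimal sketch of an iterator that follows that protocol, reading a file lazily so the whole file never sits in memory (the class name is illustrative):

```python
class LineIter(object):
    """Returns one line per next() call and signals the end with StopIteration."""

    def __init__(self, path):
        self._f = open(path, 'r')

    def __iter__(self):
        return self

    def __next__(self):
        line = self._f.readline()
        if not line:             # readline() returns '' at end of file
            self._f.close()
            raise StopIteration
        return line

    next = __next__              # Python 2 compatibility
```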
Use contextmanager inside init In the code below I don't understand why the with super().__init__(*args, **kwargs): line in MyFileIO2 is throwing an error about missing __exit__ while everything works perfectly fine with the MyFileIO class. I don't really understand what exactly the difference between doing the with inside or outside of the init is. Can someone enlighten me what is going on here? import ioclass MyFileIO(io.FileIO): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) def __enter__(self, *args, **kwargs): f = super().__enter__(*args, **kwargs) print('first byte of file: ', f.read(1)) return fclass MyFileIO2(io.FileIO): def __enter__(self, *args, **kwargs): f = super().__enter__(*args, **kwargs) print('first byte of file: ', f.read(1)) return f def __init__(self, *args, **kwargs): with super().__init__(*args, **kwargs): # AttributeError: __exit__ passpath = 'some_file.bin'with MyFileIO(path, 'rb'): passMyFileIO2(path, 'rb') | You will need to call the context manager on self, because __init__ doesn't actually return anything.class MyFileIO2(io.FileIO): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) with self: pass def __enter__(self, *args, **kwargs): f = super().__enter__(*args, **kwargs) print('First byte of file: ', f.read(1)) return fFor testing, I created a binary file having the contents "hello world"._ = MyFileIO2(path, 'rb') # First byte of file: b'h'What happens is the return value of super().__init__ is being passed through the context manager, so you effectively have this:with None: passAttributeError: __enter__The context manager tries calling the __enter__ method on the NoneType object, but that is an invalid operation. |
Keras - Input arrays should have the same number of samples as target arrays I have the code below which run a Generative Adversarial Network (GAN) on 374 training images of size 32x32.Why am I having the following error?ValueError: Input arrays should have the same number of samples as target arrays. Found 7500 input samples and 40 target samples.which occurs at the following statement:discriminator_loss = discriminator.train_on_batch(combined_images,labels)import kerasfrom keras import layersimport numpy as npimport cv2import osfrom keras.preprocessing import imagelatent_dimension = 32height = 32width = 32channels = 3iterations = 100000batch_size = 20real_images = []# paths to the training and results directoriestrain_directory = '/training'results_directory = '/results'# GAN generatorgenerator_input = keras.Input(shape=(latent_dimension,))# transform the input into a 16x16 128-channel feature mapx = layers.Dense(128*16*16)(generator_input)x = layers.LeakyReLU()(x)x = layers.Reshape((16,16,128))(x)x = layers.Conv2D(256,5,padding='same')(x)x = layers.LeakyReLU()(x)# upsample to 32x32x = layers.Conv2DTranspose(256,4,strides=2,padding='same')(x)x = layers.LeakyReLU()(x)x = layers.Conv2D(256,5,padding='same')(x)x = layers.LeakyReLU()(x)x = layers.Conv2D(256,5,padding='same')(x)x = layers.LeakyReLU()(x)# a 32x32 1-channel feature map is generated (i.e. shape of image)x = layers.Conv2D(channels,7,activation='tanh',padding='same')(x)# instantiae the generator model, which maps the input of shape (latent dimension) into an image of shape (32,32,1)generator = keras.models.Model(generator_input,x)generator.summary()# GAN discriminatordiscriminator_input = layers.Input(shape=(height,width,channels))x = layers.Conv2D(128,3)(discriminator_input)x = layers.LeakyReLU()(x)x = layers.Conv2D(128,4,strides=2)(x)x = layers.LeakyReLU()(x)x = layers.Conv2D(128,4,strides=2)(x)x = layers.LeakyReLU()(x)x = layers.Conv2D(128,4,strides=2)(x)x = layers.LeakyReLU()(x)x = layers.Flatten()(x)# dropout layerx = layers.Dropout(0.4)(x)# classification layerx = layers.Dense(1,activation='sigmoid')(x)# instantiate the discriminator model, and turn a (32,32,1) input# into a binary classification decision (fake or real)discriminator = keras.models.Model(discriminator_input,x)discriminator.summary()discriminator_optimizer = keras.optimizers.RMSprop( lr=0.0008, clipvalue=1.0, decay=1e-8)discriminator.compile(optimizer=discriminator_optimizer, loss='binary_crossentropy')# adversarial networkdiscriminator.trainable = Falsegan_input = keras.Input(shape=(latent_dimension,))gan_output = discriminator(generator(gan_input))gan = keras.models.Model(gan_input,gan_output)gan_optimizer = keras.optimizers.RMSprop( lr=0.0004, clipvalue=1.0, decay=1e-8)gan.compile(optimizer=gan_optimizer,loss='binary_crossentropy')start = 0for step in range(iterations): # sample random points in the latent space random_latent_vectors = np.random.normal(size=(batch_size,latent_dimension)) # decode the random latent vectors into fake images generated_images = generator.predict(random_latent_vectors) stop = start + batch_size i = start for root, dirs, files in os.walk(train_directory): for file in files: for i in range(stop-start): img = cv2.imread(root + '/' + file) real_images.append(img) i = i+1 combined_images = np.concatenate([generated_images,real_images]) # assemble labels and discrminate between real and fake images labels = np.concatenate([np.ones((batch_size,1)),np.zeros(batch_size,1)]) # add random noise to the labels labels = labels + 0.05 * np.random.random(labels.shape) # train the discriminator discriminator_loss = discriminator.train_on_batch(combined_images,labels) random_latent_vectors = np.random.normal(size=(batch_size,latent_dimension)) # assemble labels that classify the images as "real", which is not true misleading_targets = np.zeros((batch_size,1)) # train the generator via the GAN model, where the discriminator weights are frozen adversarial_loss = gan.train_on_batch(random_latent_vectors,misleading_targets) start = start + batch_size if start > len(train_directory)-batch_size: start = 0 # save the model weights if step % 100 == 0: gan.save_weights('gan.h5') print'discriminator loss: ' print discriminator_loss print 'adversarial loss: ' print adversarial_loss img = image.array_to_img(generated_images[0] * 255.) img.save(os.path.join(results_directory,'generated_melanoma_image' + str(step) + '.png')) img = image.array_to_img(real_images[0] * 255.) img.save(os.path.join(results_directory,'real_melanoma_image' + str(step) + '.png'))Thanks. | The following step is causing the problem:i = startfor root, dirs, files in os.walk(train_directory): for file in files: for i in range(stop-start): img = cv2.imread(root + '/' + file) real_images.append(img) i = i+1You are trying to collect 20 real_images samples, which is what the inner loop does. But the outer loops run once per file, so you collect 20 samples for each of the 374 files, 7480 in total, when you planned to collect only 20. |
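A hedged sketch (see the explanation above) of one way to build a real-image batch that matches the 20 generated images per step; the path-collection helper and slicing are illustrative, not taken from the original post, and the wrap-around check should then use len(all_paths) rather than len(train_directory):

```python
import os
import cv2
import numpy as np

# Collect the image paths once, outside the training loop.
all_paths = [os.path.join(root, name)
             for root, _, names in os.walk(train_directory)
             for name in names]

# Inside the training loop: read exactly batch_size real images per step.
stop = start + batch_size
real_images = np.array([cv2.imread(p) for p in all_paths[start:stop]])

combined_images = np.concatenate([generated_images, real_images])
labels = np.concatenate([np.ones((batch_size, 1)), np.zeros((batch_size, 1))])
```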
Pandas assign cumulative count for consecutive values in a column This is my data:print(n0data) FULL_MPID DateTime EquipID countIndex 1 5092761672035390000000000000 2018-11-28 00:36:00 1296 12 5092761672035390000000000000 2018-11-28 00:37:00 1634 23 5092761672035390000000000000 2018-11-28 13:36:00 1296 34 5092761672035390000000000000 2018-11-28 13:38:00 1634 45 5092761672035390000000000000 2018-11-29 17:37:00 1290 56 5092761672035390000000000000 2018-11-29 17:37:00 1634 67 5092761672035390000000000000 2018-11-30 21:23:00 1290 78 5092761672035390000000000000 2018-11-30 21:24:00 1634 89 5092761672035390000000000000 2018-12-02 09:37:00 1296 910 5092761672035390000000000000 2018-12-02 09:39:00 1634 1011 5092761672035390000000000000 2018-12-02 09:39:00 1634 1112 5092761672035390000000000000 2018-12-03 11:55:00 1290 1213 5092761672035390000000000000 2018-12-03 12:02:00 1634 1314 5092761672035390000000000000 2018-12-06 12:22:00 1290 1415 5092761672035390000000000000 2018-12-06 12:22:00 1634 1516 5092761672035390000000000000 2018-12-06 12:22:00 1634 1617 5092761672035390000000000000 2018-12-06 12:23:00 1634 1718 5092761672035390000000000000 2018-12-06 12:23:00 1634 1819 5092761672035390000000000000 2018-12-06 12:23:00 1634 1920 5092761672035390000000000000 2018-12-06 12:23:00 1634 2021 5092761672035390000000000000 2018-12-06 12:23:00 1634 2122 5092761672035390000000000000 2018-12-09 05:51:00 1290 22So I have a groupBy function that makes the following ecount column with the command: n0data['ecount'] = n0data.groupby(['EquipID','FULL_MPID']).cumcount() + 1The data is sorted by the time and looks to identify when the changeover of EquipID happens.Ecount is supposed to be:When the EquipID column values changes from one value to another, ecount should reset. However if EquipID does not change, like during index 15-21 rows, EquipID should continue counting. I thought this was what the groupBy delivered also... | You can use the shift and cumsum trick before groupby:v = df.EquipID.ne(df.EquipID.shift())v.groupby(v.cumsum()).cumcount() + 1Index1 12 13 14 15 16 17 18 19 110 111 212 113 114 115 116 217 318 419 520 621 722 1dtype: int64 |
Is it possible to port Python GAE db.GeoPt to a Go type? I'm working on a porting an existing GAE app that was originally written in Python to Go. So far it's been pretty great and reasonably easy (though it's not been without its quirks).Since this port will be deployed to the same GAE app on a different version, the two versions will share the same datastore. The problem is that the original Python app makes extensive use of the db.GeoPt type.I implemented my own custom PropertyLoadSaver on one of my types so I could look at how I might represent a db.GeoPt in Go, via reflection. But apparently the memory layout of db.GeoPt is not compatible with anything in Go at all. Does anybody know how I might go about this? Has anybody done this before?Here's some code to give you guys a better idea of what I'm doing:func (sS *SomeStruct) Load(c <-chan datastore.Property) error { for p := range c { if p.Name == "location" { // "location" is the name of the original db.GeoPt property v := reflect.ValueOf(p.Value) // If I call v.Kind(), it returns reflect.Invalid // And yes, I know v is declared and never used :P } } return nil}Thank you in advance! | appengine.GeoPoint support in appengine/datastore was added in the 1.9.3 App Engine release. |
Making parallel code work in python 2.7 and 3.6 I have some code in python 3.6 which is like this:from multiprocessing import Poolwith Pool(processes=4) as p: p.starmap(parallel_function, list(dict_variables.items()))Here dict_variables looks like this:[('aa', ['ab', 'ab', 'ad']), ('aa1', ['a1b', 'a1b', 'a2d'])]This code only works in python 3.6. How can I make it work in 2.7? | starmap was introduced in Python3.3. In Python2, use Pool.map and unpack the argument yourself:In Python3:import multiprocessing as mpdef starmap_func(x, y): return x**ywith mp.Pool(processes=4) as p: print(p.starmap(starmap_func, [(1,2), (3,4), (5,6)])) # [1, 81, 15625]In Python2 or Python3:import multiprocessing as mpdef map_func(arg): x, y = arg return x**yp = mp.Pool(processes=4)print(p.map(map_func, [(1,2), (3,4), (5,6)]))# [1, 81, 15625]p.close() |
Pandas add dataframes side to side with different indexes I have dataframes like this: Sender USD_Equivalent725 ABC 5777527.31330 CFE 4717812.9012 CDE 3085838.19 Sender USD_Equivalent707 AAP 1962412.94149 EFF 1777705.37189 EFG 1744705.37And I want them like this :Sender USD_Equivalent Sender USD_Equivalent ABC 5777527.31 AAP 1962412.94 CFE 4717812.90 EFF 1777705.37 CDE 3085838.19 EFG 1744705.37Thanks | pd.concat([d.reset_index(drop=True) for d in [df1, df2]], axis=1) Sender USD_Equivalent Sender USD_Equivalent0 ABC 5777527.31 AAP 1962412.941 CFE 4717812.90 EFF 1777705.372 CDE 3085838.19 EFG 1744705.37 |
Save related Images Django REST Framework I have this basic model layout:class Listing(models.Model): name = models.TextField()class ListingImage(models.Model): listing = models.ForeignKey(Listing, related_name='images', on_delete=models.CASCADE) image = models.ImageField(upload_to=listing_image_path)Im trying to write a serializer which lets me add an rest api endpoint for creating Listings including images.My idea would be this:class ListingImageSerializer(serializers.ModelSerializer): class Meta: model = ListingImage fields = ('image',)class ListingSerializer(serializers.ModelSerializer): images = ListingImageSerializer(many=True)class Meta: model = Listing fields = ('name', 'images')def create(self, validated_data): images_data = validated_data.pop('images') listing = Listing.objects.create(**validated_data) for image_data in images_data: ListingImage.objects.create(listing=listing, **image_data) return listingMy Problems are:I'm not sure how and if I can send a list of images in a nested dictionary using a multipart POST request.If I just post an images list and try to convert it from a list to a list of dictionaries before calling the serializer, I get weird OS errors when parsing the actual image.for key, item in request.data.items(): if key.startswith('images'): # images.append({'image': item}) request.data[key] = {'image': item}My request code looks like this:import requestsfrom requests_toolbelt.multipart.encoder import MultipartEncoderapi_token = 'xxxx'images_data = MultipartEncoder( fields={ 'name': 'test', 'images[0]': (open('lilo.png', 'rb'), 'image/png'), 'images[1]': (open('panda.jpg', 'rb'), 'image/jpeg') })response = requests.post('http://127.0.0.1:8000/api/listings/', data=images_data, headers={ 'Content-Type': images_data.content_type, 'Authorization': 'Token' + ' ' + api_token })I did find a very hacky solution which I will post in the answers but its not really robust and there needs to be a better way to do this. | So my solution is based off of this post and works quite well but seems very unrobust and hacky.I change the images field from a relation serializer requiring a dictonary to a ListField. Doing this i need to override the list field method to actually create a List out of the RelatedModelManager when calling "to_repesentation".This baiscally behaves like a list on input, but like a modelfield on read.class ModelListField(serializers.ListField): def to_representation(self, data): """ List of object instances -> List of dicts of primitive datatypes. """ return [self.child.to_representation(item) if item is not None else None for item in data.all()]class ListingSerializer(serializers.ModelSerializer): images = ModelListField(child=serializers.FileField(max_length=100000, allow_empty_file=False, use_url=False)) class Meta: model = Listing fields = ('name', 'images') def create(self, validated_data): images_data = validated_data.pop('images') listing = Listing.objects.create(**validated_data) for image_data in images_data: ListingImage.objects.create(listing=listing, image=image_data) return listing |
Posting Data to Server with volley library from android App Guys i need help am trying to post this data to my server here but it is returning an error, it seems to be working just fine in postman the problem comes in while trying to implement in android app using google's volley library.Link to server script. This is the screenshot of a successful post working in postman rest client:2private void SaveDataToServer() { StringRequest serverPostRequest = new StringRequest(Request.Method.POST, Config.SAVE_INVENTORY_URL, new Response.Listener<String>() { @Override public void onResponse(String json) { try { Toast.makeText(SelectItemsActivity.this, json.toString(), Toast.LENGTH_SHORT).show(); Log.e("RESPONSE FROM SERVER",json); JSONObject dataJson=new JSONObject(json); JSONObject myJson=dataJson.getJSONObject("status"); String status=myJson.getString("status_text"); if (status.equalsIgnoreCase("Success.")){ Toast.makeText(SelectItemsActivity.this, "Data saved Successfully", Toast.LENGTH_SHORT).show(); proggressShow.dismiss(); }else { Toast.makeText(SelectItemsActivity.this, "An error occured while saving data", Toast.LENGTH_SHORT).show(); proggressShow.dismiss(); } } catch (JSONException e) { e.printStackTrace(); } } }, new Response.ErrorListener() { @Override public void onErrorResponse(VolleyError volleyError) { } }){ @Override protected Map<String, String> getPostParams() { HashMap<String, String> params = new HashMap<String, String>(); params.put("api_key",Config.API_KEY); params.put("move_id", "1"); params.put("room_name", "Attic room"); params.put("item_name", "Halloween Broom"); params.put("item_id", "6"); Log.e("datat to server",params.toString()); return params; } }; saveDataRequest.add(serverPostRequest);} | After adding this lines to my request headers i was able to successfully commit the database to my db.Also i made sure i am using a stringRequest for a jsonRequest it does not work for reasons i do not know. @Override public Map<String, String> getHeaders() throws AuthFailureError { HashMap<String, String> headers = new HashMap<>(); headers.put("Content-Type","application/x-www-form-urlencoded"); return headers; } |
Is it possible to return an HttpResponse object in render function? Reasons for why someone would want to do this aside, is it possible? Something along the lines offrom cms.plugin_base import CMSPluginBasefrom data_viewer.models.data_view import DataPluginfrom django.http import HttpResponse class CMSPlugin(CMSPluginBase): def render(self, context, instance) response = HttpResponse(content_type='text/csv') return responseUsually render functions require a context to be returned, thus this code doesn't work as is. Once again, I know this isn't typical. I just want to know if it's possibleThanks in advance, all help is appreciated! | In short: No.The render method is very unfortunately named and should really be called get_context. It must return a dictionary or Context instance, see the docsIf you want to extend django CMS with something that returns HttpResponse objects, have a look at apphooks. |
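A minimal sketch of the conventional plugin pattern the answer refers to; the plugin name and template path are hypothetical, and in newer django CMS versions render also receives a placeholder argument:

```python
from cms.plugin_base import CMSPluginBase
from cms.plugin_pool import plugin_pool
from cms.models import CMSPlugin

class DataViewPlugin(CMSPluginBase):
    model = CMSPlugin
    name = "Data view"                   # hypothetical
    render_template = "data_view.html"   # hypothetical template

    def render(self, context, instance, placeholder):
        context['instance'] = instance
        return context                   # a context dict, never an HttpResponse

plugin_pool.register_plugin(DataViewPlugin)
```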
How to restart an iterator? How to restart an iterator?I have a list of columns names like this:my_column_names = ["A", "B", "C", "D", "F", "G", "H"]And I take a csv file with rows like this:A,500B,3.0C,87A,200A,300B,3.5D,CALLE,CLEANF,MADRIDG,28000H,SPAINA,150B,1.75C,103D,PUTI want to make a csv file with this format:A,B,C,D,E,F,G,H500,3.0,87,,,,,200,,,,,,,300,3.5,,CALL,CLEAN,MADRID,28000,SPAIN150,1.75,103,PUT,,,,My code:iter_column_names = itertools.cycle(my_column_names)my_new_line = []for old_line in new_file: column_name = iter_column_names.__next__() if old_line[0] == column_name: my_new_line.append(old_line[1]) else: my_new_line.append('') if column_name == "H": print(my_new_line) # to change by writeline() when it works fine my_new_line = []But it doesn't work like I need. I suppose that the problem is that it needs to restart de iter_column_names every time that it reaches "H" element. Or not? | I'd use a csv.DictWriter() and use a dictionary to handle the rows. That way you can detect if a column has been seen already, and start a new row:import csvfields = ('A', 'B', 'C', 'D', 'E', 'F', 'G', 'H')with open('inputfile.csv', newline='') as infh, open('output.csv', 'w', newline='') as outfh: reader = csv.reader(infh) writer = csv.DictWriter(outfh, fields) writer.writeheader() row = {} for key, value in reader: if key in row: # new row found, write old writer.writerow(row) row = {} row[key] = value # write last row if row: writer.writerow(row)Demo:>>> import csv>>> import sys>>> infh = '''\... A,500... B,3.0... C,87... A,200... A,300... B,3.5... D,CALL... E,CLEAN... F,MADRID... G,28000... H,SPAIN... A,150... B,1.75... C,103... D,PUT... '''.splitlines()>>> outfh = sys.stdout>>> fields = ('A', 'B', 'C', 'D', 'E', 'F', 'G', 'H')>>> if True:... reader = csv.reader(infh)... writer = csv.DictWriter(outfh, fields)... writer.writeheader()... row = {}... for key, value in reader:... if key in row:... # new row found, write old... writer.writerow(row)... row = {}... row[key] = value... # write last row... if row:... writer.writerow(row)... A,B,C,D,E,F,G,H500,3.0,87,,,,,17200,,,,,,,12300,3.5,,CALL,CLEAN,MADRID,28000,SPAIN40150,1.75,103,PUT,,,,22The numbers in between (17, 12, 40, 22) are the writer.writerow() return values (bytes written). |
RNN model running out of memory in TensorFlow I implemented a Sequence to Sequence model using the rnn.rnn helper in TensorFlow.with tf.variable_scope("rnn") as scope, tf.device("/gpu:0"): cell = tf.nn.rnn_cell.BasicLSTMCell(4096) lstm = tf.nn.rnn_cell.MultiRNNCell([cell] * 2) _, cell = rnn.rnn(lstm, input_vectors, dtype=tf.float32) tf.get_variable_scope().reuse_variables() lstm_outputs, _ = rnn.rnn(lstm, output_vectors, initial_state=cell)The model is running out of memory on a Titan X with 16 GB of memory while allocating gradients for the LSTM cells:W tensorflow/core/kernels/matmul_op.cc:158] Resource exhausted: OOM when allocating tensor with shape[8192,16384]W tensorflow/core/common_runtime/executor.cc:1102] 0x2b42f00 Compute status: Resource exhausted: OOM when allocating tensor with shape[8192,16384] [[Node: gradients/rnn/RNN/MultiRNNCell_1/Cell0/BasicLSTMCell/Linear/MatMul_grad/MatMul_1 = MatMul[T=DT_FLOAT, transpose_a=true, transpose_b=false, _device="/job:localhost/replica:0/task:0/gpu:0"](rnn/RNN/MultiRNNCell_1/Cell0/BasicLSTMCell/Linear/concat, gradients/rnn/RNN/MultiRNNCell_1/Cell0/BasicLSTMCell/add_grad/tuple/control_dependency)]]If I reduce the length of the input and output sequences to 4 or less the model runs without a problem.This indicates to me that TF is trying to allocate the gradients for all time steps at the same time. Is there a way of avoiding this? | The function tf.gradients as well as the minimize method of the optimizers allow you to set parameter called aggregation_method. The default value is ADD_N. This method constructs the graph in such a way that all gradients need to be computed at the same time. There are two other undocumented methods called tf.AggregationMethod.EXPERIMENTAL_TREE and tf.AggregationMethod.EXPERIMENTAL_ACCUMULATE_N, which do not have this requirement. |
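For reference, a hedged sketch of how that parameter is passed in the TF 1.x API; it assumes an existing scalar loss tensor and is not taken from the question's code:

```python
import tensorflow as tf

# assumes `loss` is an existing scalar loss tensor in the graph
optimizer = tf.train.RMSPropOptimizer(0.001)
train_op = optimizer.minimize(
    loss,
    aggregation_method=tf.AggregationMethod.EXPERIMENTAL_ACCUMULATE_N)

# or, when calling tf.gradients directly:
grads = tf.gradients(
    loss, tf.trainable_variables(),
    aggregation_method=tf.AggregationMethod.EXPERIMENTAL_ACCUMULATE_N)
```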
Python - how to print entry text tkinter? I'm writing a basic Q&A program as I learn Python, and I'm messing around with tkinter's functions. I'm trying to print user input, but it just prints a blank line. Here is my code:from tkinter import *from tkinter import ttkdef response(): value = str(var.get()) print(value)root = Tk()root.title("Bot")mainframe = ttk.Frame(root, padding = "5 5 15 15")mainframe.grid(column=0, row=0), sticky=(N, W, E, S))mainframe.columnconfigure(0, weight=1)mainframe.rowconfigure(0, weight=1)var = StringVar()input_entry = ttk.Entry(mainframe, width=20, textvariable=var)input_entry.grid(column=5, row=5, sticky = (W, E))input_entry.pack()ttk.Label(mainframe, textvariable=response).grid(column=2, row=2, sticky=(W, E)) ttk.Button(mainframe, text="Ask away!", command=response).grid(column=3,row=3, sticky=W)root.mainloop() | To get an entry widgets text you can use input_entry.get()You can see the documentation for the ttk entry widget here |
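A minimal self-contained sketch of the suggestion, with widget names shortened for illustration:

```python
import tkinter as tk
from tkinter import ttk

root = tk.Tk()
var = tk.StringVar()
entry = ttk.Entry(root, width=20, textvariable=var)
entry.pack()

def response():
    print(entry.get())   # or var.get(); both return the Entry's current text

ttk.Button(root, text="Ask away!", command=response).pack()
root.mainloop()
```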
remove the item in string How do I remove the other stuff in the string and return a list that is made of other strings ? This is what I have written. Thanks in advance!!! def get_poem_lines(poem):r""" (str) -> list of strReturn the non-blank, non-empty lines of poem, with whitespace removed from the beginning and end of each line.>>> get_poem_lines('The first line leads off,\n\n\n'... + 'With a gap before the next.\nThen the poem ends.\n')['The first line leads off,', 'With a gap before the next.', 'Then the poem ends.']"""list=[]for line in poem: if line == '\n' and line == '+': poem.remove(line)s = poem.remove(line)for a in s: list.append(a)return list | split and strip might be what you need:s = 'The first line leads off,\n\n\n With a gap before the next.\nThen the poem ends.\n'print([line.strip() for line in s.split("\n") if line])['The first line leads off,', 'With a gap before the next.', 'Then the poem ends.']Not sure where the + fits in as it is, if it is involved somehow either strip or str.replace it, also avoid using list as a variable name, it shadows the python list. lastly strings have no remove method, you can .replace but since strings are immutable you will need to reassign the poem to the the return value of replace i.e poem = poem.replace("+","") |
python random choice choosing nothing I have a method to generate random string that always starts with a character and has length of at minimum 1.class Util: @staticmethod def get_random_name(): N = r.randint(0, 5) return "".join( r.choice( string.ascii_lowercase + string.ascii_uppercase ) ).join( r.choice( string.ascii_lowercase + string.ascii_uppercase + string.digits ) for _ in range(N) )Now when I do this:for i in range(0,50): logging.debug(str(i)+" -- "+Util().get_random_name())Some of them gives me empty string or sometimes it starts with number.What am I missing ?Check the log: | I think you're looking for something likeimport randomimport stringdef get_random_name(min_n=0, max_n=5): initial = random.choice( string.ascii_lowercase + string.ascii_uppercase ) return initial + "".join( random.choice( string.ascii_lowercase + string.ascii_uppercase + string.digits ) for _ in range( random.randint(min_n, max_n) ) )for x in range(10): print(x, get_random_name(max_n=x))Output (e.g.):0 z1 cm2 W3 oku94 nh5 Ul36 yNPH7 Rw7hW0eW8 qR9 BYKaGyv |
windows suddenly can't run python "this app can't run on your PC anymore" I was coding on VSCODE and when I wanted to run the script I suddenly got thrown this error by windows, so now if I type python on cmd I get this error and python returns "Access is denied." This happened suddenly, so one minute I could run python and now I can't, so I don't think it's a windows update causing a problem. I looked around and it seems most people that get this error downloaded a 32bit version of an app on a 64bit machine. edit: it seems like python.exe is now 0kb, it's corrupted, so I'll probably have to reinstall python but I'd love to know what caused it. | Ran into the same issue. Try downloading the "Windows x86-64 MSI installer" from this link https://www.python.org/downloads/release/python-2718/ for Windows. This one worked for me. This is the latest 2.X version as of writing this answer. To check for a newer 2.X release in the future, if any, see the default download page of Python.
elasticsearch.exceptions.SSLError: ConnectionError hostname doesn't match I've been using the Elasticsearch Python API to do some basic operation on a cluster (like creating an index or listing them). Everything worked fine but I decided to activate SSL authentification on the cluster and my scripts aren't working anymore.I have the following errors :Certificate did not match expected hostname: X.X.X.X. Certificate: {'subject': ((('commonName', 'X.X.X.X'),),), 'subjectAltName': [('DNS', 'X.X.X.X')]} GET https://X.X.X.X:9201/ [status:N/A request:0.009s] Traceback (most recent call last): File "/home/esadm/env/lib/python3.7/site-packages/urllib3/connectionpool.py", line 672, in urlopen chunked=chunked, File "/home/esadm/env/lib/python3.7/site-packages/urllib3/connectionpool.py", line 376, in _make_request self._validate_conn(conn) File "/home/esadm/env/lib/python3.7/site-packages/urllib3/connectionpool.py", line 994, in _validate_conn conn.connect() File "/home/esadm/env/lib/python3.7/site-packages/urllib3/connection.py", line 386, in connect _match_hostname(cert, self.assert_hostname or server_hostname) File "/home/esadm/env/lib/python3.7/site-packages/urllib3/connection.py", line 396, in _match_hostname match_hostname(cert, asserted_hostname) File "/home/esadm/env/lib/python3.7/ssl.py", line 338, in match_hostname % (hostname, dnsnames[0])) ssl.SSLCertVerificationError: ("hostname 'X.X.X.X' doesn't match 'X.X.X.X'",)During handling of the above exception, another exception occurred:Traceback (most recent call last): File "/home/esadm/env/lib/python3.7/site-packages/elasticsearch/connection/http_urllib3.py", line 233, in perform_request method, url, body, retries=Retry(False), headers=request_headers, **kw File "/home/esadm/env/lib/python3.7/site-packages/urllib3/connectionpool.py", line 720, in urlopen method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2] File "/home/esadm/env/lib/python3.7/site-packages/urllib3/util/retry.py", line 376, in increment raise six.reraise(type(error), error, _stacktrace) File "/home/esadm/env/lib/python3.7/site-packages/urllib3/packages/six.py", line 734, in reraise raise value.with_traceback(tb) File "/home/esadm/env/lib/python3.7/site-packages/urllib3/connectionpool.py", line 672, in urlopen chunked=chunked, File "/home/esadm/env/lib/python3.7/site-packages/urllib3/connectionpool.py", line 376, in _make_request self._validate_conn(conn) File "/home/esadm/env/lib/python3.7/site-packages/urllib3/connectionpool.py", line 994, in _validate_conn conn.connect() File "/home/esadm/env/lib/python3.7/site-packages/urllib3/connection.py", line 386, in connect _match_hostname(cert, self.assert_hostname or server_hostname) File "/home/esadm/env/lib/python3.7/site-packages/urllib3/connection.py", line 396, in _match_hostname match_hostname(cert, asserted_hostname) File "/home/esadm/env/lib/python3.7/ssl.py", line 338, in match_hostname % (hostname, dnsnames[0])) urllib3.exceptions.SSLError: ("hostname 'X.X.X.X' doesn't match 'X.X.X.X'",)The thing I don't understand is that this message doesn't make any sense :"hostname 'X.X.X.X' doesn't match 'X.X.X.X'"Because the two adresses matches, they are exactly the same !I've followed the docs and my configuration of the instance Elasticsearch looks like this :Elasticsearch([get_ip_address()], http_auth=('elastic', 'pass'), use_ssl=True, verify_certs=True, port=get_instance_port(), ca_certs='ca.crt', client_cert='pvln0047.crt', client_key='pvln0047.key' )Thanks for your help | Problem solved, the issue was in the 
constructor :Elasticsearch([get_ip_address()], http_auth=('elastic', 'pass'), use_ssl=True, verify_certs=True, port=get_instance_port(), ca_certs='ca.crt', client_cert='pvln0047.crt', client_key='pvln0047.key' )Instead of mentioning the ip address I needed to mention the DNS name, I also changed the arguments by using context object just to follow the original docs.context = create_default_context(cafile="ca.crt")context.load_cert_chain(certfile="pvln0047.crt", keyfile="pvln0047.key")context.verify_mode = CERT_REQUIREDElasticsearch(['dns_name'], http_auth=('elastic', 'pass'), scheme="https", port=get_instance_port(), ssl_context=context )This is how I generated the certificates :bin/elasticsearch-certutil cert ca --pem --in /tmp/instance.yml --out /home/user/certs.zipAnd this is my instance.yml file :instances: - name: 'dns_name' dns: [ 'dns_name' ]Hope, it will help someone ! |
Error while importing tensorflow in jupyter environment Its repetitively showing this error while importing tensorflowI am using a separate environment in anaconda with jupyter installed.Can anyone help me solve this errorImportError Traceback (most recent calllast)E:\Anaconda2\Library\envs\tf-gpu\lib\site-packages\tensorflow\python\pywrap_tensorflow.pyin 63 try:---> 64 from tensorflow.python._pywrap_tensorflow_internal import *65 # This try catch logic is because there is no bazel equivalent for py_extension.ImportError: DLL load failed: The specified module could not be found.During handling of the above exception, another exception occurred:ImportError Traceback (most recent calllast) in 1 import os----> 2 import tensorflow as tf3 import matplotlib.pyplot as plt4 import numpy as np5 import pandas as pdE:\Anaconda2\Library\envs\tf-gpu\lib\site-packages\tensorflow_init_.pyin 39 import sys as _sys40---> 41 from tensorflow.python.tools import module_util as _module_util42 from tensorflow.python.util.lazy_loader import LazyLoader as _LazyLoader43E:\Anaconda2\Library\envs\tf-gpu\lib\site-packages\tensorflow\python_init_.pyin 37 # go/tf-wildcard-import38 # pylint: disable=wildcard-import,g-bad-import-order,g-import-not-at-top---> 39 from tensorflow.python import pywrap_tensorflow as _pywrap_tensorflow4041 from tensorflow.python.eager import contextE:\Anaconda2\Library\envs\tf-gpu\lib\site-packages\tensorflow\python\pywrap_tensorflow.pyin 81 for some common reasons and solutions. Include the entire stack trace82 above this error message when asking for help.""" % traceback.format_exc()---> 83 raise ImportError(msg)8485 # pylint: enable=wildcard-import,g-import-not-at-top,unused-import,line-too-longImportError: Traceback (most recent call last): File"E:\Anaconda2\Library\envs\tf-gpu\lib\site-packages\tensorflow\python\pywrap_tensorflow.py",line 64, in from tensorflow.python._pywrap_tensorflow_internal import * ImportError: DLL load failed: The specified module could not be found. | DLL load fail error is because either you have not installed Microsoft Visual C++ Redistributable for Visual Studio 2015, 2017 and 2019 or your CPU does not support AVX2 instructionsThere is a workaround either you have to compile Tensorflow from source or use google colaboratory to work. Follow the instructions mentioned here to build Tensorflow from source. |
Vectorise nested vmap Here's some data I have:import jax.numpy as jnpimport numpyro.distributions as distimport jaxxaxis = jnp.linspace(-3, 3, 5)yaxis = jnp.linspace(-3, 3, 5)I'd like to run the functiondef func(x, y): return dist.MultivariateNormal(jnp.zeros(2), jnp.array([[.5, .2], [.2, .1]])).log_prob(jnp.asarray([x, y]))over each pair of values from xaxis and yaxis.Here's a "slow" way to do:results = np.zeros((len(xaxis), len(yaxis)))for i in range(len(xaxis)): for j in range(len(yaxis)): results[i, j] = func(xaxis[i], yaxis[j])Works, but it's slow.So here's a vectorised way of doing it:jax.vmap(lambda axis: jax.vmap(func, (None, 0))(axis, yaxis))(xaxis)Much faster, but it's hard to read.Is there a clean way of writing the vectorised version? Can I do it with a single vmap, rather than having to nest one within another one?EDITAnother way would bejax.vmap(func)(xmesh.flatten(), ymesh.flatten()).reshape(len(xaxis), len(yaxis)).Tbut it's still messy. | I believe Vectorization guidelnes for jax is quite similar to your question; to replicate the logic of nested for-loops with vmap requires nested vmaps.The cleanest approach using jax.vmap is probably something like this:from functools import partial@partial(jax.vmap, in_axes=(0, None))@partial(jax.vmap, in_axes=(None, 0))def func(x, y): return dist.MultivariateNormal(jnp.zeros(2), jnp.array([[.5, .2], [.2, .1]])).log_prob(jnp.asarray([x, y]))func(xaxis, yaxis)Another option here is to use the jnp.vectorize API (which is implemented via multiple vmaps), in which case you can do something like this:print(jnp.vectorize(func)(xaxis[:, None], yaxis)) |
Split a column into multiple columns with condition I have a question about splitting columns into multiple rows at Pandas with conditions.For example, I tend to do something as follows but takes a very long time using for loop| Index | Value || ----- | ----- || 0 | 1 || 1 | 1,3 || 2 | 4,6,8 || 3 | 1,3 || 4 | 2,7,9 |into| Index | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 || ----- | - | - | - | - | - | - | - | - | - || 0 | 1 | | | | | | | | || 1 | 1 | | 3 | | | | | | || 2 | | | | 4 | | 6 | | 8 | || 3 | 1 | | 3 | | | | | | || 4 | | 2 | | | | | 7 | | 9 |I wonder if there are any packages that can help this out rather than to write a for loop to map all indexes. | Assuming the "Value" column contains strings, you can use str.split and pivot like so:value = df["Value"].str.split(",").explode().astype(int).reset_index()output = value.pivot(index="index", columns="Value", values="Value")output = output.reindex(range(value["Value"].min(), value["Value"].max()+1), axis=1)>>> outputValue 1 2 3 4 5 6 7 8 9index 0 1.0 NaN NaN NaN NaN NaN NaN NaN NaN1 1.0 NaN 3.0 NaN NaN NaN NaN NaN NaN2 NaN NaN NaN 4.0 NaN 6.0 NaN 8.0 NaN3 1.0 NaN 3.0 NaN NaN NaN NaN NaN NaN4 NaN 2.0 NaN NaN NaN NaN 7.0 NaN 9.0Input df:df = pd.DataFrame({"Value": ["1", "1,3", "4,6,8", "1,3", "2,7,9"]}) |
Given an array arr of size n and an integer X. Find if there's a triplet in the array which sums up to the given integer X Given an array arr of size n and an integer X. Find if there's a triplet in the array which sums up to the given integer X. Input: n = 5, X = 10 arr[] = [1 2 4 3 6] Output: Yes Explanation: The triplet {1, 3, 6} in the array sums up to 10. | the line of reasoning is:Get all the possible combinations of 3 numbers in the array arr. Find which has sum=X, print only these tripletsimport numpy as npimport itertoolsarr=np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])X=10combinations=np.array(list(itertools.combinations(arr, 3)))triplets=combinations[combinations.sum(axis=1)==X]print(f'Triplets with sum equal to {X} are:\n{triplets}')output:Triplets with sum equal to 10 are:[[0 1 9] [0 2 8] [0 3 7] [0 4 6] [1 2 7] [1 3 6] [1 4 5] [2 3 5]] |
How do I sum up the numbers in a tuple, thats in a list? I have the following list of tuples:[(), (1,), (2,), (3,), (1, 2), (1, 3), (2, 3), (1, 2, 3)]I would like to get the total sum (or other operation_ of all the numbers in each tuple, then get the sum of the entire list.Desired outcome:addition: 1+2+3+(1+2)+(1+3)+(2+3)+(1+2+3) = 24multiplication: 1+2+3+(1×2)+(1×3)+(2×3)+(1×2×3)=23bit operator: 1+2+3+(1⊕2)+(1⊕3)+(2⊕3)+(1⊕2⊕3)=1+2+3+3+2+1+0 = 12. | I would solve this by iterating through your list, applying your desired operation, then take the sum:>> import math>> from operator import xor>> from functools import reduce>> my_values = [(), (1,), (2,), (3,), (1, 2), (1, 3), (2, 3), (1, 2, 3)]# Addition>> addition_values = [sum(x) for x in my_values][0, 1, 2, 3, 3, 4, 5, 6]>> sum(addition_values)24# Multiplication>> multiplication_value = [math.prod(x) for x in my_values][1, 1, 2, 3, 2, 3, 6, 6]>> sum(multiplication_value)24# Bit operation>> xor_value = [reduce(xor, x) for x in my_values if x][1, 2, 3, 3, 2, 1, 0]>> sum(xor_value)12You could put this together as a single function:Especially helpful if you want to extend functionality to additional operators...import mathfrom operator import xor, mul, addfrom functools import reducefrom typing import List, Tuple, Literaldef operator_then_sum(my_list: List[Tuple], op: Literal[add, mul, xor]) -> int: """ Performs operation on tuples within list then returns the total sum of the list Args: my_list: list of tuples to perform operator on op: an operator """ operated_values = [reduce(op, x) for x in my_values if x] return sum(operated_values)# Test it out on your valuesmy_values = [(), (1,), (2,), (3,), (1, 2), (1, 3), (2, 3), (1, 2, 3)]>> operator_then_sum(my_values, add)24>> operator_then_sum(my_values, mul)23>> operator_then_sum(my_values, xor)12 |
Python: Convert list of dictionaries to list of lists I want to convert a list of dictionaries to list of lists.From this.d = [{'B': 0.65, 'E': 0.55, 'C': 0.31}, {'A': 0.87, 'D': 0.67, 'E': 0.41}, {'B': 0.88, 'D': 0.72, 'E': 0.69}, {'B': 0.84, 'E': 0.78, 'A': 0.64}, {'A': 0.71, 'B': 0.62, 'D': 0.32}]To[['B', 0.65, 'E', 0.55, 'C', 0.31], ['A', 0.87, 'D', 0.67, 'E', 0.41], ['B', 0.88, 'D', 0.72, 'E', 0.69], ['B', 0.84, 'E', 0.78, 'A', 0.64], ['A', 0.71, 'B', 0.62, 'D', 0.32]]I can acheive this output froml=[]for i in range(len(d)): temp=[] [temp.extend([k,v]) for k,v in d[i].items()] l.append(temp)My question is: Is there any better way to do this?Can I do this with list comprehension? | Since you are using python 3.6.7 and python dictionaries are insertion ordered in python 3.6+, you can achieve the desired result using itertools.chain:from itertools import chainprint([list(chain.from_iterable(x.items())) for x in d])#[['B', 0.65, 'E', 0.55, 'C', 0.31],# ['A', 0.87, 'D', 0.67, 'E', 0.41],# ['B', 0.88, 'D', 0.72, 'E', 0.69],# ['B', 0.84, 'E', 0.78, 'A', 0.64],# ['A', 0.71, 'B', 0.62, 'D', 0.32]] |
How to perform two aggregate operations in one column of same pandas dataframe? I have a column in pandas data frame where I want to find out the min and max of a column in the same result. But the problem is I am getting only one aggregated value in return.import pandas as pdprint(df)col1 col25 96 63 44 3df.agg({'col1':'sum','col1':'mean'})The output of this aggregation is giving only mean :col1 4.5dtype: float64However, the output which I need should have both sums and mean for col1 and I am only getting mean. | Try the code below:import pandas as pdfrom io import StringIOcontent= """col1 col2 5 9 6 6 3 4 4 3 """df=pd.read_csv(StringIO(content),sep='\s+')df.agg({"col1":["sum","mean"],"col2":"std"})If you want to apply multiple functions to one column, you have to use a list; otherwise the later function for col1 will replace the former. If you want to apply multiple functions to different columns, just use a dict inside the agg function.
Im not getting desired accuracy in logistic regression on MNIST I'm not getting output on logistic regression problem. I have used MNIST dataset to predict the number digit,I have used Adam optimizer, It is not giving the desired accuracy.Model Reduces cost but does not give accuracy well.My code looks like this.# In[1]:import tensorflow as tfimport matplotlib.pyplot as pltimport numpy as npimport pandas as pdfrom tensorflow.examples.tutorials.mnist import input_datamnist = input_data.read_data_sets("MNIST_data/", one_hot=True)get_ipython().run_line_magic('matplotlib', 'inline')# In[2]:train_x = mnist.train.imagestrain_y = mnist.train.labelsX = tf.placeholder(shape=[None,784],dtype=tf.float32,name="X")Y = tf.placeholder(shape=[None,10],dtype=tf.float32,name="Y")# In[3]:#hyperparameterstraining_epoches = 25batch_size = 1000total_batches = int(mnist.train.num_examples/batch_size)W = tf.Variable(tf.random_normal([784,10]))b = tf.Variable(tf.random_normal([10]))# In[6]:y_ = tf.nn.sigmoid(tf.matmul(X,W)+b)cost = tf.reduce_mean(-tf.reduce_sum(Y*tf.log(y_), reduction_indices=1))optimizer = tf.train.AdamOptimizer(0.01).minimize(cost)init = tf.global_variables_initializer()# In[7]:with tf.Session() as sess: sess.run(init) for epoches in range(training_epoches): for i in range(total_batches): xs_batch,ys_batch = mnist.train.next_batch(batch_size) sess.run(optimizer,feed_dict={X:train_x,Y:train_y}) print("cost after epoch %i : %f"%(epoches+1,sess.run(cost,feed_dict={X:train_x,Y:train_y}))) correct_prediction = tf.equal(tf.argmax(y_, 1), tf.argmax(Y, 1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) print("Accuracy:", accuracy.eval({X: mnist.test.images, Y: mnist.test.labels}))The output of code is :cost after epoch 1 : 0.005403cost after epoch 2 : 0.002935cost after epoch 3 : 0.001866cost after epoch 4 : 0.001245cost after epoch 5 : 0.000877cost after epoch 6 : 0.000652cost after epoch 7 : 0.000507cost after epoch 8 : 0.000407cost after epoch 9 : 0.000334cost after epoch 10 : 0.000279cost after epoch 11 : 0.000237cost after epoch 12 : 0.000204cost after epoch 13 : 0.000178cost after epoch 14 : 0.000156cost after epoch 15 : 0.000138cost after epoch 16 : 0.000123cost after epoch 17 : 0.000111cost after epoch 18 : 0.000100cost after epoch 19 : 0.000091cost after epoch 20 : 0.000083cost after epoch 21 : 0.000076cost after epoch 22 : 0.000070cost after epoch 23 : 0.000065cost after epoch 24 : 0.000060cost after epoch 25 : 0.000056Accuracy: 0.1859It is giving Accuracy of 0.1859.which is not expected | You need to use - y_ = tf.nn.softmax(tf.matmul(X,W)+b)instead of :y_ = tf.nn.sigmoid(tf.matmul(X,W)+b)as the MNIST data set has multi-class labels (sigmoid is used in case of 2 classes).You may also need to add a small number tocost = tf.reduce_mean(-tf.reduce_sum(Y*tf.log(y_), reduction_indices=1)) like -cost = tf.reduce_mean(-tf.reduce_sum(Y*tf.log(y_ + 1e-10), reduction_indices=1))in case the cost results in nan |
Installing rpy2 on python 2.7 on macOS I have python version 2.7.10 on macOS High Sierra and would like to install rpy2.When I do sudo pip install rpy2I get the error message:Command "python setup.py egg_info" failed with error code 1 in /private/tmp/pip-build-Nwbha3/rpy2/I have already upgraded setuptools (version 39.0.1). I also downloaded the older version rpy2-2.7.0.tar.gz and tried installing it with sudo pip install rpy2-2.7.0.tar.gz. I then get the following error message:clang: error: unsupported option '-fopenmp'clang: error: unsupported option '-fopenmp'error: command 'cc' failed with exit status 1----------------------------------------Command "/usr/bin/python -u -c "import setuptools, tokenize;__file__='/private/tmp/pip-O0cu4E-build/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-haDUA3-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /private/tmp/pip-O0cu4E-build/If somebody has the answer to my installation problem, it would be greatly appreciate it.Many thanks in advance! | The clang that ships with Mac does not support openmp which is what the -fopenmp flag is for. You'll likely need a version of clang that supports openmp.One possible solution would be to get the full llvm/clang build with openmp support. With homebrew you can do:brew install llvm # clang/llvmbrew install libomp # OpenMP supportAnd then try installing rpy2 again with newly installed version of clang.As an example, the current version is 6.0.0 so you would runCC=/usr/local/Cellar/llvm/6.0.0/bin/clang pip install rpy2 |
runtime error: dictionary changed size during iteration I have a social network graph 'G'.I'm trying to check that they keys of my graph are in the characteristics dataset as well.I ran this command:for node in G.nodes(): if node in caste1.keys(): pass else: G = G.remove_node(node)It shows an error RuntimeError: dictionary changed size during iteration | The RuntimeError is self-explanatory. You're changing the size of an object you're iterating within the iteration. It messes up the looping.What you can do is to first iterate across the nodes to get the ones you would like to remove, and store it in a separate variable. Then you can iterate with this variable to remove the nodes:# Identify nodes for removalnodes2remove = [node for node in G.nodes() if node not in caste1.keys()]# Remove target-nodesfor node in nodes2remove: G = G.remove_node(node) |
declaring a python variable in a list [data] = self.read()? while studying the open source repo of Odoo I found a line of code that I don't understand like the following[data] = self.read()found there https://github.com/odoo/odoo/blob/8f297c9d5f6d31370797d64fee5ca9d779f14b81/addons/hr_holidays/wizard/hr_holidays_summary_department.py#L25I really would like to know why would you put the variable in a list | It seems to ensure that [data] is an iterable of one item and therefore unpacks the first value from self.read()It cannot be assigned to a non-iterable>>> [data] = 1Traceback (most recent call last): File "<stdin>", line 1, in <module>TypeError: cannot unpack non-iterable int objectWorks for iterable types, though must have a length equal to one>>> [data] = {'some':2}>>> data'some'>>> [data] = {'foo':2, 'bar':3}Traceback (most recent call last): File "<stdin>", line 1, in <module>ValueError: too many values to unpack (expected 1)>>> [data] = [1]>>> data1>>> [data] = [[1]]>>> data[1]>>> [data] = [1, 2]Traceback (most recent call last): File "<stdin>", line 1, in <module>ValueError: too many values to unpack (expected 1)>>> [data] = []Traceback (most recent call last): File "<stdin>", line 1, in <module>ValueError: not enough values to unpack (expected 1, got 0) |
Is there a way to change the color name's color in mplstyle file? I would like to assign colors of a matplotlib plot e.g. by writingplt.plot(x,y, color="red"). But "red" should be a color I predefined in a mplstyle file.I know I can change the default colors in the mplstyle file that are used for drawing by # colors: blue, red, greenaxes.prop_cycle: cycler('color', [(0,0.3,0.5),(0.9, 0.3,0.3),(0.7, 0.8, 0.3)])But appart from that I sometimes need to draw something with specific colorsplt.bar([0,1],[1,1],color=["green","red"])Unfortunately, those colors always refer to the predefined "green" and "red" in matplotlib. I would know like to assign a my green for "green" and my red for "red".Of course, I could do a workaround and define a variable as, say a RGB-tuplered = (0.9, 0.3,0.3)green = (0.7, 0.8, 0.3)plt.bar([0,1],[10,8],color=[red,green])but I think more natural would be to redefine the colors referred to by "green" and "red".Does any of you know how I can change a color name's color in the mplstyle file? | The list of color names is stored in matplotlib.colors.get_named_colors_mapping() and you can edit it.import matplotlib.colorsimport matplotlib.pyplot as pltcolor_map = matplotlib.colors.get_named_colors_mapping()color_map["red"] = "#ffaaaa"color_map["green"] = "#aaffaa"plt.bar([0, 1], [1, 1], color=["green", "red"])plt.show() |
Returning function from another function I'm developing a package in which I want to modify incoming functions to add extra functionality. I'm not sure how I would go about doing this. A simple example of what I'm trying to do:def function(argument): print(argument,end=",")# this will not work but an attempt at illustrating what I wish to accomplishdef modify_function(original_function): return def new_function(arguments): for i in range(3): original_function(arguments) print("finished",end="")and running the original function:function("hello")gives the output: hello,and running the modified function:modified_function = modify_function(function)modified_function(i)gives the output: hello,hello,hello,finished | Credit to @Matiss for pointing out the way to do this. Adding this answer so it is documented.def original_function(argument): print(argument, end=",")def modify_function(input_func): # Using *args and **kwargs allows handling of multiple arguments # and keyword arguments, in case something more general is required. def new_func(*args, **kwargs): for i in range(3): input_func(*args, **kwargs) print("finished", end="") return new_funcmodified_function = modify_function(original_function)modified_function("hello")For other kinds of modification where you want to e.g. fix some of the arguments passed in, this way will still work, but you may also make use of https://docs.python.org/3/library/functools.html#functools.partial. |
How to make something like a log box in wxPython I'm assuming this is possible with a multiline text box, but not sure how to do it. What I'm looking to do is make a log box in my wxPython program, where I can write messages to it when certain actions happen. Also, i need to write the messages not only when an event happens, but certain times in the code. How would i get it to redraw the window so the messages appear at that instant? | I wrote an article on this sort of thing a couple years ago:http://www.blog.pythonlibrary.org/2009/01/01/wxpython-redirecting-stdout-stderr/ |
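The linked article redirects stdout/stderr into the window; a minimal sketch of the underlying idea is a read-only multiline wx.TextCtrl that you append messages to, calling wx.Yield() so the text appears immediately even in the middle of other work (names here are illustrative, not taken from the article):

import wx

class LogFrame(wx.Frame):
    def __init__(self):
        wx.Frame.__init__(self, None, title="Log demo")
        self.log = wx.TextCtrl(self, style=wx.TE_MULTILINE | wx.TE_READONLY)

    def log_message(self, msg):
        self.log.AppendText(msg + "\n")
        wx.Yield()   # let the GUI repaint so the message shows up right away

app = wx.App(False)
frame = LogFrame()
frame.Show()
frame.log_message("starting work...")
# ... do work here, calling frame.log_message(...) at interesting points ...
app.MainLoop()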
How to select class from lists of classes based on key – "like" in dicts? I have a list of classes in my code and I want to access them based on a 'key' of a dictionary, in a loop, so I am trying to understand which could be the most appropriate way to do it. I am trying to think, if maybe I associate an attribute to the class, likeClass.key, then maybe I can iterate among my elements, but maybe I am just approaching the problem the wrong way.To be more clear, this is just a rough example. If my dict is:dict = {'A': 'blue', 'B': 'apple', 'C': 'dog'}and my list of classes:listOfClasses = [Class_A, Class_B, Class_C]I would like to operate in such a way that:for key in dict: print(key, dict[key]) print(key, ---"something like: Class.value if Class.key == key" ---)Maybe the question is confusing, or stupid, or maybe I am just approaching the wrong way and should create a dict of classes, (if it is possible?). Just need a piece of advice I guess. | The classes themselves are python objects, and therefore valid dictionary values.class ClassA: passclass ClassB: passclass ClassC: passclass_registry = { 'A': ClassA, 'B': ClassB, 'C': ClassC,}for name, cls in class_registry.items(): print (name, cls.__name__) |
What is the most pythonic way to iterate through a long list of strings and structure new lists from that original list? I have a large list of strings of song lyrics. Each element in the list is a song, and each song has multiple lines and some of those lines are headers such as '[Intro]', '[Chorus]' etc. I'm trying to iterate through the list and create new lists where each new list is comprised of all the lines in a certain section like '[Intro]' or '[Chorus]'. Once I achieve this I want to create a Pandas data frame where each row are all the song lyrics and each column is that section(Intro, Chorus, Verse 1, etc.) of the song. Am I thinking about this the right way? Here's an example of 1 element in the list and my current partial attempt to iterate and store:song_index_number = 0line_index_in_song = 0intro = []bridge = []verse1 = []prechorus = []chorus = []verse2 = []verse3 = []verse4 = []verse5 = []outro = []lyrics_by_song[30]['[Intro]', '(Just the two of us, just the two of us)', 'Baby, your dada loves you', "And I'ma always be here for you", '(Just the two of us, just the two of us)', 'No matter what happens', "You're all I got in this world", '(Just the two of us, just the two of us)', "I would never give you up for nothin'", '(Just the two of us, just the two of us)', 'Nobody in this world is ever gonna keep you from me', 'I love you', '', '[Verse 1]', "C'mon Hai-Hai, we goin' to the beach", 'Grab a couple of toys and let Dada strap you in the car seat', "Oh, where's Mama? She's takin' a little nap in the trunk", "Oh, that smell? Dada must've runned over a skunk", "Now, I know what you're thinkin', it's kind of late to go swimmin'", "But you know your Mama, she's one of those type of women", "That do crazy things, and if she don't get her way, she'll throw a fit", "Don't play with Dada's toy knife, honey, let go of it (No)", "And don't look so upset, why you actin' bashful?", "Don't you wanna help Dada build a sandcastle? (Yeah)", 'And Mama said she wants to show you how far she can float', "And don't worry about that little boo-boo on her throat", "It's just a little scratch, it don't hurt", "Her was eatin' dinner while you were sweepin'", 'And spilled ketchup on her shirt', "Mama's messy, ain't she? We'll let her wash off in the water", "And me and you can play by ourselves, can't we?", '', '[Chorus]', 'Just the two of us, just the two of us',....for line in lyrics_by_song: if lyrics_by_song == '[Intro]': intro.append(line) | Refer to python's doc: https://docs.python.org/3/tutorial/datastructures.html#list-comprehensionsyou could also use thisIntro = lyrics_by_song[lyrics_by_song.index('[Intro]'):lyrics_by_song.index('something_else')]See top answer here: Understanding slice notation |
Create dataframe from existing one by counting the rows according to the values in a specific column I have this dataframe,|order_id|customername|product_count||1 |a |2 ||2 |b |-1 ||3 |Q |3 ||4 |a |-1 ||5 |c |-1 ||6 |Q |-1 ||7 |d |-1 |What I want is another dataframe with the count of the rows wherever it is 'Q' in customername and count of the rows with the rest of the items in customername. As given below where test2 represents Q and test1 represents rest of the items. Pecentage column is (total request/count of coustomername)*100, which in this case is (5/7)*100 and (2/7)*100|users|Total request|Percentage||test1 |5 | 71.4 ||test2 |2 | 28.5 | | Compare column for Q and count by Series.value_counts, last rename values of index and create DataFrame:df = pd.DataFrame({'order_id': [1, 2, 3, 4, 5, 6, 7], 'customername': ['a', 'b', 'Q', 'a', 'c', 'Q', 'd'], 'product_count': [2, -1, 3, -1, -1, -1, -1]})print (df) order_id customername product_count0 1 a 21 2 b -12 3 Q 33 4 a -14 5 c -15 6 Q -16 7 d -1s = df['customername'].eq('Q').value_counts().rename({True:'test2', False:'test1'})df1 = s.rename_axis('users').reset_index(name='Total request')df1['Percentage'] = df1['Total request'].div(df1['Total request'].sum()).mul(100).round(2)print (df1) users Total request Percentage0 test1 5 71.431 test2 2 28.57 |
How can I remove values from a ranking system if they are a positive value k apart? Suppose I have the following code:def compute_ranks(graph, k): d = .8 #dampening factor loops = 10 ranks = {} npages = len(graph) for page in graph: ranks[page] = 1.0 / npages for c in range(0, loops): newranks = {} for page in graph: newrank = (1-d) / npages for node in graph: if page in graph[node]: newrank = newrank + d * (ranks[node]/len(graph[node])) newranks[page] = newrank ranks = newranks return ranksAlright so now suppose I want to not allow any items that can collude with each other. If I have an item dictionary g = {'a': ['a', 'b', 'c'], 'b':['a'], 'c':['d'], 'd':['a']}For any path A==>B, I don't want to allow paths from B==>A that are at a distance at or below my number k.For example if k = 0, then the only path I would not allow is A==>A.However if k = 2, then I would not allow the links A==>A as before but also links such as D==>A, B==>A, or A==>C.I know this is very confusing and a majority of my problem comes from not understanding exactly what this means. Here's a transcript of the question:# Question 2: Combatting Link Spam# One of the problems with our page ranking system is pages can # collude with each other to improve their page ranks. We consider # A->B a reciprocal link if there is a link path from B to A of length # equal to or below the collusion level, k. The length of a link path # is the number of links which are taken to travel from one page to the # other.# If k = 0, then a link from A to A is a reciprocal link for node A, # since no links needs to be taken to get from A to A.# If k=1, B->A would count as a reciprocal link if there is a link # A->B, which includes one link and so is of length 1. (it requires # two parties, A and B, to collude to increase each others page rank).# If k=2, B->A would count as a reciprocal link for node A if there is# a path A->C->B, for some page C, (link path of length 2),# or a direct link A-> B (link path of length 1).# Modify the compute_ranks code to # - take an extra input k, which is a non-negative integer, and # - exclude reciprocal links of length up to and including k from # helping the page rank. | A possible solution could be to introduce a recursive method which detects a collusion. Something like:def Colluding(p1,p2,itemDict,k): if p1 == p2: return True elif k == 0: return False else p2 in itemDict[p1]: return True for p in itemDict[p1]: if Colluding(p1,p,itemDict,k-1): return True return FalseThen where it says if item in itemDict[node] you would have if item in itemDict[node] and not Colluding(item,node,itemDict,k) or something similar. That does a depth-first search which might not be the best choice if there are a lot of colluding links at a small depth (say A->B->A) since they may only be found after several full depth searches. You may be able to find another way which does a breadth-first search in that case. Also, if you have a lot of links, it might be worth trying to convert to a loop instead because Python might have a stack overflow problem if you use recursion. The recursive algorithm is what came to my mind first because it seems natural to traverse trees using recursion. Note: Since this is help with homework, I have not tested it and some comparisons might not be quite right and need to be adjusted in some way. |
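The else p2 in itemDict[p1]: branch in the answer's code is a syntax error (it should be elif). A small untested sketch of the check being described — is there a link path from p1 to p2 of length at most k? — where the graph is a dict mapping each page to the pages it links to:

def colluding(p1, p2, graph, k):
    if p1 == p2:
        return True            # a path of length 0
    if k <= 0:
        return False
    for p in graph.get(p1, []):
        if colluding(p, p2, graph, k - 1):
            return True
    return False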
Running cython extension directly using python interpreter? i.e. "python test.so" I want to compile main.py into main.so and run it using the python interpreter in linux, like this: "/usr/bin/python main.so" How can I do this? So far running extensions compiled the official way gives me this:root@server:~/TEMP# python test.so File "test.so", line 1SyntaxError: Non-ASCII character '\x8b' in file test.so on line 2,... | You can't execute a .so directly. Because it is a binary module, you have to import it with:python -m testIf you want to make an executable out of the module, you could use the "--embed" option of cython:cython --embed test.pyxgcc ...your flags... test.c -o test./test
Transfer dependency to --dev in poetry If you accidentally install a dependency in poetry as a main dependency (i.e. poetry add ...), is there a way to quickly transfer it to dev dependencies (i.e. poetry add --dev ...), or do you have to uninstall it and reinstall with poetry add --dev? | You can move the corresponding line in the pyproject.toml from the [tool.poetry.dependencies] section to [tool.poetry.dev-dependencies] by hand and run poetry lock --no-update afterwards. |
get distinct columns dataframe based on 2 ids Hello how can I do to only the lines where val is different in the 2 dataframes.The way I need to filter is the following:For each row of F1 (take each id1 if it is not null search for id1 F2 ) compare the VAL and if its different return it. else look at id2 and do the same thing. Notice that I can have id1 or id2 or both as shown below:d2 = {'id1': ['X22', 'X13',np.nan,'X02','X14'],'id2': ['Y1','Y2','Y3','Y4',np.nan],'VAL1':[1,0,2,3,0]}F1 = pd.DataFrame(data=d2)d2 = {'id1': ['X02', 'X13',np.nan,'X22','X14'],'id2': ['Y4','Y2','Y3','Y1','Y22'],'VAL2':[1,0,4,3,1]}F2 = pd.DataFrame(data=d2)Where F1 is: id1 id2 VAL10 X22 Y1 11 X13 Y2 02 NaN Y3 23 X02 Y4 34 X14 NaN 0and F2 is: id1 id2 VAL20 X02 Y4 11 X13 Y2 02 NaN Y3 43 X22 Y1 34 X14 Y22 1Expected output:d2 = {'id1': ['X02',np.nan,'X22','X14'],'id2': ['Y4','Y3','Y1',np.nan],'VAL1':[3,2,1,0],'VAL2':[1,4,3,1]}F3 = pd.DataFrame(data=d2) id1 id2 VAL1 VAL20 X02 Y4 3 11 NaN Y3 2 42 X22 Y1 1 33 X14 NaN 0 1 | Ok it is a rather complex merge, because you want to merge on 2 columns, and any of them can contain NaN which should match anything (but not both).I would to 2 separate merges:first where id1 is not NaN in F1 on id1second where id1 is NaN in F1 on id2In both resultant dataframe, I would only keep rows where:VAL1 != VAL2AND (F1.id2 == F2.id2 or F1.id2 is NaN or F2.id2 is NaN)Then I would concat them. Code could be:t = F1.loc[~F1['id1'].isna()].merge(F2, on=['id1']).query('VAL1!=VAL2')t = t[(t.id2_x==t.id2_y)|t.id2_x.isna()|t.id2_y.isna()]t2 = F1.loc[F1['id1'].isna()].merge(F2, on=['id2']).query('VAL1!=VAL2')t2 = t2[(t2.id1_x==t2.id1_y)|t2.id1_x.isna()|t2.id1_y.isna()]# build back lost columnst['id2'] = np.where(t['id2_x'].isna(), t['id2_y'], t['id2_x'])t2['id1'] = np.where(t2['id1_x'].isna(), t2['id1_y'], t2['id1_x'])# concat and reorder the columnsresul = pd.concat([t.drop(columns=['id2_x', 'id2_y']), t2.drop(columns=['id1_x', 'id1_y'])], ignore_index=True, sort=True).reindex(columns= ['id1', 'id2', 'VAL1', 'VAL2'])Result is: id1 id2 VAL1 VAL20 X22 Y1 1 31 X02 Y4 3 12 X14 Y22 0 13 NaN Y3 2 4 |
Print item from list in chunks of 50 in Python I have a list with 2,500 items. I want to print the first 50 items in a line and the next 50 items in the next line. So there will be a total of 50 lines with 50 items in each line.myList = ['item1', item2,..., 'item2500'] line1 = item1, item2, ..., item50line2 = item51, item52,...., item100..line 50 = item2451, item2452,...., item 2500Tried some while loops but it didn't quite work out. Is there a built-in function or an easier way to do this? Thank you. | Same thing really, but it looks nicer with a reusable chunks function as a generator, I think.def chunks_of_n(l,n): for i in xrange(0, len(l), n): yield l[i:i+n]def show_my_list_in_chunks(l): for chunk in chunks_of_n(l,50): print ', '.join(chunk)
How to import zip image folder as data in a cnn model? I am trying to run a basic cnn model on cats vs dogs images.The images exist as zip file.How can we extract the images in the zip folder to a directory using the kaggle online notebook?enter image description here | Try unzipping it :!unzip "filePath" |
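If you would rather extract it in Python than with the shell command, the standard library's zipfile module does the same job (the paths below are only examples — point them at your zip and a destination folder):

import zipfile

with zipfile.ZipFile("../input/dogs-vs-cats/train.zip", "r") as zf:
    zf.extractall("train")   # unpacks the images into ./train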
How to convert multiple yolo darknet format into .csv file First I have Yolo darknet around 321 txt files. Each file contain of text 5 or 6 row.as example below.1 0.778906 0.614583 0.0828125 0.09583330 0.861719 0.584375 0.0984375 0.106250 0.654688 0.6 0.14375 0.1250 0.254687 0.663542 0.146875 0.1395830 0.457031 0.64375 0.120312 0.1083330 0.960938 0.566667 0.078125 0.129167 (First column : 1 or 0 is class and another columns is coordinate x,y,w,h)I try to convert to a csv file and I found solution as below.os.chdir(r'C:\xxx\labels')myFiles = glob.glob('*.txt') width=1024height=1024image_id=0final_df=[]for item in myFiles: row=[] bbox_temp=[] with open(item, 'rt') as fd: first_line = fd.readline() splited = first_line.split(); row.append(fd.readline(1)) row.append(width) row.append(height) try: bbox_temp.append(float(splited[1])*width) bbox_temp.append(float(splited[2])*height) bbox_temp.append(float(splited[3])*width) bbox_temp.append(float(splited[4])*height) row.append(bbox_temp) final_df.append(row) except: print("file is not in YOLO format!")df = pd.DataFrame(final_df,columns=['image_id', 'width', 'height','bbox'])df.to_csv("saved.csv",index=False)and got output.But this code make CSV file only first line of Yolo Darknet txt.I want to get all of row (5 or 6 row for each text file.)If code is work. CSV should has 321* 5 or 6 = 1,xxx rows x 4 columns.Please help me for adjust this code. | Can you just replace first_line = fd.readline() with for first_line in fd.readlines(): and indent the remainder of the code?You may also need to move row=[] and bbox_temp=[] into the new for loop. |
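A sketch of the answer's suggestion applied to the loop from the question: iterate over every line of each .txt file instead of only the first one, building one output row per line. It reuses myFiles, width, height and final_df from the question, and uses the file name as the image_id (an assumption — adjust to whatever id you need):

for item in myFiles:
    with open(item, 'rt') as fd:
        for line in fd.readlines():
            splited = line.split()
            if not splited:
                continue                      # skip blank lines
            row = [item, width, height]
            try:
                bbox_temp = [float(splited[1]) * width,
                             float(splited[2]) * height,
                             float(splited[3]) * width,
                             float(splited[4]) * height]
                row.append(bbox_temp)
                final_df.append(row)
            except (IndexError, ValueError):
                print("file is not in YOLO format!")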
Why are these two strings not equal in Python? I have a simple Python code sampleimport jsonhello = json.dumps("hello")print(type(hello))if hello == "hello": print("They are equal")else: print("They are not equal")This is evaluating to "They are not equal". I don't understand why these values are not equal.I'm re-familiarizing myself with Python but I read that this "==" can be used as an operator to compare strings in Python. I also printed the type of hello which evaluates to "str"Can someone clarify this? | The behavior becomes much more clear once you print out the result from json.dumps():print("hello", len("hello"))print(hello, len(hello))This outputs:hello 5"hello" 7json.dumps() adds extra quotation marks -- you can see that the lengths of the two strings aren't the same. This is why your check fails. |
Issue using progress_recorder (celery-progress): extends time of task I want to use celery-progress to display progress bar when downloading csv filesmy task loop over list of cvs files, open each files, filtered data and produce a zip folder with csv filtered files (see code below)but depending where set_progress is called, task will take much more timeif I count (and set_progress) for files processed, it is quite fast even for files with 100000 recordsbut if I count for records in files, that would be more informative for user, it extends time by 20I do not understand whyhow can I manage this issuefor file in listOfFiles: # 1 - count for files processed i += 1 progress_recorder.set_progress(i,numberOfFilesToProcess, description='Export in progess...') records = [] with open(import_path + file, newline='', encoding="utf8") as csvfile: spamreader = csv.reader(csvfile, delimiter=',', quotechar='|') csv_headings = ','.join(next(spamreader)) for row in spamreader: # 2 - count for records in each files processed (files with 100000 records) # i += 1 # progress_recorder.set_progress(i,100000, description='Export in progess...') site = [row[0][positions[0]:positions[1]]] filtered_site = filter(lambda x: filter_records(x,sites),site) for site in filtered_site: records.append(','.join(row)) | If there is a very high number of records then likely there's no need to update the progress on every one, and the overhead of updating the progress in the backend every time could become substantial. Instead you could do something like this in the inner loop:if i % 100 == 0: # update progress for every 100th entry progress_recorder.set_progress(i,numberOfFilesToProcess, description='Export in progress...') |
how to remove axis label while keeping ticklabel and tick in matplotlib I would like to remove axis label with keeping tick, ticklabel.This is seaborn heatmap example.In this case, I'd like to remove only yaixs label('month') and xaxis label('year') label.I tried to use follows but I couldn't remove only labels.ax.xaxis.set_visible(False) ax.set_xticklabels([]) ax.set_xticks([]) Codes is follow.import seaborn as snsimport matplotlib.pylab as pltfig, ax = plt.subplots()flights = sns.load_dataset("flights")flights = flights.pivot("month", "year", "passengers")ax = sns.heatmap(flights)#ax.xaxis.set_visible(False) # remove axis label & xtick label #ax.set_xticklabels([]) # remove xtick label #ax.set_xticks([]) # remove xtick label & tickplt.show() | ax.set_xlabel('')ax.set_ylabel('') |
PySpark add column if date in range by quarter I have a df as follows:name date x 2020-07-20y 2020-02-13z 2020-01-21I need a new column with the corresponding quarter as an integer, e.g.name date quarterx 2020-07-20 3y 2020-02-13 1 z 2020-01-21 1I have defined my quarters as a list of strings so I thought I could use .withColumn + when col('date') in quarter range but get an error saying I cannot convert column to boolean. | You can use quarter function to extract it as an integer.from pyspark.sql.functions import *df1=spark.createDataFrame([("x","2020-07-20"),("y","2020-02-13"),("z","2020-01-21")], ["name", "date"])df1.show()+----+----------+|name| date|+----+----------+| x|2020-07-20|| y|2020-02-13|| z|2020-01-21|+----+----------+df1.withColumn("quarter", quarter(col("date"))).show()+----+----------+-------+|name| date|quarter|+----+----------+-------+| x|2020-07-20| 3|| y|2020-02-13| 1|| z|2020-01-21| 1|+----+----------+-------+ |
Unexpected valid syntax in annotated assignment expression Apparently (and surprisingly, to me at least), this is a perfectly valid expression in Python 3.6+:x: 10What is up with this? I checked it out using the ast module and got the following:[ins] In [1]: ast.parse('x: 10').bodyOut[1]: [<_ast.AnnAssign at 0x110ff5be0>]Okay so it's an annotated assignment. I looked up the grammar reference and saw that it corresponds to this rule:annassign: ':' test ['=' test]This doesn't make a lot of sense to me. If it's an annotated assignment, then why is the assignment portion of the expression optional? What could it possibly mean if the assignment portion of the expression isn't present? Isn't that really strange? The annasign node is only referenced in one rule:expr_stmt: testlist_star_expr (annassign | augassign (yield_expr|testlist) | ('=' (yield_expr|testlist_star_expr))*)In each of the other possible projections at that level, some kind of assignment expression is required (augassign is a token like +=). So why is it optional for annassign?I guess it's plausible that this is intended to be the annotated version of a bare name expression (i.e. just x), but it's really quite confusing. I'm not too familiar with the static type checkers out there but can they make use of an annotation like this?More than likely this is intentional, but it kind of seems like a bug. It's a little bit problematic, because it's possible to write syntactically valid but utterly nonsensical code like this:a = 1b = 2c: 3 # see what I did there? oops!d = 4I recently made a similar mistake in my own code when I converted a dict representation into separate variables, and only got caught out when my test pipeline ran in a Python 3.5 environment and produced a SyntaxError.Anyway, I'm mostly just curious about the intent, but would also be really excited to find out that I discovered an actual grammar bug. | It's classified as an annotated assignment for parser reasons. If there were separate annotation and annassign rules, Python's LL(1) parser wouldn't be able to tell which one it's supposed to be parsing when it sees the colon. |
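A small illustration of what such a bare annotation actually does at module level (default behaviour in Python 3.6–3.9, where annotation expressions are evaluated eagerly): the expression is recorded in __annotations__, but no name is bound.

x: 10                      # syntactically valid; no assignment happens

print(__annotations__)     # {'x': 10}
try:
    print(x)
except NameError as err:
    print(err)             # name 'x' is not defined -- x was never bound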
Downloading excel file from url in pandas (post authentication) I am facing a strange problem, I dont know much about for my lack of knowledge of html. I want to download an excel file post login from a website. The file_url is:file_url="https://xyz.xyz.com/portal/workspace/IN AWP ABRL/Reports & Analysis Library/CDI Reports/CDI_SM_Mar'20.xlsx"There is a share button for the file which gives the link2 (For the same file):file_url2='http://xyz.xyz.com/portal/traffic/4a8367bfd0fae3046d45cd83085072a0'When I use requests.get to read link 2 (post login to a session) I am able to read the excel into pandas. However, link 2 does not serve my purpose as I cant schedule my report on this on a periodic basis (by changing Mar'20 to Apr'20 etc). Link1 suits my purpose but gives the following on passing r=requests.get in the r.content method:b'\n\n\n\n\n\n\n\n\n\n<html>\n\t<head>\n\t\t<title></title>\n\t</head>\n\t\n\t<body bgcolor="#FFFFFF">\n\t\n\n\t<script language="javascript">\n\t\t<!-- \n\t\t\ttop.location.href="https://xyz.xyz.com/portal/workspace/IN%20AWP%20ABRL/Reports%20&%20Analysis%20Library/CDI%20Reports/CDI_SM_Mar\'20.xlsx";\t\n\t\t-->\n\t</script>\n\t</body>\n</html>'I have tried all encoding decoding of url but cant understand this alphanumeric url (link2). My python code (working) is:import requestsurl = 'http://xyz.xyz.com/portal/site'username=''password=''s = requests.Session()headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36'}r = s.get(url,auth=(username, password),verify=False,headers=headers)r2 = s.get(file_url,verify=False,allow_redirects=True)r2.content# df=pd.read_excel(BytesIO(r2.content)) | You get HTML with JavaScript which redirects browser to new url. But requests can't run JavaScript. it is simple methods to block some simple scripts/bots.But HTML is only string so you can use string's functions to get url from string and use this url with requests to get file.content = b'\n\n\n\n\n\n\n\n\n\n<html>\n\t<head>\n\t\t<title></title>\n\t</head>\n\t\n\t<body bgcolor="#FFFFFF">\n\t\n\n\t<script language="javascript">\n\t\t<!-- \n\t\t\ttop.location.href="https://xyz.xyz.com/portal/workspace/IN%20AWP%20ABRL/Reports%20&%20Analysis%20Library/CDI%20Reports/CDI_SM_Mar\'20.xlsx";\t\n\t\t-->\n\t</script>\n\t</body>\n</html>'text = content.decode()print(text)print('\n---\n')start = text.find('href="') + len('href="')end = text.find('";', start)url = text[start:end]print('url:', url)response = s.get(url)Results:<html> <head> <title></title> </head> <body bgcolor="#FFFFFF"> <script language="javascript"> <!-- top.location.href="https://xyz.xyz.com/portal/workspace/IN%20AWP%20ABRL/Reports%20&%20Analysis%20Library/CDI%20Reports/CDI_SM_Mar'20.xlsx"; --> </script> </body></html>---url: https://xyz.xyz.com/portal/workspace/IN%20AWP%20ABRL/Reports%20&%20Analysis%20Library/CDI%20Reports/CDI_SM_Mar'20.xlsx |
How to find all symlinks in directory and its subdirectories in python I need to list symlinks using python. Broken aswellHow do I do it? I was searching everywhere and tried alot.The best result I found was:import os,sysprint '\n'.join([os.path.join(sys.argv[1],i) for i in os.listdir(sys.argv[1]) if os.path.islink(os.path.join(sys.argv[1],i))])It does not show where its linked to and it doesn't go to subdirs. | You can use a code similar to this one to achieve what you need. Directories to search are passed as arguments or current directory taken as the default. You can modify this further with the os.walk method to make it recursive. import sys, osdef lll(dirname): for name in os.listdir(dirname): if name not in (os.curdir, os.pardir): full = os.path.join(dirname, name) if os.path.isdir(full) and not os.path.islink(full): lll(full) elif os.path.islink(full): print(name, '->', os.readlink(full))def main(args): if not args: args = [os.curdir] first = 1 for arg in args: if len(args) > 1: if not first: print() first = 0 print(arg + ':') lll(arg)if __name__ == '__main__': main(sys.argv[1:])Ref: https://github.com/python/cpython/blob/master/Tools/scripts/lll.py |
openpyxl.load_workbook(file, data_only=True) doesn't work? Why does x = "None" instead of "500"?I have tried everything that I know and searched 1 hour for an answer...Thank you for any help!import openpyxlwb = openpyxl.Workbook()sheet = wb.activesheet["A1"] = 200sheet["A2"] = 300sheet["A3"] = "=SUM(A1+A2)"wb.save("writeFormula.xlsx")wbFormulas = openpyxl.load_workbook("writeFormula.xlsx")sheet = wbFormulas.activeprint(sheet["A3"].value)wbDataOnly = openpyxl.load_workbook("writeFormula.xlsx", data_only=True)sheet = wbDataOnly.activex = (sheet["A3"].value)print(x) # None? Should print 500? | As per the documentation, openpyxl never evaluates formulas. With data_only=True you only get the value that was cached for the cell the last time the file was saved by an application that does evaluate formulas (e.g. Excel). This workbook was written by openpyxl and never opened in Excel, so A3 has no cached result and its value is None.
How to progammatically pass the callable to gunicorn instead of arguments I have the following implementation to spin up a web app using [email protected]("run_app", help="starts application in gunicorn")def run_uwsgi(): """ Runs the project in gunicorn """ import sys sys.argv = ["--gunicorn"] sys.argv.append("-b 0.0.0.0:5000") sys.argv.append("myapp.wsgi:application") WSGIApplication(usage="%(prog)s [OPTIONS] [APP_MODULE]").run()This will spin up the app using gunicorn, as per the requirement how to spin this up without using arguments? Is there a way to assign sys.argv values to the gunicorn ? | I would like to post the solution which I have worked out @click.command("uwsgi", help="starts application in gunicorn")def run_uwsgi(): """ Runs the project in gunicorn """ from gunicorn.app.base import Application import sys class MyApplication(Application): """ Bypasses the class `WSGIApplication` and made it independent from command line arguments """ def init(self, parser, opts, args): self.cfg.set("default_proc_name", args[0]) # Added this to ensure the application integrity self.app_uri = "myapp.wsgi:application" def load_wsgiapp(self): # This would do the trick # returns application callable return application def load(self): return self.load_wsgiapp() sys.argv = ["--gunicorn"] sys.argv.append(f"-b {os.environ['APP_HOST']}:{os.environ['APP_PORT']}") # Throws an error if this is missing. sys.argv.append("myapp.wsgi:application") MyApplication(usage="%(prog)s [OPTIONS] [APP_MODULE]").run()I have directly returning the callable from def load_wsgiapp(self): # This would do the trick # returns application callable return applicationwsgi.pyapplication = main.create_app()But still needed to pass a command line argument for the module, other wise it throws an error.If you are using Nuitka to bundle your application you can spin it up and use with gunicorn. |
NLTK - Lemmatizing the tokens before being chunked I am currently stuck in this problem. NLTK's Chunking function is like this:tokens = nltk.word_tokenize(word)tagged = nltk.pos_tag(tokens)chunking = nltk.chunk.ne_chunk(tagged)is there any way to lemmatize the tokens with its tag before being chunked? Like lmtzr.lemmatize('tokens, pos=tagged)I have tried to lemmatize the chunk, but it is not working (error says something about chunking being a list). I am new to python, so my knowledge about it isn't that great. Any help would be great! | You can lemmatize directly without pos_tag - import nltkfrom nltk.corpus import wordnetlmtzr = nltk.WordNetLemmatizer()word = "Here are words and cars"tokens = nltk.word_tokenize(word)token_lemma = [ lmtzr.lemmatize(token) for token in tokens ]tagged = nltk.pos_tag(token_lemma)chunking = nltk.chunk.ne_chunk(tagged)Output['Here', 'are', 'word', 'and', 'car'] # lemmatize output[('Here', 'RB'), ('are', 'VBP'), ('word', 'NN'), ('and', 'CC'), ('car', 'NN')](S Here/RB are/VBP word/NN and/CC car/NN) |
Add a row to pandas dataframe based on dictionary Here is my example dataframe row:A B C D EI have a dictionary formatted like:{'foo': ['A', 'B', 'C'], 'bar': ['D', 'E']}I would like to add a row above my original dataframe so my new dataframe is:foo foo foo bar bar A B C D EI think maybe the df.map function should be able to do it, but I've tried it and can't seem to get the syntax right. | I believe you want set columns names by row of DataFrame with dict and map:d = {'foo': ['A', 'B', 'C'], 'bar': ['D', 'E']}#swap keys with valuesd1 = {k: oldk for oldk, oldv in d.items() for k in oldv}print (d1){'E': 'bar', 'A': 'foo', 'D': 'bar', 'B': 'foo', 'C': 'foo'}df = pd.DataFrame([list('ABCDE')])df.columns = df.iloc[0].map(d1).valuesprint (df) foo foo foo bar bar0 A B C D EIf need set first row in one row DataFrame:df = pd.DataFrame([list('ABCDE')])df.loc[-1] = df.iloc[0].map(d1)df = df.sort_index().reset_index(drop=True)print (df) 0 1 2 3 40 foo foo foo bar bar1 A B C D E |
Python's print function in a class I can't execute the print function in the class:#!/usr/bin/pythonimport sysclass MyClass: def print(self): print 'MyClass'a = MyClass()a.print()I'm getting the following error:File "./start.py", line 9 a.print() ^SyntaxError: invalid syntaxWhy is it happening? | In Python 2, print is a keyword. It can only be used for its intended purpose. It can't be the name of a variable or a function.In Python 3, print is a built-in function, not a keyword. So methods, for example, can have the name print.If you are using Python 2 and want to override its default behavior, you can import Python 3's behavior from __future__:from __future__ import print_functionclass MyClass: def print(self): print ('MyClass')a = MyClass()a.print()
creating event gives 400 bad request - Google calendar API using AUTHLIB (Python package) This is my request to create an event (using AuthLib from PyPI):

resp = google.post(
    'https://www.googleapis.com/calendar/v3/calendars/primary/events',
    data={
        'start': {'dateTime': today.strftime("%Y-%m-%dT%H:%M:%S+05:30"), 'timeZone': 'Asia/Kolkata'},
        'end': {'dateTime': tomorrow.strftime("%Y-%m-%dT%H:%M:%S+05:30"), 'timeZone': 'Asia/Kolkata'},
        'reminders': {'useDefault': True},
    },
    token=dict(session).get('token'))

The response I'm getting:

{"error":{"code":400,"errors":[{"domain":"global","message":"Bad Request","reason":"badRequest"}],"message":"Bad Request"}}

Notes:
I have done GET requests with the same library and methods and they work.
Scopes included (as mentioned in the documentation):
https://www.googleapis.com/auth/calendar
https://www.googleapis.com/auth/calendar.events
Also note: I have gone through all existing Stack Overflow answers regarding this specific error and API endpoint, and most of them have an issue with their time formatting, which I have all tried; for now I'm sticking with the Google API docs time format. | You can try the gcsa library (documentation). It handles time formatting for you:

from gcsa.google_calendar import GoogleCalendar
from gcsa.event import Event

event = Event(
    'My event',
    start=today,
    end=tomorrow,
    timezone='Asia/Kolkata')

calendar = GoogleCalendar()  # assumption: the no-argument constructor targets the 'primary' calendar using local OAuth credentials
calendar.add_event(event)

Install it with

pip install gcsa
Top and right axes labels in matplotlib pyplot I have a matplotlib/pyplot plot that appears as I want, in that the axes show the required range of values from -1 to +1 on both the x and y axes. I have labelled the x and y axes. However I also wish to label the right-hand vertical axis with the text "Thinking" and the top axis with the text "Extraversion".I have looked at the matplotlib documentation but can't get my code to execute using set_xlabel and set_ylabel. I have commented these lines out in my code so my code runs for now - but hopefully the comments will make it clear enough what I am trying to do.import matplotlib.pyplot as pltw = 6h = 6d = 70plt.figure(figsize=(w, h), dpi=d)x = [-0.34,-0.155,0.845,0.66,-0.34]y = [0.76,0.24,-0.265,0.735,0.76,] plt.plot(x, y)plt.xlim(-1,1)plt.ylim(-1,1)plt.xlabel("Intraverted")plt.ylabel("Feeling")#secax = plt.secondary_xaxis('top')#secax.set_xlabel('Extraverted')#secay = plt.secondary_xaxis('right')#secay.set_ylabel('Thinking')#plt.show()plt.savefig("out.png") | As @Mr. T pointed out, there is no plt.secondary_xaxis method so you need the axes objectimport matplotlib.pyplot as pltplt.figure(figsize=(6, 6), constrained_layout=True, dpi=70)x = [-0.34,-0.155,0.845,0.66,-0.34]y = [0.76,0.24,-0.265,0.735,0.76,] plt.plot(x, y)plt.xlim(-1,1)plt.ylim(-1,1)plt.xlabel("Intraverted")plt.ylabel("Feeling")secax = plt.gca().secondary_xaxis('top')secax.set_xlabel('Extraverted')secay = plt.gca().secondary_yaxis('right')secay.set_ylabel('Thinking')#plt.show()plt.savefig("out.png")Better, would be just to create the axes object from the start:fig, ax = plt.subplots(figsize=(w, h), constrained_layout=True, dpi=d)...ax.plot(x, y)ax.set_xlim(-1, 1)...secax = ax.secondary_xaxis('top')...fig.savefig("out.png")Further note the use of constrained_layout=True to make the secondary yaxis label fit on the figure. |
How to create a groupby of two columns with all possible combinations and aggregated results I want to group a large dataframe over two or more columns and aggregate the other columns. I use groupby but realised after some time that groupby(label1, label2) only creates rows for existing combinations of label1 and label2. Example:lijst = [['a', 1, 3], ['b', 2, 6], ['a', 2, 7], ['b', 2, 2], ['a', 1, 8]]data = pd.DataFrame(lijst, columns=['letter', 'cijfer', 'getal'])data['Aantal'] = 0label1 = 'letter'label2 = 'cijfer'df = data.groupby([label1, label2]).agg({'Aantal': 'count', 'getal': sum})Result: Aantal getalletter cijfer a 1 2 11 2 1 7b 2 2 8And I wanted something like: Aantal getalletter cijfer a 1 2 11 2 1 7b 1 NaN NaN 2 2 8I tried this link and several others but they all don't handle the case of having to aggregate many columns (sorry if I haved missed it).The only solution I can thing of is making a template DataFrame from: template = pd.DataFrame(index=pd.MultiIndex.from_product([data[label1].unique(), data[label2].unique()]), columns=df.columns)and next copy all data over from df. That seems to me a very tedious solution. Is there a better solution to get what I want? | Use DataFrame.unstack with DataFrame.stack:df = df.unstack().stack(dropna=False)print (df) Aantal getalletter cijfer a 1 2.0 11.0 2 1.0 7.0b 1 NaN NaN 2 2.0 8.0Or another idea with DataFrame.reindex:df = df.reindex(pd.MultiIndex.from_product(df.index.levels))print (df) Aantal getalletter cijfer a 1 2.0 11.0 2 1.0 7.0b 1 NaN NaN 2 2.0 8.0 |
How to tell a asyncio.Protocol some information? After reading a post here, I thought that was possible to send arguments to my protocol factory using a lambda function, but for some reason, it just doesn't work (it doesn't recognize any connection).Since create_server doesn't accept arguments, how could I tell my protocol some useful information? I start a bunch of them using a loop for every door in a list, but after that, I can't relate to which protocol is which.Any ideas? | Alright, I found the problem.Instead of using the lambda like in the example:await asyncio.start_server(lambda r, w: handle_client(r, w, session), '', 55555)I should be using lambda like this:await asyncio.start_server(lambda: handle_client(r, w, session), '', 55555)I hope this may be helpful to someone else. |
Am I able to extend the range in a for loop? This is a problem from the Project Euler website and I can not seem to get it right. I want to extend the range of the for loop if the number i am trying to divide is not evenly divisible by x. The question is at the top. Any ideas?# 2520 is the smallest number that can be divided by each of the numbers from 1 to 10 without any remainder.# What is the smallest positive number that is evenly divisible by all of the numbers from 1 to 20?def main(): num = 0 to = 20 for num in range(1, to): isDivisible = True for x in range(1, 20): if num % x != 0: isDivisible = False to += 1 #Here I try to extend the loop continue if isDivisible: print(num) breakmain() | I'm not sure it's true but:def f(x): z = 1 for i in range(1, x + 1): for i_2 in range(1,i + 1): #Check the multipliers! For example, since the number 9 is multiplied by 3 before, if it is multiplied by 3 again, it becomes a layer. if i % i_2 == 0 and ( z * i_2 ) % i == 0: z *= i_2 break print(z)f(20)232792560 |
1 px thick line cv2 I need to draw a line in an image where no pixel is thicker than 1 pixel in the horizontal dimension.Despite I use thickness=1 in poly lines,cv2.polylines(img, np.int32([points]), isClosed=False, color=(255, 255, 255), thickness=1)in the resulting plot there may be 2 pixels horizontally adjacent set to 255, like in this pic:How can I prevent to have adjacent pixels set to 255? Or equivalently: what is an efficient way to set to 0 one of the 2?I thought to Erosion but then, in those lines where there is only 1 255 pixel, such a pixel would be set to 0 as well. | It looks like we need to use for loops.Removing one pixel out of two horizontally adjacent pixels is an iterative operation.I can't see a way to vectorize it, or use filtering or morphological operations.There could be something I am missing, but I think we need to use a loop.In case the image large, you may use Numba (or Cython) for accelerating the execution time.import cv2import numpy as npfrom numba import jit@jit(nopython=True) # Use Numba JIT for accelerating the code execution time def horiz_skip_pix(im): for y in range(im.shape[0]): for x in range(1, im.shape[1]): # Use logical operation instead of using "if statement". # We could have used an if statement, like if im[y, x]==255 and im[y, x-1]==255: im[y, x] = 0 ... im[y, x] = im[y, x] & (255-im[y, x-1])# Build sample input imagesrc_img = np.zeros((10, 14), np.uint8)points = np.array([[2,2], [5,8], [12,5], [12,2], [3, 2]], np.int32)cv2.polylines(src_img, [points], isClosed=False, color=(255, 255, 255), thickness=1)dst_img = src_img.copy()# Remove horizontally adjacent pixels.horiz_skip_pix(dst_img)# Show resultcv2.imshow('src_img', cv2.resize(src_img, (14*20, 10*20), interpolation=cv2.INTER_NEAREST))cv2.imshow('dst_img', cv2.resize(dst_img, (14*20, 10*20), interpolation=cv2.INTER_NEAREST))cv2.waitKey()cv2.destroyAllWindows()src_img:dst_img:I wouldn't call the result a "1 px thick line", but it meats the condition of "prevent to having adjacent pixels". |
Dataflow works with directrunner but not with dataflowrunner (PubSub to GCS) I'm doing a very simple pipeline with dataflow.It gets a raw data from pubsub and adds a timestamp then write to raw file (I tried parquet first).Code:class GetTimestampFn(beam.DoFn): """Prints element timestamp""" def process(self, element, timestamp=beam.DoFn.TimestampParam): timestamp_utc = float(timestamp) yield {'raw':str(element),"inserted_at":timestamp_utc}options = PipelineOptions(streaming=True)p = beam.Pipeline(DirectRunner(), options=options)parser = argparse.ArgumentParser()parser.add_argument('--input_topic',required=True)parser.add_argument('--output_parquet',required=True)known_args, _ = parser.parse_known_args(argv)raw_data = p | 'Read' >> beam.io.ReadFromPubSub(subscription=known_args.input_topic) raw_with_timestamp = raw_data | 'Getting Timestamp' >> beam.ParDo(GetTimestampFn())_ = raw_with_timestamp | 'Write' >> beam.io.textio.WriteToText(known_args.output_parquet,append_trailing_newlines=True ,file_name_suffix='.gzip')p.run().wait_until_finish()It works with direct runner but it fails on dataflowrunner with this message "Workflow failed."Job id: 2021-04-14_17_11_02-16453427249129279174How I'm running the job:python real_time_events.py \--region us-central1 \--input_topic 'projects/{project}/subscriptions/{subscription}' \--output_parquet 'gs://{bucket}/stream/' \--project "{project}" \--temp_location "gs://{bucket}/tmp" \--staging_location "gs://{bucket}/stage" Any ideas on how to solve ? | WriteToText does not support streaming pipelines. I recommend you try using fileio.WriteToFiles instead, to get a transform that supports streaming. Note that you may need to group before it. |
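A hedged sketch of that suggestion, under the assumption that a fixed window plus an explicit text sink is acceptable (the window size, the decode step, and the sink choice are illustrative, not taken from the answer):

import apache_beam as beam
from apache_beam.io import fileio
from apache_beam.transforms import window

lines = (p
         | 'Read' >> beam.io.ReadFromPubSub(subscription=known_args.input_topic)
         | 'Decode' >> beam.Map(lambda b: b.decode('utf-8'))        # Pub/Sub delivers bytes
         | 'Window' >> beam.WindowInto(window.FixedWindows(60)))    # 60s windows so files can be finalized
_ = lines | 'Write' >> fileio.WriteToFiles(path=known_args.output_parquet,
                                           sink=lambda dest: fileio.TextSink())  # assumption: one text line per element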
Why can't it read the xml data properly? Okay first of all, in my experience as a beginner in programming I never had encounter this kind of weirdness in my whole life.Hello I have a very large xml file and I cannot show it here but I can show the first part here as an imageAs you can see the arrows are pointing is the very first tag along with its respective children. Now I have here a program that reads that LARGE xml file and as you can see there is only the first 5 of it, here is the codedef parse(word,translator,file): language_target = "de" if os.path.isfile(file): data = [] #[(a,b,c),(a,b,c),(a,b,c)] file_path = os.path.join(os.getcwd(),file) nsmap = {"xml": "http://www.w3.org/XML/1998/namespace"} for event, elem in ET.iterparse(file_path,events=("start","end")): if event == "start" and elem.tag == "tu": temp_list = [] sentence = elem.find(f'.//tuv[@xml:lang="{language_target}"]', nsmap).find('seg', nsmap).text else: print("\nNo such File") os.system("pause")There's no need to give attention much on the parameters, those three just gets a word and a translator and a filename. Now on this part is where I read the LARGE xml filensmap = {"xml": "http://www.w3.org/XML/1998/namespace"} for event, elem in ET.iterparse(file_path,events=("start","end")): if event == "start" and elem.tag == "tu": temp_list = [] sentence = elem.find(f'.//tuv[@xml:lang="{language_target}"]', nsmap).find('seg', nsmap).textWhat happens there is it gets all of the tags cause I want to get the text of its children. Now look what happens here when I run the programIt says that it is a NoneType object, now I am wondering how can it be a NoneType object where there YOU CAN SEE it is definitely NOT a NoneType Object cause it has the corresponding data and this is the first data and how can it said that it is NoneType?<tu> <tuv xml:lang="de"><seg>- Gloucester? ! - Die sollten doch in Serfam sein.</seg></tuv> <tuv xml:lang="en"><seg>They should have surrendered when they had the chance!</seg></tuv></tu>Now look what happens when I put print() right below this code sentence = elem.find(f'.//tuv[@xml:lang="{language_target}"]', nsmap).find('seg', nsmap).textSo it would be like this:sentence = elem.find(f'.//tuv[@xml:lang="{language_target}"]', nsmap).find('seg', nsmap).textprint(sentence)As you can see now it works! however it stopped again on a specific data but I checked it and it is not NoneType there is DATA on that part and I am wondering why is it saying it to be NoneType. Also I am so mindblown by the fact that I just put a print() function below the sentence code and it made a lot of difference. Can someone help me with this? To be honest I am really mindblown by this and I do not know what is happening, I feel like there is a lack of understanding that I am having in reading the XML file with python. Can someone help me with it and guide me? maybe there's a better way to do this.Thank you so much! I really need your help stackoverflow community! Thank you!Also here I made a run again and got this resultJeremiah! Jetzt bist du dran!Jeremiah, this is a purge!Traceback (most recent call last): File "us24.py", line 76, in <module> parse(word,translator,file) File "us24.py", line 35, in parse english_translation = elem.find('.//tuv[@xml:lang="en"]', nsmap).find('seg', nsmap).text #Human TranslationAttributeError: 'NoneType' object has no attribute 'find'and it again said NoneType where in fact I looked at my xml file and there is DATA on it!<tu> <tuv xml:lang="de"><seg>Jeremiah! 
Jetzt bist du dran!</seg></tuv> <tuv xml:lang="en"><seg>Jeremiah, this is a purge!</seg></tuv> </tu> <tu> <tuv xml:lang="de"><seg>Suzaku!</seg></tuv> <tuv xml:lang="en"><seg>Suzaku-kun!</seg></tuv> </tu> <tu> <tuv xml:lang="de"><seg>- Cécile-san? !</seg></tuv> <tuv xml:lang="en"><seg>Cecil-san!</seg></tuv> </tu> <tu> <tuv xml:lang="de"><seg>- Hier ist's zu gefahrlich!</seg></tuv> <tuv xml:lang="en"><seg>It's dangerous here!</seg></tuv> </tu>The next one it should be reading is this , but it says NoneType how can it be Nonetype where you can see it has the correct data and why are the others working besides this one? :(<tu> <tuv xml:lang="de"><seg>Suzaku!</seg></tuv> <tuv xml:lang="en"><seg>Suzaku-kun!</seg></tuv> </tu> | I would simplify your parsing approach.Take advantage of the event-based parser and remove all .find() calls. Look at it this way: The parser presents to you all the elements in the XML, you only have to decide which ones you find interesting.In this case, the interesting elements have a certain tag name ('seg') and they need to be in a section with the right language. It's easy to have a Boolean flag (say, is_correct_language) that is toggled based on the xml:lang attribute of the previous <tuv> element.Since the start event is enough for checking attributes and text, we don't need the parser to notify us of end events at all:import xml.etree.ElementTree as ETdef parse(word, translator, file, language_target='de'): is_correct_language = False for event, elem in ET.iterparse(file, events=('start',)): if elem.tag == 'tuv': xml_lang = elem.attrib.get('{http://www.w3.org/XML/1998/namespace}lang', '') is_correct_language = xml_lang.startswith(language_target) elif is_correct_language and elem.tag == 'seg': yield elem.textusage:for segment in parse('', '', 'test.xml', 'de'): print(segment)Other notes:I've used a generator function (yield instead of return) as those tend to be more versatile.I don't think that using os.getcwd() is a good idea in the function, as this needlessly restricts the function's usefulness.By default Python will look in the current working directory anyway,so in the best case prefixing the filename with os.getcwd() issuperfluous. In the worst case you want to parse a file from adifferent directory and your function would needlessly break thepath."File exists" checks are useless. Just open the file (or call ET.iterparse() on it, same thing). Wrap the whole thing in try/except and catch the FileNotFoundError, if you want to handle this situation. |
How to fix ValueError: too many values to unpack (expected 2) I am trying to use the sorted() in python and trying to find the interpretation but I get this error.ValueError: too many values to unpack (expected 2)Here's my code:from treeinterpreter import treeinterpreter as tiX = processed_data[0]y = predictionrf = pickle.load(open("original/PermutationModelNew.sav", "rb"))prediction, bias, contributions = ti.predict(rf, X)print("Bias (trainset mean)", bias[0])c, feature = sorted(zip(contributions[0],X.columns))X is the test data and it looks like this:Age DailyRate DistanceFromHome ... BusinessTravel_ OverTime_ Over18_0 39 903 2 ... 2 1 1[1 rows x 28 columns]and y looks like this:[0]Can someone please help me to fix this? I am using this Example | Here is an example on how I was doing the sorting for the contribution (I was creating a dictionary with the results and then convert it into a DataFrame, but you don't have to use the dictionary):prediction, bias, contributions = ti.predict(model, chunk_data)contr_dict = {}for i in range(len(chunk_data)): contr_dict.setdefault('instance', []).append(i) contr_dict.setdefault('prediction', []).append(prediction[i][1]) contr_dict.setdefault('bias', []).append(bias[i][1]) contribution = contributions[i] # sort contributions in descending order ids = contribution[:, 1].argsort()[::-1] for j in ids: contr_dict.setdefault(chunk_data.columns[j][]).append(contribution[j][1])pd.DataFrame(contr_dict) |
Numpy Error: this is the wrong setup.py file to run while trying to install Numpy I tried to install Numpy library with VisualStudio Code (VS Code) used the terminal and official website for instructionsEven though I followed each step I keep getting "This is the wrong setup.py file to run error"I tried to update every element to not get an error, deleted and installed NumPy files in the directories which are in site-packages, and my anaconda files (i use jupyter as well but I need to implement this on my VSCode editor).I also tried to get in the NumPy file and triedpip install.python setup.py build_ext --inplaceI used this site's instructions as well to install NumPy:here I tried :python -m pip install --user numpybut keep getting the same error. What am I doing wrong? | In the screenshot you provided, I noticed that the installed module "numpy" exists in the "python3.7" folder, not in the "python3.8" you are currently using.This is where my environment and numpy are located:It is recommended that you use the shortcut key Ctrl+Shift+` to open a new terminal, VSCode will automatically enter the current environment, and then you can use "pip install numpy" to install numpy into "python3.8".Or you can switch the environment directly to the python3.7 environment that includes numpy.If it still doesn't work, you can uninstall numpy and reinstall it. ("pip uninstall numpy", "pip install numpy")Since we are using pip to install the module numpy, we can use "pip --version" to check the currently used pip version, the module is installed in this environment: |
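A hedged example of that reinstall from the VS Code terminal, making sure pip runs against the same interpreter that executes the script (the interpreter name python3.8 is illustrative; use whichever interpreter is selected in VS Code):

python3.8 -m pip uninstall numpy
python3.8 -m pip install numpy
python3.8 -c "import numpy; print(numpy.__version__)"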
Comparing items in an Excel file with Openpyxl in Python I am working with a big set of data: each row has 9 values in columns B to J (starting at B3:J3) and the data stretches down to B1325:J1325. Using Python and the openpyxl library, I need to get the biggest and second-biggest value of each row and print those to a new field in the same row. I already assigned values to single fields manually (headings), but cannot seem to even get the max value in my range automatically written to a new field. My code looks like the following:

for row in ws.rows['B3':'J3']:
    sumup = 0.0
    for cell in row:
        if cell.value != None:
            .........

It throws the error:

for row in ws.rows['B3':'J3']:
TypeError: 'generator' object has no attribute '__getitem__'

How could I get to my goal here? | You can use iter_rows to do what you want. Try this:

for row in ws.iter_rows('B3:J3'):
    sumup = 0.0
    for cell in row:
        if cell.value != None:
            ........

Check out this answer for more info: How we can use iter_rows() in Python openpyxl package?
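On current openpyxl versions the range is usually given with keyword arguments instead of a range string; a sketch covering the full B3:J1325 block from the question (the summing line is an illustrative completion of the elided body):

for row in ws.iter_rows(min_row=3, max_row=1325, min_col=2, max_col=10):  # B=2 ... J=10
    sumup = 0.0
    for cell in row:
        if cell.value is not None:
            sumup += cell.value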
How to format a CSV data with str values to float values by reading it? This is what the CSV looks like, and this is what Python shows after using the code:

def get_returns(file):
    return pd.read_csv(file + ".csv", index_col = 0, parse_dates = True).pct_change()

#example
df = get_returns("SP500")

It shows up with the following error:

TypeError: unsupported operand type(s) for /: 'str' and 'float'
result[mask] = op(xrav[mask], yrav[mask])

Anyone have an idea how to solve this? With this formatting of the data there is no problem (other web source, other dataset). For sure I could format it first in Excel before reading it, but in the long term that would be annoying. | It got sorted out completely. To avoid a) TypeError: unsupported operand type(s) for /: 'str' and 'float' (result[mask] = op(xrav[mask], yrav[mask])) and then b) IndexError: list index out of range, this solved it all:

def get_returns(file):
    dfcolumns = pd.read_csv(file + ".csv", nrows=1)
    return pd.read_csv(file + ".csv", index_col = 0, parse_dates = True,
                       dtype={"Open": float, "High": float, "Low": float, "Close": float},
                       usecols = list(range(len(dfcolumns.columns)))).pct_change()

BUT! Beforehand, in Excel, I had to change the dates from xx/xx/xx to xx-xx-xx and remove the "" characters by using search and replace with nothing.
How can I avoid overwriting the csv file I am writing the data into?

for page in range(start, end+1, 1):
    url = "http://wrf.meteo.kg/aws/index?AwsSearch%5Bid%5D=36927&AwsSearch%5Bdate_range%5D=20.10.2020+-+13.01.2021&page=" + str(page)
    handle = requests.get(url)
    doc = lh.fromstring(handle.content)
    tr_elements = doc.xpath('//tr')
    col = []
    i = 0
    for t in tr_elements[0]:
        i += 1
        name = t.text_content()
        col.append((name, []))
    for j in range(1, len(tr_elements)):
        T = tr_elements[j]
        if len(T) != 14:
            break
        i = 0
        for t in T.iterchildren():
            data = t.text_content()
            if i > 0:
                try:
                    data = int(data)
                except:
                    pass
            col[i][1].append(data)
            i += 1
    Dict = {title: column for (title, column) in col}
    df = pd.DataFrame(Dict)
    print(df)

So I am scraping similarly structured tables from multiple pages. That part works, but I cannot save the result as a CSV file without overwriting it. How can I do this? | Use the built-in csv library with 'append' mode. Hope it helps.
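A minimal sketch of that append idea applied to the per-page DataFrame df from the question (the output file name is illustrative, and the pandas to_csv call is shown as an equivalent of appending with the csv module):

import os

out_file = 'weather_pages.csv'
# write the header only on the first page, then keep appending rows
df.to_csv(out_file, mode='a', index=False, header=not os.path.exists(out_file))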
if xldate when I am trying to read date in column 0 using the code below, I get an error as given below. Can anyone help me? Thanks in advancewb = xlrd.open_workbook('call.xlsx')sheet = wb.sheet_by_index(0)for r in range(sheet.nrows): for c in range(sheet.ncols): if c==0: a1=sheet.cell_value(r,0) a1_as_date=datetime.datetime(*xlrd.xldate_as_tuple(a1,wb.datemode)) print(a1_as_date) else: print(sheet.cell_value(r,c))error a1_as_date=datetime.datetime(*xlrd.xldate_as_tuple(a1,wb.datemode)) File "C:\Users\user78\anaconda3\lib\site-packages\xlrd\xldate.py", line 95, in xldate_as_tuple if xldate < 0.00:TypeError: '<' not supported between instances of 'str' and 'float' | I understand that you received an error that < is not supported by the two types.I think you can try this out:if float(xldate) < 0.00: |
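If the crash comes from non-date cells such as a header row reaching xldate_as_tuple (an assumption; the traceback only shows that a string was passed in), a cell-type guard avoids it:

a1 = sheet.cell_value(r, 0)
if sheet.cell_type(r, 0) == xlrd.XL_CELL_DATE:
    a1_as_date = datetime.datetime(*xlrd.xldate_as_tuple(a1, wb.datemode))
    print(a1_as_date)
else:
    print(a1)  # not an Excel date, print the raw value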
TypeError at /api/register/ 'module' object is not callable I am trying to register users using django rest framework but this is the error i am getting, Please help Identify the issueTypeError at /api/register/'module' object is not callableRequest Method: POSTRequest URL: http://127.0.0.1:8000/api/register/Django Version: 3.1.5Exception Type: TypeErrorException Value:'module' object is not callableException Location: C:\Users\ben\PycharmProjects\buddyroo\lib\site-packages\rest_framework\generics.py, line 110, in get_serializerPython Executable: C:\Users\ben\PycharmProjects\buddyroo\Scripts\python.exePython Version: 3.8.5below is RegisterSerializerfrom django.contrib.auth.password_validation import validate_passwordfrom rest_framework import serializersfrom django.contrib.auth.models import Userfrom rest_framework.validators import UniqueValidatorclass RegisterSerializer(serializers.ModelSerializer): email = serializers.EmailField( required=True, validators=[UniqueValidator(queryset=User.objects.all())] ) password = serializers.CharField(write_only=True, required=True, validators=[validate_password]) password2 = serializers.CharField(write_only=True, required=True) class Meta: model = User fields = ('username', 'password', 'password2', 'email', 'first_name', 'last_name') extra_kwargs = { 'first_name': {'required': True}, 'last_name': {'required': True} } def validate(self, attrs): if attrs['password'] != attrs['password2']: raise serializers.ValidationError({"password": "Password fields didn't match."}) return attrs def create(self, validated_data): user = User.objects.create( username=validated_data['username'], email=validated_data['email'], first_name=validated_data['first_name'], last_name=validated_data['last_name'] ) user.set_password(validated_data['password']) user.save() return userand RegisterView.pyfrom django.contrib.auth.models import Userfrom rest_framework import genericsfrom rest_framework.permissions import IsAuthenticated, AllowAny # <-- Herefrom rest_framework.response import Responsefrom rest_framework.views import APIViewfrom api import UsersSerializer, RegisterSerializerclass RegisterView(generics.CreateAPIView): queryset = User.objects.all() serializer_class = RegisterSerializer permission_classes = (AllowAny,) | I suppose that the name of the module (file) where RegisterSerializer is defined is RegisterSerializer.py.If this is the case, in the RegisterView.py you are importing the module RegisterSerializer and not the class.So, it should befrom api.RegisterSerializer import RegisterSerializerIn Python it is common to have more than one class in one module, so I would advise you to rename your modules to serializers.py and views.py and put all your serializers and views there.Of course, if they are many, you may split this and create serializers/views packages and put several serializers/views modules there: user_serializers.py, whaterver_serializers.py... |
pandas: groupby with multiple conditions Would you, please help me, to group pandas dataframe by multiple conditions.Here is how I do it in SQL:with a as ( select high ,sum( case when qr = 1 and now = 1 then 1 else 0 end ) q1_bad,sum( case when qr = 2 and now = 1 then 1 else 0 end ) q2_bad from #tmp2 group by high)select a.high from awhere q1_bad >= 2 and q2_bad >= 2 and a.high is not nullHere is the part of the dataset:import pandas as pda = pd.DataFrame()a['client'] = range(35)a['high'] = ['02','47','47','47','79','01','43','56','46','47','17','58','42','90','47','86','41','56','55','49','47','49','95','23','46','47','80','80','41','49','46','49','56','46','31']a['qr'] = ['1','1','1','1','2','1','1','2','2','1','1','2','2','2','1','1','1','2','1','2','1','2','2','1','1','1','2','2','1','1','1','1','1','1','2']a['now'] = ['0','0','0','0','0','0','0','0','0','0','0','0','1','0','0','0','0','0','0','0','0','0','0','0','0','0','0','0','0','0','0','1','0','0','0']Thank you very much! | it's very similar, you need to define your columns ahead of the groupby then apply your operation.assuming you have actual integers and not strings.import numpy as npimport pandas as pda.assign(q1_bad = np.where((a['qr'].eq(1) & a['now'].eq(1)),1,0), q2_bad = np.where((a['qr'].eq(2) & a['now'].eq(1)),1,0)).groupby('high')[['q1_bad','q2_bad']].sum() q1_bad q2_badhigh 01 0 002 0 017 0 023 0 031 0 041 0 042 0 143 0 046 0 047 0 049 1 055 0 056 0 058 0 079 0 080 0 086 0 090 0 095 0 0for you extra where clause you can filter it one of many ways, but for ease we can add query at the end.a.dropna(subset='high').assign(q1_bad = np.where((a['qr'].eq(1) & a['now'].eq(1)),1,0), q2_bad = np.where((a['qr'].eq(2) & a['now'].eq(1)),1,0)).groupby('high')[['q1_bad','q2_bad']].sum().query('q2_bad >= 2 and q1_bad >= 2') |
How to run --upgrade with pipenv? Running (say for numpy) pipenv install --upgrade numpy tries to install --upgrade and numpy instead of normal pip behavior for --upgrade switch.Is anyone else having this problem?Edit:Everyone, stop using pipenv. It's terrible. Use poetry instead. | For pipenv use update command , not --upgrade switch. You can update a package with:pipenv update numpy See comments in documentation.This will also persist new version of package in Pipfile/Pipfile.lock, no manual editing needed. There was a bug with this command under certain scenarios, but hopefully it is fixed now. |
How to use the cl command? All, I found a piece of information on how to call c files in python, in these examples: there is a c file, which includes many other header files, the very beginning of this c files is #include Python.h, then I found that #include Python.h actually involves many many other header files, such as pystate.h, object.h, etc, so I include all the required header files. In an cpp IDE environment, it did not show errors. What I am trying to do is call this c code in python, so from ctypes import *, then it seems that a dll should be generated by code such as: cl -LD test.c -test.dll, but how to use the cl in this case? I used the cygwin: gcc, it worked fine. Could anyone help me with this i.e.: Call the C in python? Do I make myself clear? Thank you in advance!!Well, Now I feel it important to tell me what I did:The ultimate goal I wanna achieve is:I am lazy, I do not want to re-write those c codes in python, (which is very complicated for me in some cases), so I just want to generate dllfiles that python could call. I followed an example given by googleing "python call c", there are two versions in this examples: linux and windows:The example test.c:#include <windows.h>BOOL APIENTRY DllMain(HANDLE hModule, DWORD dwReason, LPVOID lpReserved) { return TRUE; } __declspec(dllexport) int multiply(int num1, int num2) { return num1 * num2; }Two versions:1, Complie under linuxgcc -c -fPIC test.c gcc -shared test.o -o test.so I did this in cygwin on my vista system, it works fine; :)2, Compile under windows:cl -LD test.c -test.dllI used the cl in windows command line prompt, it won't work!These are the python codes:from ctypes import * import os libtest = cdll.LoadLibrary(os.getcwd() + '/test.so') print test.multiply(2, 2) Could anyone try this and tell me what you get? thank you! | You will find the command line options of Microsoft's C++ compiler here.Consider the following switches for cl:/nologo /GS /fp:precise /Zc:forScope /Gd...and link your file using/NOLOGO /OUT:"your.dll" /DLL <your lib files> /SUBSYSTEM:WINDOWS /MACHINE:X86 /DYNAMICBASEPlease have a look at what those options mean in detail, I just listed common ones. You should be aware of their effect nonetheless, so try to avoid copy&paste and make sure it's really what you need - the documentation linked above will help you. This is just a setup I use more or less often.Be advised that you can always open Visual Studio, configure build options, and copy the command line invokations from the project configuration dialog.Edit:Ok, here is some more advice, given the new information you've edited into your original question. I took the example code of your simple DLL and pasted it into a source file, and made two changes:#include <windows.h>BOOL APIENTRY DllMain(HANDLE hModule, DWORD dwReason, LPVOID lpReserved){ return TRUE; } extern "C" __declspec(dllexport) int __stdcall multiply(int num1, int num2){ return num1 * num2; } First of all, I usually expect functions exported from a DLL to use stdcall calling convention, just because it's a common thing in Windows and there are languages who inherently cannot cope with cdecl, seeing as they only know stdcall. So that's one change I made.Second, to make exports more friendly, I specified extern "C" to get rid of name mangling. 
I then proceeded to compile the code from the command line like this:cl /nologo /GS /Zc:forScope /Gd c.cpp /link /OUT:"foobar.dll" /DL kernel32.lib /SUBSYSTEM:WINDOWS /MACHINE:X86If you use the DUMPBIN tool from the Visual Studio toolset, you can check your DLL for exports:dumpbin /EXPORTS foobar.dllSeeing something like this...ordinal hint RVA name 1 0 00001010 ?multiply@@[email protected] can notice the exported name got mangled. You'll usually want clear names for exports, so either use a DEF file to specify exports in more details, or the shortcut from above.Afterwards, I end up with a DLL that I can load into Python like this:In [1]: import ctypesIn [2]: dll = ctypes.windll.LoadLibrary("foobar.dll")In [3]: dll.multiplyOut[3]: <_FuncPtr object at 0x0928BEF3>In [4]: dll.multiply(5, 5)Out[4]: 25Note that I'm using ctypes.windll here, which implies stdcall. |
Python-Arduino USB communication floats error i have made a project that a python script communicates with arduino sending various data types.Well everything works great except when arduino sends back floats in some cases.For e.g:When arduino sends numbers 4112.5, -7631.5 python receive them correct In case of 4112.112, -7631.23 python receives 4112.11181641, -7631.22998047What is causing this??Python code: https://drive.google.com/file/d/1uKBTUW319oTh6YyZU9tOQYa3jXrUvZVF/viewimport os import struct import serialimport timeprint('HELLO WORLD!!!!\nI AM PYTHON READY TO TALK WITH ARDUINO\nINSERT PASSWORD PLEASE.')ser=serial.Serial("COM5", 9600) #Serial port COM5, baudrate=9600ser.close()ser.open() #open Serial Porta = int(raw_input("Enter number: ")) #integer objectb = int(raw_input("Enter number: ")) #integer objectc = float(raw_input("Enter number: ")) #float objectd = float(raw_input("Enter number: ")) #float objecttime.sleep(2) #wait ser.write(struct.pack("2i2f",a,b,c,d)) #write to port all all number bytesif a == 22 :if b == -22 :if c == 2212.113 : if d == -3131.111 : print("Congratulations!!! Check the ledpin should be ON!!!") receivedbytes=ser.read(16) #read from Serial port 16 bytes=2 int32_t + 2 floats from arduino (number1,number2,number3,number4,)=struct.unpack("2i2f",receivedbytes) #convert bytes to numbers print "Arduino also send me back ",str(number1),",",str(number2),",",str(number3),",",str(number4) else : print("WRONG PASSWORD") os.system("pause") #wait for user to press enterArduino code:https://drive.google.com/file/d/1ifZx-0PGtex-M4tu7KTsIjWSjLqxMvMz/viewstruct sendata { //data to send volatile int32_t a=53; volatile int32_t b=-2121; volatile float c=4112.5; volatile float d=-7631.5;};struct receive { //data to receive volatile int32_t a; //it will not work with int volatile int32_t b; volatile float c; volatile float d;};struct receive bytes;struct sendata values;const int total_bytes=16; //total bytes to sendint i;byte buf[total_bytes]; //each received Serial byte saved into byte arrayvoid setup() { Serial.begin(9600); pinMode(13,OUTPUT); //Arduino Mega ledpin }void loop() {}void serialEvent() { //Called each time Serial data is received if (Serial.available()==total_bytes){ //Receive data first saved toSerial buffer,Serial.available return how many bytes are saved.The Serial buffer space is limited. while(i<=total_bytes-1){ buf[i] = Serial.read(); //Save each byte from Serial buffer to byte array i++; } memmove(&bytes,buf,sizeof(bytes)); //Move each single byte memory location of array to memory field of the struct,the numbers are reconstructed from bytes. if (bytes.a==22){ //Access each struct number. if (bytes.b==-22){ if (bytes.c==2212.113){ if (bytes.d==-3131.111){ //If the password is right Serial.write((const uint8_t*)&values,sizeof(values)); //Write struct to Serial port. delay(100); digitalWrite(13,HIGH);//Turn ON LED. 
} } } } } } For further information you can also check my video:https://www.youtube.com/watch?v=yjfHwO3qSgY&t=7s | Well, after a few tests i came up that i can only send float numbers betwwen 8-bit Arduino and Python with maximum 3 decimal places with accuracy.I also wrote a non-struct code:Edit:added codeNON_STRUCTArduino side:https://drive.google.com/file/d/1lvgT-LqQa7DxDorFF0MTe7UMpBfHn6LA/view?usp=sharing//values to send//int32_t aa=53; int32_t bb=-2121;float cc=4112.3; //each float must have max 3 decimal places else it will #rounded to 3!!float dd=-7631.23;////***////////values to receive////int32_t a; //it will not work with int int32_t b;float c;float d;int i,e;/////****////void setup() { Serial.begin(9600); pinMode(13,OUTPUT); //Arduino Mega ledpin }void loop() {}void serialEvent() { //Called each time Serial data is received a=Serial.parseInt(); b=Serial.parseInt(); c=Serial.parseFloat(); d=Serial.parseFloat(); if (a==22){ //Access each struct number. if (b==-22){ if (c==2212.113){ if (d==-3131.111){ //If the password is right Serial.println(aa); Serial.println(bb); Serial.println(cc,3); //must be <=3 decimal places else it //will //rounded Serial.println(dd,3); //must be <=3 decimal places else it //will //rounded delay(100); digitalWrite(13,HIGH);//Turn ON LED. } } } }}Python side:https://drive.google.com/file/d/1gPKfhTvbd4vp4L4VrZuns95yQoekg-vn/view?usp=sharingimport os import struct import serialimport timeprint('HELLO WORLD!!!!\nI AM PYTHON READY TO TALK WITH ARDUINO\nINSERT PASSWORD PLEASE.')ser=serial.Serial("COM5", 9600) #Serial port COM5, baudrate=9600ser.close()ser.open() #open Serial Porta = int(raw_input("Enter number: ")) #integer objectb = int(raw_input("Enter number: ")) #integer objectc = float(format(float(raw_input("Enter number: ")), '.3f'))#float object # #<=3 #decimal placesd = float(format(float(raw_input("Enter number: ")), '.3f'))time.sleep(2) #wait ser.write(str(a).encode()) #convert int to string and write it to port ser.write('\n'.encode())ser.write(str(b).encode())ser.write('\n'.encode())ser.write(str(c).encode())ser.write('\n'.encode())ser.write(str(d).encode())ser.write('\n'.encode())if str(a) == "22" : if str(b) == "-22" : if str(c) == "2212.113" : if str(d) == "-3131.111" : print("Congratulations!!! 
Check the ledpin should be ON!!!") number1=int(ser.readline()) #read from Serial port convert to int number2=int(ser.readline()) number3=float(ser.readline()) ##read from Serial port convert to float #(3 #decimal places from arduino) number4=float(ser.readline()) print "Arduino also send me back ",str(number1),",",str(number2),",",str(number3),",",str(number4) else : print("WRONG PASSWORD")os.system("pause") #wait for user to press enterWITH_STRUCT better performanceArduino side:https://drive.google.com/file/d/153fuSVeMz2apI-JbDNjdkw9PQKHfGDGI/view?usp=sharingstruct sendata { //data to send volatile int32_t a=53; volatile int32_t b=-2121; volatile float c=4112.3; volatile float d=-7631.4; };struct receive { //data to receive volatile int32_t a; //it will not work with int volatile int32_t b; volatile float c; volatile float d;};struct receive bytes;struct sendata values;const int total_bytes=16; //total bytes to sendint i;byte buf[total_bytes]; //each received Serial byte saved into byte arrayvoid setup() { Serial.begin(9600); pinMode(13,OUTPUT); //Arduino Mega ledpin }void loop() {}void serialEvent() { //Called each time Serial data is received if (Serial.available()==total_bytes){ //Receive data first saved to Serial //buffer,Serial.available return how many bytes are saved.The Serial buffer //space //is limited. while(i<=total_bytes-1){ buf[i] = Serial.read(); //Save each byte from Serial buffer to //byte //array i++; } memmove(&bytes,buf,sizeof(bytes)); //Move each single byte memory //location of array to memory field of the struct,the numbers are //reconstructed //from bytes. if (bytes.a==22){ //Access each struct number. if (bytes.b==-22){ if (bytes.c==2212.113){ if (bytes.d==-3131.111){ //If the password is right Serial.write((const uint8_t*)&values,sizeof(values)); //Write //struct to Serial port. delay(100); digitalWrite(13,HIGH);//Turn ON LED. } } } } }}Python side:https://drive.google.com/file/d/1M6iWnluXdNzTKO1hfcsk3qi9omzMiYeh/view?usp=sharingimport os import struct import serialimport timeprint('HELLO WORLD!!!!\nI AM PYTHON READY TO TALK WITH ARDUINO\nINSERT PASSWORD PLEASE.')ser=serial.Serial("COM5", 9600) #Serial port COM5, baudrate=9600ser.close()ser.open() #open Serial Porta = int(raw_input("Enter number: ")) #integer objectb = int(raw_input("Enter number: ")) #integer objectc = float(format(float(raw_input("Enter number: ")), '.3f'))#float object #<=3 #decimal placesd = float(format(float(raw_input("Enter number: ")), '.3f'))time.sleep(2) #wait ser.write(struct.pack("2i2f",a,b,c,d)) #write to port all all number bytesif a == 22 : if b == -22 : if c == 2212.113 : if d == -3131.111 : print("Congratulations!!! Check the ledpin should be ON!!!") receivedbytes=ser.read(16) #read from Serial port 16 bytes=2 int32_t + 2 #floats from arduino (number1,number2,number3,number4,)=struct.unpack("2i2f",receivedbytes) #convert bytes to numbers number3=float(format(number3, '.3f')) #floats must be under 3 decimal #points else will be rounded number4=float(format(number4, '.3f')) print "Arduino also send me back ",str(number1),",",str(number2),",",str(number3),",",str(number4) else : print("WRONG PASSWORD")os.system("pause") #wait for user to press enterYoutube video: https://www.youtube.com/watch?v=yjfHwO3qSgY&t=170s |
python: Using class variable as counter in a multiply run method Lets say I have following simplified python code:class GUI: def __init__(self): self.counter = 0 self.f1c = 0 self.f2c = 0 def update(self): self.counter = self.counter + self.f1() self.counter = self.counter + self.f4() self.counter = self.counter + self.f2() self.counter = self.counter + self.f3() print(self.counter) if (self.counter > 4): print("doing update now") # do_update() self.counter = 0 def f1(self): if self.f1c < 2: self.f1c = self.f1c + 1 self.update() return 1 def f2(self): if self.f2c < 4: self.f2c = self.f2c + 1 self.update() return 0 def f3(self): return 1 def f4(self): return 0g = GUI()g.update()A sample output from a test run in my case is:6doing update now5doing update now43222While I would expect it to be:12345doing update now01In my understanding, it is not even possible in my code sample, that self.counter can go from 2 to 1 without doing do_update().What would be the correct way to do this?edit: Basically, what I want is, that do_update will only run after all other functions came to an end AND counter > 0. | You have some very complicated recursion going on here, and I honestly think you just got lucky in not creating an infinite loop.To see what's going on with what you have, I opened up your code with a debugger, and have annotated the calls just to trace the order of flow through the first couple print statements. Increasing indentation indicates a deeper nested function call, and the line number with the line itself is then listed. The comments indicate the current value of certain variables at each stage, and indicate when print statements happen:--Call-- 39| g.update() --Call-- 08| self.counter = self.counter + self.f1() #f1c is 0 --Call-- 22| self.update() #f1c is 1 --Call-- 08| self.counter = self.counter + self.f1() #f1c is 1 --Call-- 22| self.update() #f1c is 2 --Call-- 08| self.counter = self.counter + self.f1() #f1c is 2 --Return-- 23| return 1 #self.counter now == 1 --Call-- 09| self.counter = self.counter + self.f4() --Return-- 35| return 0 #self.counter now == 1 --Call-- 10| self.counter = self.counter + self.f2() #f2c is 0 --Call-- 28| self.update() #f2c is 1 --Call-- 08| self.counter = self.counter + self.f1() #f1c is 2 --Return-- 23| return 1 #self.counter now == 2 --Call-- 09| self.counter = self.counter + self.f4() --Return-- 35| return 0 #self.counter now == 2 --Call-- 10| self.counter = self.counter + self.f2() #f2c is 1 --Call-- 28| self.update() #f2c is 2 --Call-- 08| self.counter = self.counter + self.f1() #f1c is 2 --Return-- 23| return 1 #self.counter now == 3 --Call-- 09| self.counter = self.counter + self.f4() --Return-- 35| return 0 #self.counter now == 3 --Call-- 10| self.counter = self.counter + self.f2() #f2c is 2 --Call-- 28| self.update() #f2c is 3 --Call-- 08| self.counter = self.counter + self.f1() #f1c is 2 --Return-- 23| return 1 #self.counter now == 4 --Call-- 09| self.counter = self.counter + self.f4() --Return-- 35| return 0 #self.counter now == 4 --Call-- 10| self.counter = self.counter + self.f2() #f2c is 3 --Call-- 28| self.update() #f2c is 4 --Call-- 08| self.counter = self.counter + self.f1() #f1c is 2 --Return-- 23| return 1 #self.counter now == 5 --Call-- 09| self.counter = self.counter + self.f4() --Return-- 35| return 0 #self.counter now == 5 --Call-- 10| self.counter = self.counter + self.f2() #f2c is 4 --Return-- 29| return 0 #self.counter now == 5 --Call-- 11| self.counter = self.counter + self.f3() --Return-- 32| return 1 #self.counter now == 6 
### print(self.counter) ### #self.counter now == 6 ### print('doing update') #self.counter now == 0 --Return-- 18| #self.counter in prior stack frame was only 4 --Return-- 29| return 0 #self.counter now == 4 --Call-- 11| self.counter = self.counter + self.f3() --Return-- 32| return 1 #self.counter now == 5 ### print(self.counter) ### #self.counter now == 5 ### print('doing update') #self.counter now == 0 --Return-- 18| #self.counter in prior stack frame was only 3 |
GPU optimization with Keras I am running Keras on a Windows 10 computer with a GPU. I have gone from Tensorflow 1 to Tensorflow 2 and I now feel that fitting is much slower and hope for your advice.I am testing whether Tensorflow sees the GPU with the following statements:from tensorflow.python.client import device_libprint(device_lib.list_local_devices())K._get_available_gpus()giving the response[name: "/device:CPU:0"device_type: "CPU"memory_limit: 268435456locality {}incarnation: 17171012743200670970, name: "/device:GPU:0"device_type: "GPU"memory_limit: 6682068255locality { bus_id: 1 links { }}incarnation: 5711519511292622685physical_device_desc: "device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1"So, that seems to indicate that the GPU is working?I am training a modified version of ResNet50 with up to 10 images (257x257x2) as input. It has 4.3M trainable parameters. Training is very slow (could be several days). Part of the code is shown here:import os,cv2,sysimport numpy as npimport tensorflow as tfimport matplotlib.pyplot as pltimport scipy.ioimport h5pyimport timefrom tensorflow.python.keras import backend as Kfrom tensorflow.python.keras.models import load_modelfrom tensorflow.python.keras import optimizersfrom buildModelReduced_test import buildModelReducedfrom tensorflow.keras.utils import plot_modelK.set_image_data_format('channels_last') #set_image_dim_ordering('tf')sys.setrecursionlimit(10000)# Check that gpu is runningfrom tensorflow.python.client import device_libprint(device_lib.list_local_devices())K._get_available_gpus()# Generator to read one batch at a time for large datasetsdef imageLoaderLargeFiles(data_path, batch_size, nStars, nDatasets=0):--------- yield(train_in,train_target # Repository for parametersnStars = 10img_rows = 257img_cols = 257bit_depth = 16channels = 2num_epochs = 1batch_size = 8data_path_train = 'E:/TomoA/large/train2'data_path_validate = 'E:/TomoA/large/validate2'nDatasets_train = 33000nDatasets_validate = 8000nBatches_train = nDatasets_train//(batch_size)validation_steps = nDatasets_validate//(batch_size)output_width = 35; runSize = 'large'restartFile = ‘’#%% Train modelif restartFile == '': model = buildModelReduced(nStars,img_rows, img_cols, output_width,\ batch_size=batch_size,channels=channels, use_l2_regularizer=True) model.summary() plot_model(model, to_file='model.png', show_shapes=True) all_mae = [] adam=optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False) model.compile(optimizer='adam',loss='MSE',metrics=['mae'])history = model.fit_generator(imageLoaderLargeFiles(data_path_train,batch_size,nStars,nDatasets_train), steps_per_epoch=nBatches_train,epochs=num_epochs, validation_data=imageLoaderLargeFiles(data_path_validate,batch_size,nStars,nDatasets_validate), validation_steps=validation_steps,verbose=1,workers=0, use_multiprocessing=False, shuffle=False)print('\nSaving model...\n')if runSize == 'large': model.save(runID + '_' + runSize + '.h5')When I open Windows task manager and look at the GPU, I see that the memory allocation is 6.5GB, copying activity less than 1%, and CUDA about 4%. Disk activity is low, I read a cache of 1000 data sets at a time from an SSD. See screen clip below. I think that shows that the GPU is not working well. The CPU load is 19%. I use a batch size of 8, if I go higher, I get an resource exhausted error.Any ideas on how to proceed or where to find more information? Any rules of thumb on how to tune your run to exploit the GPU well? 
| It seems there is a bottleneck somewhere on a training operation that we can not detect by looking at Task Manager. It can be caused by I/O, GPU, or CPU. You should detect which part of the processing is slow with using an advanced tool.You can use TensorFlow Profiler to inspect all processes of the TensorFlow. Also, it gives suggestions that how you can speed up your processes. Here is a simple video tutorial about it. |
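A minimal sketch of switching the profiler on through the Keras TensorBoard callback (the log directory and the profiled batch number are illustrative assumptions):

import tensorflow as tf

# capture a trace of batch 10 for TensorBoard's Profile tab
tb_cb = tf.keras.callbacks.TensorBoard(log_dir='logs/profile', profile_batch=10)
history = model.fit_generator(..., callbacks=[tb_cb])
# then run: tensorboard --logdir logs/profile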
Extract images using xpath I have been trying to get information from this website https://www.leadhome.co.za/property/poortview-ah/roodepoort/lh-95810/magnificent-masterpiece-in-poortview- and I am having issues with getting all the images of the property; more specifically the URLthis is how the class looks like:<div class="lazy-image listing-slider-carousel-item lazy-image-loaded"> <div class="lazy-image-background" style="background-image: url(&quot;https://s3-eu-west-1.amazonaws.com/leadhome-listing-photos/025c90ab-9c87-47d5-b11c-1cfbce3f67f2-md.jpg&quot;);"></div></div>What I have so far: for item in response.xpath('//div[@class="lazy-image-background"]/*[starts-with(@style,"background-image")]/@style').getall(): yield {"image_link":item}But unfortunately this is empty. Any tips on what I'm doing wrong?Thank you! | If you inspect original html source of this webpage (CTRL + U on google Chrome webbrowser, !!!not html code from Crhome developer tools /elements section) you will see 2 important things:Images in tags like <div class="lazy-image listing-slider-carousel-item lazy-image-loaded"> as well as other data don't exists inside these html tags.All data stored inside script tag and inside window.REDUX_INITIAL_STATE javascript variable: In this case we can convert data from javascript variable into basic python dict format using python's built-in json module. The most complicated part of this task is to correctly fit content of that script tag into json.loads function. It should be strictly a text after window.REDUX_INITIAL_STATE = and before next javascript operation (in this case before the latest ; symbol).As result we will get this code:def parse(self, response): script_tag = [script for script in response.css("script::text").extract() if "window.REDUX_INITIAL_STATE = {" in script] script_data = json.loads(script_tag[0].split("window.REDUX_INITIAL_STATE = ")[-1][:-1], encoding="utf-8")As you can see on following debugger screenshot all data successfully converted:Images stored in script_data['app']['listing']['listing']['entity']['lh-95810']['images'] as list of dictionaries:lh-95810 is entity id so in updated code this entity id will be separately selected in order to be able to use it in other pages:def parse(self, response): script_tag = [script for script in response.css("script::text").extract() if "window.REDUX_INITIAL_STATE = {" in script] script_data = json.loads(script_tag[0].split("window.REDUX_INITIAL_STATE = ")[-1][:-1], encoding="utf-8") entity_key = [k for k in script_data['app']['listing']['listing']['entity'].keys()] images = [image["medium"] for image in script_data['app']['listing']['listing']['entity'][entity_key[0]]['images']]This website uses javascript to render data on webpage. Hovewer any javascript formed content have it's *roots in original html code.This approach uses only built-in json module and don't require css or Xpath selectors. |
How to solve tensor flow cpu dll not found error I have install tensorflow v2.1.0 with python version 3.6.6 and pip version 20.0.2. When i try to import tensorflow i got below error.C:\Users\Dexter>pythonPython 3.6.6 (v3.6.6:4cf1f54eb7, Jun 27 2018, 03:37:03) [MSC v.1900 64 bit (AMD64)] on win32Type "help", "copyright", "credits" or "license" for more information.>>> import tensorflowTraceback (most recent call last): File "C:\Users\Dexter\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow_core\python\pywrap_tensorflow.py", line 58, in <module> from tensorflow.python.pywrap_tensorflow_internal import * File "C:\Users\Dexter\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 28, in <module> _pywrap_tensorflow_internal = swig_import_helper() File "C:\Users\Dexter\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description) File "C:\Users\Dexter\AppData\Local\Programs\Python\Python36\lib\imp.py", line 243, in load_module return load_dynamic(name, filename, file) File "C:\Users\Dexter\AppData\Local\Programs\Python\Python36\lib\imp.py", line 343, in load_dynamic return _load(spec)ImportError: DLL load failed: The specified module could not be found.When i searched on google i always get tensorflow-gpu solution i don't have any graphic card in my system. below is info of my display driver. Please help me with this i stuck in this. I have c++ Redistributable for Visual Studio 2017 | As per installation instructions for Windows, Tensorflow 2.1.0 requires Microsoft Visual C++ Redistributable for Visual Studio 2015, 2017, and 2019 - which is what you are (partially) missing. Moreover, starting with the TensorFlow 2.1.0 version, the msvcp140_1.dll file is required from this package (which may not be provided from older redistributable packages). That's why you're getting the error. Install the missing packages following these instructions. In essence, grab the 2015, 2017 and 2019 Redistributable, all in single package, available from here. |
match and replace string items in list with string items from another list I have three lists - base, match, and replac ; match and replac are same lengthbase = ['abc', 'def', 'hjk']match = ['abc', 'hjk']replac = ['abcde', 'hjklm']I would like to modify the base list by matching string items in match and replace these with the same index item from replac.Expected output: base = ['abcde', 'def', 'hjklm'] | Here is how I'd do it:mapp = dict(zip(match,replac))res = [mapp[e] if e in mapp else e for e in base] |
Remove certain word using regular expression I have a string which is shown below:a = 'steven (0.00030s ). prince (0.00040s ). kavin (0.000330s ). 23.24.21'I want to remove the numbers inside () and the brackets and want to have it like this:a = 'steven prince kavin 23.24.21' | Use re.subEx:import rea = 'steven (0.00030s ). prince (0.00040s ). kavin (0.000330s ). 23.24.21'print(re.sub(r"(\(.*?\)\.)", "", a))Output:steven prince kavin 23.24.21 |
mod_wsgi fails when it is asked to read a file I'm having a puzzling issue where when my Flask application requires reading a file from disk, it fails with the following error:[Mon Aug 26 22:29:48 2013] [error] [client 67.170.62.218] (2)No such file or directory: mod_wsgi (pid=15678): Unable to connect to WSGI daemon process 'flaskapp' on '/var/run/apache2/wsgi.14164.5.1.sock' after multiple attempts.When using the Flask development server, or running an application that does not read files, it works fine.directory structure:/flaskapp /static style.css /templates index.html flaskapp.py flaskapp.wsgi config.jsonflaskapp.py:import flaskimport jsonapp = flask.Flask(__name__)#config = json.loads(open('config.json', 'r').read())@app.route('/')def index(): return "Hello World" #return flask.render_template('index.html')if __name__ == '__main__': app.run(host='0.0.0.0')flaskapp.wsgiimport syssys.path.append('/root/flaskapp')from flaskapp import app as applicationsites-available:<VirtualHost *:80> ServerName localhost WSGIDaemonProcess flaskapp user=www-data group=www-data threads=5 WSGIScriptAlias / /root/flaskapp/flaskapp.wsgi WSGIScriptReloading On <Directory /root/flaskapp> WSGIProcessGroup flaskapp WSGIApplicationGroup %{GLOBAL} Order deny,allow Allow from all </Directory></VirtualHost> | For the socket error see:http://code.google.com/p/modwsgi/wiki/ConfigurationIssues#Location_Of_UNIX_SocketsBTW, don't use relative path names for files you want to load either:http://code.google.com/p/modwsgi/wiki/ApplicationIssues#Application_Working_DirectoryAlthough commented out right now, loading config.json in your code as you are would also usually fail. |
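A hedged sketch of the two fixes those links point at (the socket directory is illustrative). In the Apache config, give mod_wsgi a socket directory the server can actually write to:

WSGISocketPrefix /var/run/wsgi

And in flaskapp.py, build the config path from the module's own location instead of relying on the working directory:

import os
config_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'config.json')
config = json.loads(open(config_path).read())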
How to make a python webserver that gives a requested csv file I want to create a webserver on Linux in Python so that I can use it to get CSV files from the Linux machine onto a Windows machine. I am quite unfamiliar with networking terminology, so I would greatly appreciate it if the answer is a little detailed. I don't want to create a website, just the webserver to serve the requested CSV file. | If you have the files on disk and just want to serve them over HTTP, you can use Python's built-in modules (run from the directory containing the files):
python3 -m http.server
in Python 3.x, or
python -m SimpleHTTPServer
for Python 2.x.
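If the files should be served from a specific directory without changing into it first, here is a minimal sketch for Python 3.7+ (the directory path is an assumption to adjust); from the Windows machine you can then open http://<linux-ip>:8000/yourfile.csv in a browser:
import http.server
import socketserver

PORT = 8000
DIRECTORY = "/home/user/csv_files"  # assumption: change to where the CSVs live

class CSVHandler(http.server.SimpleHTTPRequestHandler):
    def __init__(self, *args, **kwargs):
        # The "directory" keyword argument exists from Python 3.7 onward.
        super().__init__(*args, directory=DIRECTORY, **kwargs)

with socketserver.TCPServer(("", PORT), CSVHandler) as httpd:
    print("Serving {} on port {}".format(DIRECTORY, PORT))
    httpd.serve_forever()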
Count of most 'two words combination' popular Hebrew words in a pandas Dataframe with nltk I have a csv data file containing column 'notes' with satisfaction answers in Hebrew.I want to find the most popular words and popular '2 words combination', the number of times they show up and plotting them in a bar chart.My code so far:PYTHONIOENCODING="UTF-8" df= pd.read_csv('keep.csv', encoding='utf-8' , usecols=['notes'])words= df.notes.str.split(expand=True).stack().value_counts()This produce a list of the words with a counter but takes into account all the stopwords in Hebrew and don't produce '2 words combination' frequencies.I also tried this code and it's not what I'm looking for: top_N = 30 txt = df.notes.str.lower().str.replace(r'\|', ' ').str.cat(sep=' ') words = nltk.tokenize.word_tokenize(txt) word_dist = nltk.FreqDist(words) rslt = pd.DataFrame(word_dist.most_common(top_N), columns=['Word', 'Frequency']) print(rslt) print('=' * 60)How can I use nltk to do that? | In addition to what jezrael posted, I would like to introduce another hack of achieving this. Since you are trying to get individual as well as the two-word frequencies, you can also take advantage of the everygram function.Given a dataframe:import pandas as pddf = pd.DataFrame()df['notes'] = ['this is sentence one', 'is sentence two this one', 'sentence one was good']Get the one-word and two-word forms using everygrams(word_tokenize(x), 1, 2), to get the combinations of one, two, three word combinations, you can change 2 to 3, and so on. So in your case it should be:from nltk import everygrams, word_tokenizex = df['notes'].apply(lambda x: [' '.join(ng) for ng in everygrams(word_tokenize(x), 1, 2)]).to_frame()At this point you should see: notes0 [this, is, sentence, one, this is, is sentence...1 [is, sentence, two, this, one, is sentence, se...2 [sentence, one, was, good, sentence one, one w...You can now get the count by flattening the list and value_counts:import numpy as npflattenList = pd.Series(np.concatenate(x.notes))freqDf = flattenList.value_counts().sort_index().rename_axis('notes').reset_index(name = 'frequency')Final output: notes frequency0 good 11 is 22 is sentence 23 one 34 one was 15 sentence 36 sentence one 27 sentence two 18 this 29 this is 110 this one 111 two 112 two this 113 was 114 was good 1And now plotting the graph is easy:import matplotlib.pyplot as plt plt.figure()flattenList.value_counts().plot(kind = 'bar', title = 'Count of 1-word and 2-word frequencies')plt.xlabel('Words')plt.ylabel('Count')plt.show()Output: |
Using ConfigParser in Python - tests OK but wipes file once deployed I am running my own pool control system and I wanted to implement copying certain system parameters to a flat file for processing by a web interface I am working on. Since there are just a few entries I liked ConfigParser for the job. I built a test setup and it worked great. Here is that code:import ConfigParserconfig = ConfigParser.ConfigParser()get_pool_level_resistance_value = 870def read_system_status_values(file, section, system): config.read(file) current_status = config.get(section, system) print("Our current {} is {}.".format(system, current_status))def update_system_status_values(file, section, system, value): cfgfile = open(file, 'w') config.set(section, system, value) config.write(cfgfile) cfgfile.close() print("{} updated to {}".format(system, value))def read_test(): read_system_status_values("current_system_status", "system_status", "pool_level_resistance_value")def write_test(): update_system_status_values("current_system_status", "system_status", "pool_level_resistance_value", get_pool_level_resistance_value)read_test()write_test()read_test()This is my config file "current_system_status":[system_status]running_status = Truefill_control_manual_disable = Falsepump_running = Truepump_watts = 865pool_level_resistance_value = 680pool_level = MIDWAYpool_is_filling = Falsepool_is_filling_auto = Falsepool_is_filling_manual = Falsepool_current_temp = 61pool_current_ph = 7.2pool_current_orp = 400sprinklers_running = Falsepool_level_sensor_battery_voltage = 3.2pool_temp_sensor_battery_voltage = 3.2pool_level_sensor_time_delta = 32pool_temp_sensor_time_delta = 18When I run my test file I get this output:ssh://root@scruffy:22/usr/bin/python -u /root/pool_control/V3.2/system_status.pyOur current pool_level_resistance_value is 350.pool_level_resistance_value updated to 870Our current pool_level_resistance_value is 870.Process finished with exit code 0This is exactly as expected. However when I move it to my main pool_sensors.py module, anytime I run it I get the following error:Traceback (most recent call last): File "/root/pool_control/V3.2/pool_sensors.py", line 58, in update_system_status_values config.set(section, system, value) File "/usr/lib/python2.7/ConfigParser.py", line 396, in set raise NoSectionError(section)ConfigParser.NoSectionError: No section: 'system_status'Process finished with exit code 1I then debugged (using PyCharm) and as I was walking through the code as soon as it gets to this line in the code:cfgfile = open(file, 'w')it wipes out my file completely, and hence I get the NoSectionError. When I debug my test file and it gets to that exact same line of code, it opens the file and updates it as expected.Both the test file and the actual file are in the same directory on the same machine using the same version of everything. The code that opens and writes the files is an exact duplicate of the test code with the exception of a debug print statement in the "production" code. I tried various methods including:cfgfile = open(file, 'w')cfgfile = open(file, 'r')cfgfile = open(file, 'wb')but no matter which one I use, once I include the test code in my production file, as soon as it hits that line it completely wipes out the file as opposed to updating it like my test files does. 
Here is the pertenent lines of the code where I call it:import pooldb # Database informationimport mysql.connectorfrom mysql.connector import errorcodeimport timeimport notificationsimport loggingimport ConfigParserDEBUG = pooldb.DEBUGconfig = ConfigParser.ConfigParser()def read_system_status_values(file, section, system): config.read(file) current_status = config.get(section, system) if DEBUG: print("Our current {} is {}.".format(system, current_status))def update_system_status_values(file, section, system, value): cfgfile = open(file, 'w') config.set(section, system, value) config.write(cfgfile) cfgfile.close() if DEBUG: print("{} updated to {}".format(system,value))def get_pool_level_resistance(): """ Function to get the current level of our pool from our MySQL DB. """ global get_pool_level try: cnx = mysql.connector.connect(user=pooldb.username, password=pooldb.password, host=pooldb.servername, database=pooldb.emoncms_db) except mysql.connector.Error as err: if err.errno == errorcode.ER_ACCESS_DENIED_ERROR: logger.error( 'Database connection failure: Check your username and password') if DEBUG: print( "Database connection failure: Check your username and " "password") elif err.errno == errorcode.ER_BAD_DB_ERROR: logger.error('Database does not exist. Please check your settings.') if DEBUG: print("Database does not exist. Please check your settings.") else: logger.error( 'Unknown database error, please check all of your settings.') if DEBUG: print( "Unknown database error, please check all of your " "settings.") else: cursor = cnx.cursor(buffered=True) cursor.execute(("SELECT data FROM `%s` ORDER by time DESC LIMIT 1") % ( pooldb.pool_resistance_table)) for data in cursor: get_pool_level_resistance_value = int("%1.0f" % data) cursor.close() logger.info("Pool Resistance is: %s", get_pool_level_resistance_value) if DEBUG: print( "pool_sensors: Pool Resistance is: %s " % get_pool_level_resistance_value) print( "pooldb: Static critical pool level resistance set at (" "%s)." % pooldb.pool_resistance_critical_level) print( "pooldb: Static normal pool level resistance set at (%s)." % pooldb.pool_resistance_ok_level) cnx.close() print("We made it here with a resistance of (%s)" % get_pool_level_resistance_value) update_system_status_values("current_system_status", "system_status", "pool_level_resistance_value", get_pool_level_resistance_value) if get_pool_level_resistance_value >= pooldb.pool_resistance_critical_level: get_pool_level = "LOW" update_system_status_values("current_system_status", "system_status", "pool_level", get_pool_level) if DEBUG: print("get_pool_level_resistance() returned pool_level = LOW") else: if get_pool_level_resistance_value <= pooldb.pool_resistance_ok_level: get_pool_level = "OK" update_system_status_values("current_system_status", "system_status", "pool_level", get_pool_level) if DEBUG: print("get_pool_level_resistance() returned pool_level = OK") if DEBUG: print("Our Pool Level is %s." 
% get_pool_level) return get_pool_levelI suspect that it might have something to do with another import maybe conflicting with the open(file,'w').My main module is pool_fill_control.py and it has these imports:import pooldb # Configuration informationimport datetimeimport loggingimport osimport socketimport subprocessimport threadingimport timeimport RPi.GPIO as GPIO # Import GPIO Libraryimport mysql.connectorimport requestsimport serialfrom mysql.connector import errorcodeimport notificationsimport pool_sensorsimport ConfigParserWithin a function in that module, it then calls my pool_sensors.py module shown above using this line of code:get_pool_level = pool_sensors.get_pool_level_resistance()Any informtion or help as to why it works one way and not the other would be greatly appreciated. | Well, interesting enough after doing more research and looking line by line through the code I realized that the only thing different I was doing in my test file was reading my file first before writing to it. So I changed my code as follows:Old Code:def update_system_status_values(file, section, system, value): cfgfile = open(file, 'w') config.set(section, system, value) config.write(cfgfile) cfgfile.close() if DEBUG: print("{} updated to {}".format(system,value))New Code (thanks @ShadowRanger)def update_system_status_values(file, section, system, value): config.read(file) cfgfile = open('tempfile', 'w') config.set(section, system, value) config.write(cfgfile) cfgfile.close() os.rename('tempfile', file) if DEBUG: print("{} updated to {}".format(system, value))and now it works like a charm!!These are the steps now:1) Read it2) Open a temp file3) Update it4) Write temp file5) Close temp file6) Rename temp file over main file |
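The same read-modify-rename idea can be written a little more defensively with tempfile, so a crash mid-write can never truncate the real file. This is just a sketch with assumed file, section, and option names, not the original code:
import os
import tempfile
import ConfigParser

def update_system_status_values(path, section, option, value):
    config = ConfigParser.ConfigParser()
    config.read(path)                        # load existing sections before changing anything
    config.set(section, option, value)
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(path) or '.')
    with os.fdopen(fd, 'w') as tmp:
        config.write(tmp)
    os.rename(tmp_path, path)                # atomic replace on POSIX filesystems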
How to call function once in testcases in pytest If I execute the test case below I get this output:
sample
test_a
sample
test_b
Here sample() executes before every test method; I want to execute the function once at the start of the test case, not before every method. I want output like this:
sample
test_a
test_b
Ex:
def sample():
    print("sample")

class Test_example(APITestCase):
    def setUp(self):
        sample()

    def test_a(self):
        print("test_a")

    def test_b(self):
        print("test_b") | You want a class scoped fixture:
@pytest.fixture(scope="class")
def sample():
    print("sample")
But you need to explicitly use the fixture in your tests:
@pytest.mark.usefixtures("sample")
class Test_example(APITestCase):
    def test_x(self):
        pass
Note that you don't need to call the fixture, either. It's a feature of your testing suite and is automatically called by pytest.
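For reference, a self-contained sketch that puts the two pieces together with a plain test class (dropping APITestCase is an assumption to keep the example standalone); running pytest -s prints sample once, then test_a and test_b:
import pytest

@pytest.fixture(scope="class")
def sample():
    print("sample")          # runs once for the whole class

@pytest.mark.usefixtures("sample")
class TestExample:
    def test_a(self):
        print("test_a")

    def test_b(self):
        print("test_b")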