**Q: spaCy PhraseMatcher space-sensitivity issue**

```python
terms = ["Barack Obama", "Angela Merkel", "Washington, D.C."]
doc = nlp("German Chancellor Angela Merkel and US President Barack Obama "
          "converse in the Oval Office inside the White House in Washington, D.C.")
```

If I enter an extra space between the words "Barack Obama", the phrase matcher does not work, since it is space sensitive. Is there a way to overcome this space-sensitivity issue?

Operating System: Windows 8. Python version: 3.7. spaCy version: 2.2.3. Environment: Conda.

**A:**

You can collapse repeated spaces with a regular expression:

```python
import re
re.sub(' +', ' ', "barack  obama")
# 'barack obama'
```

Referring to the docs at https://spacy.io/api/phrasematcher:

```python
import en_core_web_sm
from spacy.matcher import PhraseMatcher

nlp = en_core_web_sm.load()
matcher = PhraseMatcher(nlp.vocab)
matcher.add("OBAMA", None, nlp("Barack Obama"))
doc = nlp("Barack Obama urges Congress to find courage to defend his healthcare reforms")
matches = matcher(doc)
# [(7732777389095836264, 0, 2)]
```

But when there are multiple spaces between the words (e.g. between "Barack" and "Obama"), the matcher returns an empty list:

```python
doc = nlp("Barack  Obama urges Congress to find courage to defend his healthcare reforms")
print(matcher(doc))
# []
```

To solve this, remove the extra spaces from the string before passing it to the model:

```python
string_ = 'Barack  Obama urges Congress to find courage to defend his healthcare reforms'
space_removed_string = re.sub(' +', ' ', string_)
doc = nlp(space_removed_string)
print(matcher(doc))
# [(7732777389095836264, 0, 2)]
```
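The normalization step can be wrapped in a small helper (a sketch, independent of spaCy) so every text is cleaned the same way before it reaches `nlp()`; note that `\s+` also collapses tabs and newlines, which may or may not be desired:

```python
import re

def normalize_spaces(text):
    """Collapse runs of whitespace to single spaces so phrase
    matching is not thrown off by accidental double spaces."""
    return re.sub(r'\s+', ' ', text).strip()

print(normalize_spaces("Barack  Obama urges  Congress"))
# Barack Obama urges Congress
```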
**Q: Textual parsing**

I am a newbie with Python and pandas, but I would like to parse multiple downloaded files (which have the same format). In every HTML file there is a section like the one below where the executives are mentioned:

```html
<DIV id=article_participants class="content_part hid">
<P>Redhill Biopharma Ltd. (NASDAQ:<A title="" href="http://seekingalpha.com/symbol/rdhl" symbolSlug="RDHL">RDHL</A>)</P>
<P>Q4 2014 <SPAN class=transcript-search-span style="BACKGROUND-COLOR: yellow">Earnings</SPAN> Conference <SPAN class=transcript-search-span style="BACKGROUND-COLOR: #f38686">Call</SPAN></P>
<P>February 26, 2015 9:00 AM ET</P>
<P><STRONG>Executives</STRONG></P>
<P>Dror Ben Asher - CEO</P>
<P>Ori Shilo - Deputy CEO, Finance and Operations</P>
<P>Guy Goldberg - Chief Business Officer</P>
```

Further in the files there is a section `DIV id=article_qanda class="content_part hid"` where executives like Ori Shilo are named, followed by an answer:

```html
<P><STRONG><SPAN class=answer>Ori Shilo</SPAN></STRONG></P>
<P>Good morning, Vernon. Both safety which is obvious and fertility analysis under the charter of the data and safety monitoring board will be - will be on up.</P>
```

So far I have only succeeded with an HTML parser for one individual, by name, to collect all their answers. I am not sure how to proceed and base the code on a variable list of executives. Does someone have a suggestion?

```python
import textwrap
import os
from bs4 import BeautifulSoup

directory = 'C:/Research syntheses - Meta analysis/SeekingAlpha/out'
for filename in os.listdir(directory):
    if filename.endswith('.html'):
        fname = os.path.join(directory, filename)
        with open(fname, 'r') as f:
            soup = BeautifulSoup(f.read(), 'html.parser')

print('{:<30} {:<70}'.format('Name', 'Answer'))
print('-' * 101)
for answer in soup.select('p:contains("Question-and-Answer Session") ~ strong:contains("Dror Ben Asher") + p'):
    txt = answer.get_text(strip=True)
    s = answer.find_next_sibling()
    while s:
        if s.name == 'strong' or s.find('strong'):
            break
        if s.name == 'p':
            txt += ' ' + s.get_text(strip=True)
        s = s.find_next_sibling()
    txt = ('\n' + ' ' * 31).join(textwrap.wrap(txt))
    print('{:<30} {:<70}'.format('Dror Ben Asher - CEO', txt), file=open("output.txt", "a"))
```

**A:**

To give some color to my original comment, I'll use a simple example. Let's say you've got some code that looks for the string "Hello, World!" in a file, and you want the line numbers aggregated into a list. Your first attempt might look like:

```python
# where I will aggregate my results
line_numbers = []
with open('path/to/file.txt') as fh:
    for num, line in enumerate(fh):
        if 'Hello, World!' in line:
            line_numbers.append(num)
```

This code snippet works perfectly well. However, it only checks 'path/to/file.txt' for 'Hello, World!'. Now you want to be able to change the string you are looking for. This is analogous to saying "I want to check for different executives". You could use a function to do this; a function adds flexibility to a piece of code. In this simple example, I would do:

```python
# Now I'm checking for a parameter string_to_search
# that I can change when I call the function
def match_in_file(string_to_search):
    line_numbers = []
    with open('path/to/file.txt') as fh:
        for num, line in enumerate(fh):
            if string_to_search in line:
                line_numbers.append(num)
    return line_numbers

# now I'm just calling that function here
line_numbers = match_in_file("Hello, World!")
```

You'd still have to make a code change, but this becomes much more powerful if you want to search for lots of strings. I could feasibly use this function in a loop if I wanted to (though I would do things a little differently in practice); for the sake of the example, I now have the power to do:

```python
list_of_strings = [
    "Hello, World!",
    "Python",
    "Functions",
]

for s in list_of_strings:
    line_numbers = match_in_file(s)
    print(f"Found {s} on lines ", *line_numbers)
```

Generalized to your specific problem, you'll want a parameter for the executive you want to search for. Your function signature might look like:

```python
def find_executive(soup, executive):
    for answer in soup.select(f'p:contains("Question-and-Answer Session") ~ strong:contains({executive}) + p'):
        # rest of code
```

You've already read in the soup, so you don't need to do that again. You only need to change the executive in your select statement. The reason you want a parameter for `soup` is so you aren't relying on variables in global scope.
**Q: why are pylint's error squiggle lines not showing in python visual studio code?**

I'm using VS Code for Python 3 on Ubuntu. Error squiggle lines have stopped working for Python (they work for other languages). I am using Microsoft's Python extension.

- VS Code v1.41.1
- Ubuntu v18.04

This is what I have tried:

- I thought maybe it was because I installed Anaconda, so I uninstalled it, but that didn't fix it.
- Then I re-installed VS Code after deleting its config from `.config/code`, but that didn't work either.
- I also set Python linting to true from the command palette.

It's still not showing error squiggle lines. Here are the Microsoft Python extension's contributions regarding linting (sorry for poor readability):

| Setting | Description | Default |
|---|---|---|
| | Whether to lint Python files. | true |
| python.linting.flake8Args | Arguments passed in. Each argument is a separate item in the array. | |
| python.linting.flake8CategorySeverity.E | Severity of Flake8 message type 'E'. | Error |
| python.linting.flake8CategorySeverity.F | Severity of Flake8 message type 'F'. | Error |
| python.linting.flake8CategorySeverity.W | Severity of Flake8 message type 'W'. | Warning |
| python.linting.flake8Enabled | Whether to lint Python files using flake8. | false |
| python.linting.flake8Path | Path to flake8; you can use a custom version of flake8 by modifying this setting to include the full path. | flake8 |
| python.linting.ignorePatterns | Patterns used to exclude files or folders from being linted. | .vscode/*.py,**/site-packages/**/*.py |
| python.linting.lintOnSave | Whether to lint Python files when saved. | true |
| python.linting.maxNumberOfProblems | Controls the maximum number of problems produced by the server. | 100 |
| python.linting.banditArgs | Arguments passed in. Each argument is a separate item in the array. | |
| python.linting.banditEnabled | Whether to lint Python files using bandit. | false |
| python.linting.banditPath | Path to bandit; you can use a custom version of bandit by modifying this setting to include the full path. | bandit |
| python.linting.mypyArgs | Arguments passed in. Each argument is a separate item in the array. | --ignore-missing-imports,--follow-imports=silent,--show-column-numbers |
| python.linting.mypyCategorySeverity.error | Severity of Mypy message type 'Error'. | Error |
| python.linting.mypyCategorySeverity.note | Severity of Mypy message type 'Note'. | Information |
| python.linting.mypyEnabled | Whether to lint Python files using mypy. | false |
| python.linting.mypyPath | Path to mypy; you can use a custom version of mypy by modifying this setting to include the full path. | mypy |
| python.linting.pycodestyleArgs | Arguments passed in. Each argument is a separate item in the array. | |
| python.linting.pycodestyleCategorySeverity.E | Severity of pycodestyle message type 'E'. | Error |
| python.linting.pycodestyleCategorySeverity.W | Severity of pycodestyle message type 'W'. | Warning |
| python.linting.pycodestyleEnabled | Whether to lint Python files using pycodestyle. | false |
| python.linting.pycodestylePath | Path to pycodestyle; you can use a custom version of pycodestyle by modifying this setting to include the full path. | pycodestyle |
| python.linting.prospectorArgs | Arguments passed in. Each argument is a separate item in the array. | |
| python.linting.prospectorEnabled | Whether to lint Python files using prospector. | false |
| python.linting.prospectorPath | Path to Prospector; you can use a custom version of prospector by modifying this setting to include the full path. | prospector |
| python.linting.pydocstyleArgs | Arguments passed in. Each argument is a separate item in the array. | |
| python.linting.pydocstyleEnabled | Whether to lint Python files using pydocstyle. | false |
| python.linting.pydocstylePath | Path to pydocstyle; you can use a custom version of pydocstyle by modifying this setting to include the full path. | pydocstyle |
| python.linting.pylamaArgs | Arguments passed in. Each argument is a separate item in the array. | |
| python.linting.pylamaEnabled | Whether to lint Python files using pylama. | false |
| python.linting.pylamaPath | Path to pylama; you can use a custom version of pylama by modifying this setting to include the full path. | pylama |
| python.linting.pylintArgs | Arguments passed in. Each argument is a separate item in the array. | |
| python.linting.pylintCategorySeverity.convention | Severity of Pylint message type 'Convention/C'. | Information |
| python.linting.pylintCategorySeverity.error | Severity of Pylint message type 'Error/E'. | Error |
| python.linting.pylintCategorySeverity.fatal | Severity of Pylint message type 'Fatal/F'. | Error |
| python.linting.pylintCategorySeverity.refactor | Severity of Pylint message type 'Refactor/R'. | Hint |
| python.linting.pylintCategorySeverity.warning | Severity of Pylint message type 'Warning/W'. | Warning |
| python.linting.pylintEnabled | Whether to lint Python files using pylint. | true |
| python.linting.pylintPath | Path to Pylint; you can use a custom version of pylint by modifying this setting to include the full path. | pylint |
| python.linting.pylintUseMinimalCheckers | Whether to run Pylint with minimal set of rules. | true |

`python.linting.pylintEnabled` is: `true`; `python.linting.pylintPath` is: `pylint`.

All the errors in Visual Studio Code's developer tools console:

```
console.ts:137 [Extension Host] Error Python Extension: 2020-01-18 18:35:53: Failed to serialize gatherRules for DATASCIENCE.SETTINGS [TypeError: Cannot convert object to primitive value
    at Array.join (<anonymous>)
    at Array.toString (<anonymous>)
    at /home/manik/.vscode/extensions/ms-python.python-2020.1.58038/out/client/extension.js:1:12901
    at Array.forEach (<anonymous>)
    at Object.l [as sendTelemetryEvent] (/home/manik/.vscode/extensions/ms-python.python-2020.1.58038/out/client/extension.js:1:12818)
    at C.sendSettingsTelemetry (/home/manik/.vscode/extensions/ms-python.python-2020.1.58038/out/client/extension.js:75:707093)
    at C.r.value (/home/manik/.vscode/extensions/ms-python.python-2020.1.58038/out/client/extension.js:1:87512)
    at Timeout._onTimeout (/home/manik/.vscode/extensions/ms-python.python-2020.1.58038/out/client/extension.js:1:86031)
    at listOnTimeout (internal/timers.js:531:17)
    at processTimers (internal/timers.js:475:7)]
console.ts:137 [Extension Host] Notification handler 'textDocument/publishDiagnostics' failed with message: Cannot read property 'connected' of undefined
console.ts:137 [Extension Host] (node:21707) [DEP0005] DeprecationWarning: Buffer() is deprecated due to security and usability issues. Please use the Buffer.alloc(), Buffer.allocUnsafe(), or Buffer.from() methods instead.
t.log @ console.ts:137
$logExtensionHostMessage @ mainThreadConsole.ts:39
_doInvokeHandler @ rpcProtocol.ts:398
_invokeHandler @ rpcProtocol.ts:383
_receiveRequest @ rpcProtocol.ts:299
_receiveOneMessage @ rpcProtocol.ts:226
(anonymous) @ rpcProtocol.ts:101
fire @ event.ts:581
fire @ ipc.net.ts:453
_receiveMessage @ ipc.net.ts:733
(anonymous) @ ipc.net.ts:592
fire @ event.ts:581
acceptChunk @ ipc.net.ts:239
(anonymous) @ ipc.net.ts:200
t @ ipc.net.ts:28
emit @ events.js:200
addChunk @ _stream_readable.js:294
readableAddChunk @ _stream_readable.js:275
Readable.push @ _stream_readable.js:210
onStreamRead @ internal/stream_base_commons.js:166
```

Output for Python in the output panel:

```
User belongs to experiment group 'AlwaysDisplayTestExplorer - control'
User belongs to experiment group 'ShowPlayIcon - start'
User belongs to experiment group 'ShowExtensionSurveyPrompt - enabled'
User belongs to experiment group 'DebugAdapterFactory - experiment'
User belongs to experiment group 'AA_testing - experiment'
> conda --version
> pyenv root
> python3.7 -c "import sys;print(sys.executable)"
> python3.6 -c "import sys;print(sys.executable)"
> python3 -c "import sys;print(sys.executable)"
> python2 -c "import sys;print(sys.executable)"
> python -c "import sys;print(sys.executable)"
> /usr/bin/python3.8 -c "import sys;print(sys.executable)"
> conda info --json
> conda env list
Starting Microsoft Python language server.
> conda --version
> /usr/bin/python3.8 ~/.vscode/extensions/ms-python.python-2020.1.58038/pythonFiles/interpreterInfo.py
> /usr/bin/python3.8 ~/.vscode/extensions/ms-python.python-2020.1.58038/pythonFiles/interpreterInfo.py
```

How do I get the squiggle lines to work again?

**A:**

In your `settings.json` file (search for settings.json in the command palette), declare the following:

```json
"python.linting.pylintEnabled": true,
"python.jediEnabled": false
```

If you just want the changes in your workspace, change the `settings.json` file in the `.vscode` folder. In the latest version of Visual Studio Code, the workspace does not register settings from the checkboxes, so you have to explicitly declare in `settings.json` which settings you want enabled for your workspace. Flake8 is not affected by this; Pylint and the Microsoft Python Language Server seem to be broken by it.

Side note: I got this solution from sys-temd's reply on github.com/microsoft/vscode-python/issues
**Q: Python 2.7 scoping issue with variable/method when placed inside a function**

I'm new to Python and notice this code works when written without being put inside a function:

```python
from selenium import webdriver

driver = lambda: None

def setup_browser():
    # unnecessary code removed
    driver = webdriver.Firefox()
    return driver

setup_browser()
driver.set_window_size(1000, 700)
driver.get("https://icanhazip.com/")
```

As shown above, I get this error: `AttributeError: 'function' object has no attribute 'set_window_size'`. My reading is that `driver` is not being updated before it is called. Why is this?

**A:**

The problem is that inside `setup_browser()` you're setting a local variable named `driver`, but you are not modifying the global variable `driver`. To do that, you need to use the `global` keyword:

```python
def setup_browser():
    global driver
    driver = webdriver.Firefox()
    return driver
```

However, overriding the `driver` global variable and returning it at the same time is redundant. It would be better not to define `driver` globally as a null function, but to assign it directly. E.g.:

```python
from selenium import webdriver

def setup_browser():
    driver = webdriver.Firefox()
    return driver

driver = setup_browser()
driver.set_window_size(1000, 700)
driver.get("https://icanhazip.com/")
```
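The scoping rule can be seen in isolation (a minimal sketch, unrelated to Selenium): assigning to a name inside a function creates a new local variable unless `global` is declared.

```python
driver = "original"

def rebind_locally():
    driver = "local"   # creates a new local name; the module-level driver is untouched

def rebind_globally():
    global driver
    driver = "global"  # rebinds the module-level name

rebind_locally()
assert driver == "original"  # unchanged
rebind_globally()
assert driver == "global"    # now rebound
```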
**Q: Bulbs python Connection to a remote TitanDB + Rexster**

I'm using Titan Graph DB + Cassandra. I start Titan as follows:

```
cd titan-cassandra-0.3.1
bin/titan.sh config/titan-server-rexster.xml config/titan-server-cassandra.properties
```

I have a Rexster shell that I can use to communicate with Titan + Cassandra above:

```
cd rexster-console-2.3.0
bin/rexster-console.sh
```

I'm attempting to model a network topology using Titan Graph DB, and I want to program it from my Python application using the bulbs package. My code to create the graph is:

```python
from bulbs.titan import Graph

self.g = Graph()
```

The Rexster console and Titan both run on the machine with IP address 192.168.65.93. If my Python application runs on the same machine, I use `self.g = Graph()`. What if I want to connect to the Titan and Rexster running on 192.168.65.93 from a Python application on 192.168.65.94? How do I do that? Can I pass some parameter (e.g. a config file) to `Graph()`? Where can I find it?

**A:**

Simply set the Titan graph URI in the Bulbs `Config` object:

```python
>>> from bulbs.titan import Graph, Config
>>> config = Config('http://192.168.65.93:8182/graphs/graph')
>>> g = Graph(config)
```

See Bulbs Config...

- http://bulbflow.com/docs/api/bulbs/config/
- https://github.com/espeed/bulbs/blob/master/bulbs/config.py

And Bulbs Graph (note Titan's `Graph` class is a subclass of Rexster's `Graph` class)...

- http://bulbflow.com/docs/api/bulbs/rexster/graph/
- https://github.com/espeed/bulbs/blob/master/bulbs/titan/graph.py

And I encourage you to read through the Bulbs Quickstart and other docs, because many of these questions are answered there...

- http://bulbflow.com/docs/
- http://bulbflow.com/quickstart/

The Quickstart uses `bulbs.neo4jserver` as an example, but since the Bulbs API is consistent regardless of the backend server you are using, the Quickstart examples are also relevant to Titan Server and Rexster. To adapt the Bulbs Quickstart for Titan or Rexster, simply change the Graph import from...

```python
>>> from bulbs.neo4jserver import Graph
>>> g = Graph()
```

...to...

```python
>>> from bulbs.titan import Graph
>>> g = Graph()
```

...or...

```python
>>> from bulbs.rexster import Graph
>>> g = Graph()
```
**Q: How to add Column to DataFrame while keeping dates correlated**

I am working with pandas and Matplotlib to chart some crypto transactions. The column I am working with is `Amount`: incoming transactions have a `+` in front of the number, and outgoing transactions have a `-`. The goal is to create a bar chart with the incoming and outgoing transactions.

What I think needs to be done is for the `Amount` column to be split by whether it contains a `+` or a `-`, with each type getting its own column correlated with the date of the transaction. For example, the +20,000 transaction on the first row would be filed under an "Incoming Transactions" column while staying in its original row (to keep the same date). I have attempted this, but based on my error I am having trouble creating the new column:

```python
parse_dates = ['Time']
df = pd.read_csv('DSb5CvAXhXnzFoxmiMaWpgxjDF6CfMK7h2.csv', index_col=0, parse_dates=parse_dates)
df2 = df.assign(Outgoing = df.loc[df["Amount"].str.contains('\-', regex=True)])
#outgoing_transactions = df.loc[df["Amount"].str.contains('\-', regex=True)]
#incoming_transactions = df.loc[df["Amount"].str.contains('\+', regex=True)]
df2
```

This is the error I receive: `ValueError: Wrong number of items passed 5, placement implies 1`

**A:**

You could use a regular expression to extract from the `Amount` column only the value relative to the Dogecoin. Then create a variable indicating the transaction's direction and use it with the `.dt.date` accessor to create the groups. Use `agg` with `sum` to add values within the same day and transaction type, followed by `unstack` to pivot the transaction type into two different columns. Use the columns created to plot the data with two separate `plt.bar` calls; this gives the effect of two bars on the same day, one for each transaction type.

`df` used as input:

```
                             Time                               Amount
0   2022-01-01 00:00:00.000000000   +7,965,429.87 DOGE (18,343.48 USD)
1   2022-01-01 07:30:54.545454545   -5,986,584.84 DOGE (15,601.86 USD)
2   2022-01-01 15:01:49.090909090     +999,749.16 DOGE (45,924.89 USD)
3   2022-01-01 22:32:43.636363636   +6,011,150.12 DOGE (70,807.26 USD)
4   2022-01-02 06:03:38.181818181     -564,115.79 DOGE (72,199.88 USD)
..                            ...                                  ...
95  2022-01-30 17:56:21.818181818   -6,454,722.96 DOGE (17,711.07 USD)
96  2022-01-31 01:27:16.363636363   -4,699,445.14 DOGE (27,956.03 USD)
97  2022-01-31 08:58:10.909090909     -3,701,587.0 DOGE (1,545.66 USD)
98  2022-01-31 16:29:05.454545454   -3,307,503.05 DOGE (55,276.5 USD)
99  2022-02-01 00:00:00.000000000   +9,636,199.77 DOGE (85,300.95 USD)

[100 rows x 2 columns]
```

```python
df['DOGE'] = df['Amount'] \
    .str.extract(r'([+-](?:\d+,?)+?(?:.\d+)?)\s') \
    .replace(",", "", regex=True).astype(float)

flow = df['DOGE'].apply(lambda x: "outcome" if x < 0 else "income")
grouped = df.groupby([df['Time'].dt.date, flow])
action = grouped.agg(amount=('DOGE', sum)).unstack()

if ('amount', 'income') in action:
    plt.bar(action.index, action[('amount', 'income')], color='g', label='income')
if ('amount', 'outcome') in action:
    plt.bar(action.index, action[('amount', 'outcome')], color='r', label='outcome')

plt.xticks(rotation=45)
plt.legend()
plt.show()
```
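If all that is needed is the two signed columns (without the DOGE-specific extraction), a shorter sketch with `Series.where` splits by sign while keeping every value on its original row; the sample amounts here are hypothetical:

```python
import pandas as pd

# toy frame standing in for the transaction data (hypothetical values)
df = pd.DataFrame({'Amount': ['+20,000', '-5,000', '+300']})

# strip the thousands separators and parse the signed numbers
values = df['Amount'].str.replace(',', '', regex=False).astype(float)

# one column per direction; rows of the other direction become NaN,
# so each value stays on its original row (and therefore its date)
df['Incoming'] = values.where(values > 0)
df['Outgoing'] = values.where(values < 0)
```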
**Q: Get Prior Row of Dataframe in For Loop Python**

This is what I believe to be a simple logic problem, but I have been working at it for a while and haven't figured it out, so hopefully someone can find the easy solution I have been missing. I would like to get the prior row of a dataframe using the following code, and have settled for the `(row-1)` attempt in the fourth line, but that obviously did not work:

```python
for row in players_at_start_of_period.iterrows():
    if (row[1]['PERIOD']):
        continue
    elif (row[1]['PERIOD'] - 2) > (row-1)[1]['PERIOD']:
        sub_map.update = {row[1]['TEAM_ID_1']: split_row(row[1]['TEAM_1_PLAYERS']),
                          row[1]['TEAM_ID_2']: split_row(row[1]['TEAM_2_PLAYERS'])}
    else:
        continue
```

What can I do to access the value from one iteration prior to the current value of `row`? Thanks!

**A:**

I am not sure how your data looks, but `iterrows()` already returns the index, so you could do something like this:

```python
import pandas as pd
import random

# read the data from the downloaded CSV file.
df = pd.read_csv('https://s3-eu-west-1.amazonaws.com/shanebucket/downloads/uk-500.csv')
# set a numeric id for use as an index for examples.
df['index'] = [random.randint(0, 1000) for x in range(df.shape[0])]

for index, row in df.iterrows():
    previous_name = ''
    if index > 0:
        previous_name = df.loc[index - 1]['first_name']
    print(previous_name, df.loc[index]['first_name'])
```
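An alternative to indexing with `df.loc[index - 1]` is `Series.shift`, which aligns each row with the previous row's value in a single vectorized step (a sketch; the names below are hypothetical stand-ins for the real data):

```python
import pandas as pd

# small stand-in frame; the names are hypothetical
df = pd.DataFrame({'first_name': ['Aleshia', 'Evan', 'France']})

# shift(1) pairs each row with the value from the row before it;
# the first row gets NaN since it has no predecessor
df['previous_name'] = df['first_name'].shift(1)
```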
**Q: fastest calculation of largest prime factor of 512 bit number in python**

I am simulating my crypto scheme in Python; I am a new user to it. p is a 512-bit number and I need to calculate its largest prime factor. I am looking for two things:

- the fastest code to do this large prime factorization;
- code that can take a 512-bit number as input and handle it.

I have seen implementations in other languages, but my whole code is in Python and this is the last point where I am stuck, so let me know if there is any implementation in Python. Kindly explain simply, as I am new to Python.

Edit (taken from OP's answer below):

```python
#!/usr/bin/env python

def highest_prime_factor(n):
    if isprime(n):
        return n
    for x in xrange(2, int(n ** 0.5) + 1):
        if not n % x:
            return highest_prime_factor(n / x)

def isprime(n):
    for x in xrange(2, int(n ** 0.5) + 1):
        if not n % x:
            return False
    return True

if __name__ == "__main__":
    import time
    start = time.time()
    print highest_prime_factor(1238162376372637826)
    print time.time() - start
```

The code above works (with a bit of delay) for 1238162376372637826, but extending it to

```
109026109913291424366305511581086089650628117463925776754560048454991130443047109026109913291424366305511581086089650628117463925776754560048454991130443047
```

makes Python go crazy. Is there any way so that, just like above, I can have it calculated in no time?

**A:**

For a Python-based solution, you might want to look at pyecm. On a system with gmpy also installed, pyecm found the following factors:

```
101, 521, 3121, 9901, 36479, 300623, 53397071018461, 1900381976777332243781
```

There is still a 98-digit unfactored composite:

```
60252507174568243758911151187828438446814447653986842279796823262165159406500174226172705680274911
```

Factoring this remaining composite using ECM may not be practical.

Edit: After a few hours, the remaining factors are

```
6060517860310398033985611921721
```

and

```
9941808367425935774306988776021629111399536914790551022447994642391
```
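Between naive trial division and full ECM tooling, pure Python can go a long way with Pollard's rho plus a Miller-Rabin primality test. This is a sketch, not the accepted answer's method: it handles numbers whose factors are not all enormous, whereas a worst-case 512-bit semiprime still requires tools like pyecm or GNFS.

```python
import math
import random

_MR_BASES = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)

def is_prime(n):
    """Miller-Rabin; deterministic for n < 3.3e24 with these fixed bases."""
    if n < 2:
        return False
    for p in _MR_BASES:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in _MR_BASES:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def pollard_rho(n):
    """Find a nontrivial factor of an odd composite n (Floyd cycle detection)."""
    while True:
        c = random.randrange(1, n)
        f = lambda x: (x * x + c) % n
        x = y = random.randrange(2, n)
        d = 1
        while d == 1:
            x = f(x)
            y = f(f(y))
            d = math.gcd(abs(x - y), n)
        if d != n:          # d == n means this c failed; retry with another
            return d

def prime_factors(n):
    """All prime factors of n (with multiplicity), via recursive splitting."""
    if n == 1:
        return []
    if is_prime(n):
        return [n]
    if n % 2 == 0:
        return [2] + prime_factors(n // 2)
    d = pollard_rho(n)
    return prime_factors(d) + prime_factors(n // d)

def largest_prime_factor(n):
    return max(prime_factors(n))

print(largest_prime_factor(600851475143))
# 6857
```

On the question's 19-digit example this finishes essentially instantly; on a hard 512-bit number it will stall, which is why the answer points at ECM.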
**Q: User input filename when reading in netCDF files in Python**

I have a set of soil moisture data files from 1953 to 2014, all of the form `cpc_soil_YYYY.nc` (where YYYY is one of those years). Is there a way to ask for user input of which year the user would like to view, and have my program open the corresponding file? I currently have to manually change the year within gedit. I wrote functions to grab each variable (soil moisture as a function of time, lat, lon):

```python
import netCDF4 as nc
import numpy as np
import numpy.ma as ma
import csv as csv

fid = nc.MFDataset('/data/reu_data/soil_moisture/cpc_soil_1957.nc', 'r')
fid.close()

ncf = '/data/reu_data/soil_moisture/cpc_soil_1957.nc'

def read_var(ncfile, varname):
    fid = nc.Dataset(ncfile, 'r')
    out = fid.variables[varname][:]
    fid.close()
    return out

time = read_var(ncf, 'time')
lat = read_var(ncf, 'lat')
lon = read_var(ncf, 'lon')
soil = read_var(ncf, 'soilw')
```

**A:**

You can use `input()` to ask the user to enter the year, then use that to generate the file path:

```python
...
year = input("Enter year: ")
filename = '/data/reu_data/soil_moisture/cpc_soil_%s.nc' % (year,)
fid = nc.MFDataset(filename, 'r')
fid.close()
...
```

You should do error checking to make sure the user-entered value is actually a year and falls within the range of your data. You can read more on input/output in Python here.
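The suggested error checking can be wrapped in a small helper (a sketch; the path template and 1953–2014 range are taken from the question):

```python
def soil_file(year, first=1953, last=2014):
    """Build the cpc_soil path for a given year, validating the range."""
    year = int(year)  # raises ValueError for non-numeric input
    if not first <= year <= last:
        raise ValueError('year must be between %d and %d' % (first, last))
    return '/data/reu_data/soil_moisture/cpc_soil_%d.nc' % year

print(soil_file('1957'))
# /data/reu_data/soil_moisture/cpc_soil_1957.nc
```

The returned string can then be passed straight to `nc.MFDataset(...)` in place of the hard-coded filename.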
**Q: divide list and generate series of new lists, one from each list and rest into other**

I have three lists and want to generate two new lists: one "chosen" element from each list, with the rest going into the other list. Can anyone please tell me how it can be done?

```python
list1 = [12, 25, 45]
list2 = [14, 69]
list3 = [54, 98, 68, 78, 48]
```

I want to print the output like:

```
chosen1=[12,14,54], rest1=[25,45,69,98,68,78,48]
chosen2=[12,14,98], rest2=[25,45,69,54,68,78,48]
```

and so on (every possible combination for the chosen list). I have tried to write this, but I don't know how to proceed:

```python
list1 = [12, 25, 45]
list2 = [14, 69]
list3 = [54, 98, 68, 78, 48]
for i in xrange(list1[0], list1[2]):
    for y in xrange(list2[0], list2[1]):
        for z in xrange(list3[0], list3[4]):
            for a in xrange(chosen[0], [2]):
                chosen1.append()
            for a in xrange(chosen[0], [7]):
                rest1.append()
print rest1
print chosen1
```

**A:**

`itertools.product` generates all combinations of selecting one item each out of several sets of items:

```python
import itertools

list1 = [12, 25, 45]
list2 = [14, 69]
list3 = [54, 98, 68, 78, 48]

for i, (a, b, c) in enumerate(itertools.product(list1, list2, list3), 1):
    # Note: Computing rest this way will *not* work if there are duplicates
    # in any of the lists.
    rest1 = [n for n in list1 if n != a]
    rest2 = [n for n in list2 if n != b]
    rest3 = [n for n in list3 if n != c]
    rest = ','.join(str(n) for n in rest1 + rest2 + rest3)
    print('chosen{0}=[{1},{2},{3}], rest{0}=[{4}]'.format(i, a, b, c, rest))
```

Output:

```
chosen1=[12,14,54], rest1=[25,45,69,98,68,78,48]
chosen2=[12,14,98], rest2=[25,45,69,54,68,78,48]
chosen3=[12,14,68], rest3=[25,45,69,54,98,78,48]
chosen4=[12,14,78], rest4=[25,45,69,54,98,68,48]
chosen5=[12,14,48], rest5=[25,45,69,54,98,68,78]
chosen6=[12,69,54], rest6=[25,45,14,98,68,78,48]
chosen7=[12,69,98], rest7=[25,45,14,54,68,78,48]
chosen8=[12,69,68], rest8=[25,45,14,54,98,78,48]
chosen9=[12,69,78], rest9=[25,45,14,54,98,68,48]
chosen10=[12,69,48], rest10=[25,45,14,54,98,68,78]
chosen11=[25,14,54], rest11=[12,45,69,98,68,78,48]
chosen12=[25,14,98], rest12=[12,45,69,54,68,78,48]
chosen13=[25,14,68], rest13=[12,45,69,54,98,78,48]
chosen14=[25,14,78], rest14=[12,45,69,54,98,68,48]
chosen15=[25,14,48], rest15=[12,45,69,54,98,68,78]
chosen16=[25,69,54], rest16=[12,45,14,98,68,78,48]
chosen17=[25,69,98], rest17=[12,45,14,54,68,78,48]
chosen18=[25,69,68], rest18=[12,45,14,54,98,78,48]
chosen19=[25,69,78], rest19=[12,45,14,54,98,68,48]
chosen20=[25,69,48], rest20=[12,45,14,54,98,68,78]
chosen21=[45,14,54], rest21=[12,25,69,98,68,78,48]
chosen22=[45,14,98], rest22=[12,25,69,54,68,78,48]
chosen23=[45,14,68], rest23=[12,25,69,54,98,78,48]
chosen24=[45,14,78], rest24=[12,25,69,54,98,68,48]
chosen25=[45,14,48], rest25=[12,25,69,54,98,68,78]
chosen26=[45,69,54], rest26=[12,25,14,98,68,78,48]
chosen27=[45,69,98], rest27=[12,25,14,54,68,78,48]
chosen28=[45,69,68], rest28=[12,25,14,54,98,78,48]
chosen29=[45,69,78], rest29=[12,25,14,54,98,68,48]
chosen30=[45,69,48], rest30=[12,25,14,54,98,68,78]
```
**Q: User Inputs and Conditional statements along with Code Executional delay and Loops**

This is my Mad Libs project. I am a beginner, so I tried to mix everything I've learned in Python: user input, variables, conditional statements, etc. Unfortunately, it doesn't work and I can't identify the problem. To me it all looks good. I hope you can help me; please bear with me, I am still a noob.

```python
import time

name = input('Hello! What is your name? ')
print('Hi! ' + name + ' I\'m Sean. Nice to meet you!')
time.sleep(2)

def main():
    ans = input('\'Wanna play a game? ').upper()
    if ans='YES':
        print('Great! Lets get started')
        time.sleep(2)
        print('The called Mad Libs. \nThe mechanics is simple, your going to give words according to its category \nand your answer will be added to my script I made beforehand.')
        def main2()
            ans2=input('Are you ready? ').lower()
        if ans2=='yes':
            Vegetable = input('Vegetable: ')
            Superhero = input('Superhero: ')
            Celebrity = input('Celebrity: ')
            Country = input('Country: ')
            Time_of_day = input(r'Time of day (ex. 11:11): ')
            Number = input('Number: ')
            Vegetable2 = input('Another Vegetable: ')
            Childhood_toy = input('Childhood Toy: ')
            Liquid = input(r'Liquid (ex. water,ketchup,etc.): ')
            Joke = input('Joke Quote: ')
            Emotion = input('Emotion: ')
            Unusual_pet = input('A unusual pet: ')
            Plant = input('A plant: ')
            Body_part = input('A body part: ')
            Furniture = input('Furniture: ')
            Number2 = input('Another number: ')
            Animal = input('Another animal: ')
            Food = input('Food: ')
            Catchphrase = input('A Catchphrase: ')
        elif ans2=='no':
            print('Aww! Maybe next time.')
        else:
            print('I didn\'t quite understand that, come again?').lower()
            main2()
    elif ans=='NO':
        print('Aww! Maybe next time.')
        time.sleep(3)
        exit()
    else:
        print('I didn\'t quite understand that, come again?').lower()
        main()

main()
```

**A:**

```python
import time

name = input('Hello! What is your name? ')
print('Hi! ' + name + ' I\'m Sean. Nice to meet you!')
time.sleep(2)

def main():
    ans = input('\'Wanna play a game? ').upper()
    if ans == 'YES':  # Was missing an equal sign
        print('Great! Lets get started')
        time.sleep(2)
        print(
            'The called Mad Libs. \nThe mechanics is simple, your going to give words according to its category \nand your answer will be added to my script I made beforehand.')

        def main2():  # Was missing colon
            ans2 = input('Are you ready? ').lower()
            # Everything below was not indented
            if ans2 == 'yes':
                Vegetable = input('Vegetable: ')
                Superhero = input('Superhero: ')
                Celebrity = input('Celebrity: ')
                Country = input('Country: ')
                Time_of_day = input(r'Time of day (ex. 11:11): ')
                Number = input('Number: ')
                Vegetable2 = input('Another Vegetable: ')
                Childhood_toy = input('Childhood Toy: ')
                Liquid = input(r'Liquid (ex. water,ketchup,etc.): ')
                Joke = input('Joke Quote: ')
                Emotion = input('Emotion: ')
                Unusual_pet = input('A unusual pet: ')
                Plant = input('A plant: ')
                Body_part = input('A body part: ')
                Furniture = input('Furniture: ')
                Number2 = input('Another number: ')
                Animal = input('Another animal: ')
                Food = input('Food: ')
                Catchphrase = input('A Catchphrase: ')
            elif ans2 == 'no':
                print('Aww! Maybe next time.')
            else:
                print('I didn\'t quite understand that, come again?')  # There should be no .lower() on a print function
                main2()
            # Everything above was not indented

        main2()  # You only defined main2(), you never actually used it
    elif ans == 'NO':
        print('Aww! Maybe next time.')
        time.sleep(3)
        exit()
    else:
        print('I didn\'t quite understand that, come again?')  # There should be no .lower() on a print function
        main()

main()
```
Django form - update boolean field to true I'm trying to up update a boolean field but I got this issue: save() got an unexpected keyword argument 'update_fields'.I got different issue: at the beginning when seller complete the form it was creating a new channel. I just want to update the current channel.Logic= consumer create a channel with a seller (channel is not active) -> if seller wants to launch it. he has a form to make it true and launch it.models:class Sugargroup(models.Model): consumer = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE, related_name="sugargroup_consumer", blank=True, null=True) seller = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE, related_name="sugargroup_seller") is_active = models.BooleanField('Make it happen', default=False) slug = models.SlugField(editable=False, unique=True)views:@method_decorator(login_required(login_url='/cooker/login'),name="dispatch")class CheckoutDetail(generic.DetailView, FormMixin): model = Sugargroup context_object_name = 'sugargroup' template_name = 'checkout_detail.html' form_class = CreateSugarChatForm validation_form_class = LaunchSugargroupForm def get_context_data(self, **kwargs): context = super(CheckoutDetail, self).get_context_data(**kwargs) context['form'] = self.get_form() context['validation_form'] = self.get_form(self.validation_form_class) #self.validation_form_class() return context def form_valid(self, form): if form.is_valid(): form.instance.sugargroup = self.object form.instance.user = self.request.user form.save() return super(CheckoutDetail, self).form_valid(form) else: return super(CheckoutDetail, self).form_invalid(form) def form_valide(self, validation_form): if validation_form.is_valid(): validation_form.instance.sugargroup = self.object #validation_form.instance.seller = self.request.user validation_form.save(update_fields=["is_active"]) return super(CheckoutDetail, self).form_valid(validation_form) else: return super(CheckoutDetail, 
self).form_invalid(validation_form) def post(self,request,*args,**kwargs): self.object = self.get_object() form = self.get_form() validation_form = self.validation_form_class(request.POST) #validation_form = self.get_form(self.validation_form_class) if form.is_valid(): return self.form_valid(form) elif validation_form.is_valid(): return self.form_valide(validation_form) else: return self.form_valid(form) def get_success_url(self): return reverse('checkout:checkout_detail',kwargs={"slug":self.object.slug})formsclass LaunchSugargroupForm(forms.ModelForm): def __init__(self,*args,**kwargs): super(LaunchSugargroupForm, self).__init__(*args,**kwargs) self.helper = FormHelper() self.helper.form_method="post" self.helper.layout = Layout( Field("is_active",css_class="single-input"), ) self.helper.add_input(Submit('submit','Launch the channel',css_class="btn btn-primary single-input textinput textInput form-control")) class Meta: model = Sugargroup fields = [ 'is_active' ] | Try this:validation_form.is_active = Truevalidation_form.save() |
Python / Pandas / PuLP optimization on a column I'm trying to optimize a column of data in a Pandas dataframe. I've looked through past posts but couldn't find one that addressed the issue of optimizing values in a column in a dataframe. This is my first post and relatively new to coding so apologizes upfront. Below is the code I'm usingfrom pandas import DataFrameimport numpy as npfrom pulp import *heading = [184, 153, 140, 122, 119]df = DataFrame (heading, columns=['heading'])df['speed'] = 50 df['ratio'] = df.speed/df.headingconditions = [ (df['ratio'] < 0.1), (df['ratio'] >= 0.1 ) & (df['ratio'] < 0.2), (df['ratio'] >= 0.2 ) & (df['ratio'] < 0.3), (df['ratio'] >= 0.3 ) & (df['ratio'] < 0.4), (df['ratio'] > 0.4 )]choices = [3, 1, 8, 5, 2]df['choice'] = np.select(conditions, choices)df['final_column'] = df.choice * df.headingprint(np.sum(df.final_column))I use np.select to search through 'conditions' and return the appropriate 'choices'. This is functioning like a vlookup I use in excel.I'm trying to get PuLP or any other appropriate optimization tool or maybe even just a loop to find the optimal values for df.speed (which I start with temporary value of 50) to maximize the sum of values in the 'final_column.' Below is the code I've tried but its not working.prob = LpProblem("Optimal Values",LpMaximize)speed_vars = LpVariable("Variable",df.speed,lowBound=0,cat='Integer')prob += lpSum(df.new_column_final)prob.solve()Below is the error I'm getting:speed_vars = LpVariable("Variable",df.speed,lowBound=0,cat='Integer')TypeError: init() got multiple values for argument 'lowBound'Thanks so much for your help. Any help would be appreciated! | First of all the specific error message you are getting:TypeError: __init__() got multiple values for argument 'lowBound'In python when calling a function you can pass arguments either by 'position' - which means the order in which you pass the arguments tells the function what each of them is - or by naming them. 
If you look up the documentation for the pulp.LpVariable method you'll see the second position argument is 'lowbound' which you then also pass as a named argument - hence the error message.I think you might also be slighly misunderstanding how a dataframe works. It is not like excel where you set a 'formula' in a column and it stays updated to that formula as other elements on that row change. You can assign values to columns but if the input data change - the cell would only be updated if that bit of code was run again.In terms of solving your problem - I'm not convinced I've understood what you're trying to do but I've understood the following.We want to select values of df['speed'] to maximise the sum-product of heading and choices columnsThe value of the choices column depends on the ratio of speed to heading (as per the given 5 ranges)Heading column is fixedBy inspection the optimum will be achieved by setting all of the speeds so that the ratios are in the [0.2 - 0.3] range, and where they fall in that range doesn't matter. Code to do this in PuLP within pandas dataframes below. 
It relised on using binary variables to keep track of which range the ratios fall in.The syntax is a little awkward though - I'd recommend doing the optimisation completely outside of dataframes and just loading results in at the end - using the LpVariable.dicts method to create arrays of variables instead.from pandas import DataFrameimport numpy as npfrom pulp import *headings = [184.0, 153.0, 140.0, 122.0, 119.0]df = DataFrame (headings, columns=['heading'])df['speed'] = 50max_speed = 500.0max_ratio = max_speed / np.min(headings)df['ratio'] = df.speed/df.headingconditions_lb = [0, 0.1, 0.2, 0.3, 0.4]conditions_ub = [0.1, 0.2, 0.3, 0.4, max_speed / np.min(headings)]choices = [3, 1, 8, 5, 2]n_range = len(choices)n_rows = len(df)# Create primary ratio variables - one for each variable:df['speed_vars'] = [LpVariable("speed_"+str(j)) for j in range(n_rows)]# Create auxilary variables - binaries to control# which bit of range each speed is indf['aux_vars'] = [[LpVariable("aux_"+str(i)+"_"+str(j), cat='Binary') for i in range(n_range)] for j in range(n_rows)]# Declare problemprob = LpProblem("max_pd_column",LpMaximize)# Define objective functionprob += lpSum([df['aux_vars'][j][i]*choices[i]*headings[j] for i in range(n_range) for j in range(n_rows)])# Constrain only one range to be selected for each rowfor j in range(n_rows): prob += lpSum([df['aux_vars'][j][i] for i in range(n_range)]) == 1# Constrain the value of the speed by the ratio range selectedfor j in range(n_rows): for i in range(n_range): prob += df['speed_vars'][j]*(1.0/df['heading'][j]) <= \ conditions_ub[i] + (1-df['aux_vars'][j][i])*max_ratio prob += df['speed_vars'][j]*(1.0/df['heading'][j]) >= \ conditions_lb[i]*df['aux_vars'][j][i]# Solve problem and print resultsprob.solve()# Dislay the optimums of each var in problemfor v in prob.variables (): print (v.name, "=", v.varValue)# Set values in dataframe and print:df['speed_opt'] = [df['speed_vars'][j].varValue for j in range(n_rows)]df['ratio_opt'] = 
df.speed_opt/df.headingprint(df)The last bit of which prints out: heading speed_vars b spd_opt rat_opt0 184.0 speed_0 [b_0_0, b_1_0, b_2_0, b_3_0, b_4_0] 36.8 0.21 153.0 speed_1 [b_0_1, b_1_1, b_2_1, b_3_1, b_4_1] 30.6 0.22 140.0 speed_2 [b_0_2, b_1_2, b_2_2, b_3_2, b_4_2] 28.0 0.23 122.0 speed_3 [b_0_3, b_1_3, b_2_3, b_3_3, b_4_3] 24.4 0.24 119.0 speed_4 [b_0_4, b_1_4, b_2_4, b_3_4, b_4_4] 23.8 0.2 |
Access "upload_to" of a Model's FileFIeld in Django? I have a Model with a FileField like that:class Video(MediaFile): """ Model to store Videos """ file = FileField(upload_to="videos/") [...]I'm populating the DB using a cron script.Is it possible to somehow access the "upload_to" value of the model?I could use a constant, but that seems messy. Is there any way to access it directly? | You can access this with:Video.file.field.upload_to # 'videos/'or through the _meta object:Video._meta.get_field('file').upload_to # 'videos/'The upload_to=… parameter [Django-doc] can however also be given a function that takes two parameters, and thus in that case it will not return a string, but a reference to that function. |
Which is the maximum number of variables that Gekko library support? I am trying to solve a problem that has more than one million variables with the Gekko library for python? Does anyone know how many variables can manage that library? | Gekko is not limited by a certain number of variables. Each mode (IMODE) takes a base model and then applies it to each time point (for IMODE>4) or for every data set (IMODE=2). The base model does have a limit of 10,000,000 but that is mostly just as a large upper bound. A problem with 10M simultaneous differential equations x 100 time points would be 1,000,000,000 (1B) variables and this is allowed in Gekko. The developers can increase the 10M limit if a user ever runs into that. It is there as a check just in case someone has an error in their model and didn't intend to spawn a very large problem. Here is a case study that shows the scale-up comparison with number of differential equations for simulation with MATLAB (ode15s), SciPy (ODEINT), and APMonitor (engine for Gekko).The results show that APMonitor / Gekko isn't as fast for small problems but has good scale-up potential for larger scale problems. The plot only shows up to 3000 simultaneous differential equations. Gekko's current arbitrary limit is set to 10M. |
py2exe + pywin32 MemoryLoadLibrary import fail when bundle_files=1 I have created a simple program which uses pywin32. I want to deploy it as an executable, so I py2exe'd it. I also didn't want a huge amount of files, so I set bundle_files to 1 (meaning bundle everything together). However, when I attempt running it, I get:Traceback (most recent call last): File "pshelper.py", line 4, in <module> File "zipextimporter.pyc", line 82, in load_module File "win32.pyc", line 8, in <module> File "zipextimporter.pyc", line 98, in load_moduleImportError: MemoryLoadLibrary failed loading win32ui.pydIn my setup script, I tried doing packages=["win32ui"] and includes=["win32ui"] as options, but that didn't help. How can I get py2exe to include win32ui.pyd?I don't have this problem if I don't ask it to bundle the files, so I can do that, for now, but I'd like to know how to get it to work properly. | The work-around that has worked best so far is to simply re-implement the pywin32 functions using ctypes. That doesn't require another .pyd or .dll file so the issue is obviated. |
Pandas behaviour on stack Let's suppose I have:

    ID  A1  B1  A2  B2
    1   3   4   5   6
    2   7   8   9   10

I want to use pandas stack to achieve something like this:

    ID  A   B
    1   3   4
    1   5   6
    2   7   8
    2   9   10

but what I got is:

    ID  A   B
    1   3   4
    2   7   8
    1   5   6
    2   9   10

This is what I am using: df.stack().reset_index(). Is it possible to achieve something like this using stack? The append() method in pandas does this, but if possible I want to achieve it using pandas stack(). Any idea? | You can use pd.wide_to_long:

    pd.wide_to_long(df, ['A', 'B'], 'ID', 'value', sep='', suffix='.+')\
        .reset_index()\
        .sort_values('ID')\
        .drop('value', axis=1)

Output:

       ID  A   B
    0   1  3   4
    2   1  5   6
    1   2  7   8
    3   2  9  10
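To answer the literal question — stack can also produce this layout if the flat column names are first split into a MultiIndex on the columns. A sketch (the splitting assumes one-letter prefixes as in the example):

```python
import pandas as pd

df = pd.DataFrame({"ID": [1, 2], "A1": [3, 7], "B1": [4, 8],
                   "A2": [5, 9], "B2": [6, 10]})
tmp = df.set_index("ID")
# Turn "A1", "B1", ... into a (letter, suffix) MultiIndex on the columns
tmp.columns = pd.MultiIndex.from_tuples(
    [(c[0], c[1:]) for c in tmp.columns], names=[None, "pair"])
out = (tmp.stack("pair")                      # suffix level becomes rows
          .sort_index(level=["ID", "pair"])   # keep rows of each ID together
          .reset_index()
          .drop(columns="pair"))
print(out)
```

This gives the same ID/A/B layout as wide_to_long; which one reads better is a matter of taste.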
How to loop through a percentage investment? I'm working on this simple task where a financial advisor suggests to invest in a stock fund that is guaranteed to increase by 3 percent over the next five years. Here's my code:while True: investment = float(input('Enter your initial investment: ')) if 1000 <= investment <= 100000: break else: print("Investment must be between $1,000 and $100,000")#Annual interest rate apr = 3 / 100amount = investmentfor yr in range(5): amount = (amount) * (1. + apr) print('After {:>2d} year{} you have: $ {:>10.2f}'.format(yr, 's,' if yr > 1 else ', ', amount)) | You got it. The only problem is that apr is runing integer math. Use floating point numbers instead, so apr does not round to zero:apr = 3.0 / 100.0By changing that line your program will probably workThis is the whole code changes (as requested in comments):while True: investment = float(input('Enter your initial investment: ')) if 1000 <= investment <= 100000: break else: print("Investment must be between $1,000 and $100,000")#Annual interest rate apr = 3.0 / 100.0amount = investmentfor yr in range(5): amount = (amount) * (1. + apr) print('After {:>2d} year{} you have: $ {:>10.2f}'.format(yr, 's,' if yr > 1 else ', ', amount))The output I get is:Enter your initial investment: 1002 After 0 year, you have: $ 1032.06After 1 year, you have: $ 1063.02After 2 years, you have: $ 1094.91After 3 years, you have: $ 1127.76After 4 years, you have: $ 1161.59 |
Python variable memory management I just wrote this primitive script:

    from sys import getsizeof as g
    x = 0
    s = ''
    while s != 'q':
        x = (x << 8) + 0xff
        print(str(x) + " [" + str(g(x)) + "]")
        s = input("Enter to proceed, 'q' to quit ")

The output is as follows - and quite surprising, as I perceive it:

    255 [28]
    65535 [28]
    16777215 [28]
    4294967295 [32]
    1099511627775 [32]
    281474976710655 [32]
    72057594037927935 [32]
    18446744073709551615 [36]

And so on. My point is: it seems that the variable x has some sort of 'overhead' with a size of 25 bytes. Where does this come from? Thanks in advance for any attempt to help me. | A python int is an object, so it's not surprising that it has a small overhead. If this overhead starts to become meaningful for you then this implies you're manipulating substantial collections of ints, which suggests to me that the numpy library is probably something you should consider.
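The answer's numpy suggestion is easy to quantify: a Python int carries an object header (type pointer, reference count, digit count) around its payload, while a numpy array stores one fixed-width value per element with a single shared header. A small sketch:

```python
import sys
import numpy as np

x = 255
print(sys.getsizeof(x))        # ~28 on 64-bit CPython: object header + one digit

# A million values as separate Python ints would cost roughly 28 bytes each;
# as a numpy array the payload is exactly 8 bytes per value:
a = np.arange(1_000_000, dtype=np.uint64)
print(a.nbytes)                # 8000000
```

So the "overhead" the question observes is per-object bookkeeping, and it is amortized away as soon as many values share one array.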
Django, RestAPI, Microsoft Azure, website, virtual machine, ubuntu I have developed a website and REST api using Django and Django REST Framework. On local machine they are working perfectly so my next step is trying to publish it on remote server. I chose Microsoft Azure.I created a virtual machine with Ubuntu server 18.04 and installed everything to run my project there. While I run it locally on virtual machine it's working perfectly, at localhost:8000; my website and rest-api are showing.Now I want it to publish to the world so it can be accessed under the IP of my virtual machine or some different address so everybody can access it. I was looking through azure tutorials on Microsoft website and google, but i cannot find anything working.I don't want to use their Web App solution or Windows Server. It needs to be working with Ubuntu Virtual machine from Azure. Is it possible to do and if yes then how? | (Optional) Set your web application listen publicly on 80 port for http or 443 port for https. You may refer to: About IP 0.0.0.0 in DjangoIn Ubuntu OS, if firewall is enabled, you need to open port 80 and 443, so that others can access your server.In Azure portal, if NSG is enabled, you need to add inbound rules for 80 and 433. (Optional) Buy a domain, and add an A record to your VM's IP. In this way, people would be able to access your website via friendly URL. |
python: How to pass a parameter into a SQL query I have a function with a parameter. This parameter must be substituted into a SQL query, which is then executed by pandasql. Here is my function:

    def getPolypsOfPaitentBasedOnSize(self, size):
        smallPolypQuery = """
            select * from polyp
            where polyp.`Size of Sessile in Words` == """ + size
        smallPolyps = ps.sqldf(smallPolypQuery)

When I run the code, I get the error below:

    raise PandaSQLException(ex)
    pandasql.sqldf.PandaSQLException: (sqlite3.OperationalError) no such column: Small
    [SQL: select * from polyp where polyp.`Size of Sessile in Words` == Small]

It seems that I somehow have to make it like:

    where polyp.`Size of Sessile in Words` == 'Small'

but I don't know how to do it! Update: I have tried the solution below; there is no error, but the query does not return anything:

    """ select * from polyp where polyp.`Size of Sessile in Words` == " """ + size + """ " """

I am sure (if size="Small") a statement like the one below will work for me:

    where polyp.`Size of Sessile in Words` == "Small" | format can be used:

    size = 'Small'
    smallPolypQuery = """
    select *
    from polyp
    where polyp.`Size of Sessile in Words` == {0}""".format(size)
    print(smallPolypQuery)

The result is:

    select *
    from polyp
    where polyp.`Size of Sessile in Words` == Small

If you need quotes then put them into smallPolypQuery, such as:

    smallPolypQuery = """
    select *
    from polyp
    where polyp.`Size of Sessile in Words` == "{0}" """.format(size)
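Note that string formatting pastes the value straight into the SQL, which breaks as soon as the data contains quotes. pandasql itself does not expose parameter binding, but the same query run against sqlite3 directly can use ? placeholders, letting the driver handle quoting. A sketch with made-up table contents:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('create table polyp ("Size of Sessile in Words" text)')
conn.execute("insert into polyp values ('Small'), ('Large')")

size = "Small"
# The ? placeholder makes the driver substitute and escape the value itself
rows = conn.execute(
    'select * from polyp where "Size of Sessile in Words" = ?',
    (size,)
).fetchall()
print(rows)   # [('Small',)]
```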
Completely Unable to Run pip on Windows 10 I have installed Python 3.7.4 on Windows 10. The Scripts folder is empty. I have all paths added to the PATH environment variable. Python works for running scripts, but pip is not recognized, and even running python get-pip.py does not work. I have read all the possible fixes online but nothing helps. Anyone who can assist? Any help will be much appreciated.

    C:\Program Files\Python37> python get-pip.py | Add the following directory to your path:

    C:\Program Files\Python37\Scripts

Then try to download pip again. If this is not working, download get-pip.py manually and install it through CMD as an admin. Here is the website: https://bootstrap.pypa.io/get-pip.py
How to prevent 2 threads from overwriting a value? I am trying to run an operation with a varied wait time in parallel threads. In the operation I set a value, wait for the operation to finish, and then call another function. But a thread that starts while another is waiting overwrites the value for all other threads. I tried using the threading.local method but it is not working.

    import threading

    class temp:
        temp = {}

        def set_data(self, data):
            self.temp['data'] = data

        def get_data(self):
            return self.temp['data']

    def process(t):
        # print(t)
        # mydata = threading.local()
        print('before sleep', threading.current_thread(), t.get_data())
        # sleep(random.randint(0, 1) * 10)
        print('after sleep', threading.current_thread(), t.get_data())

    if __name__ == '__main__':
        threads = []
        for i in range(0, 4):
            t = temp()
            t.set_data(i)
            threads.append(threading.Thread(target=process, args=(t,)))
            threads[-1].start()
        for t in threads:
            t.join()

I expect the value that I sent to each thread to remain the same after the wait time, but the threads are interfering and giving random output. | Make temp an instance variable of class temp. Put it in __init__ as self.temp = {}. As a class attribute, the dict is shared by every instance, so every thread writes into the same dict.
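The question also mentions threading.local; for genuinely per-thread state that is the right tool — each thread sees only its own copy of the attribute. A minimal sketch, independent of the question's temp class:

```python
import threading

local = threading.local()
results = {}

def worker(value):
    local.data = value        # visible only to the thread that set it
    # ... any work done here cannot see another thread's local.data ...
    results[value] = local.data

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)   # {0: 0, 1: 1, 2: 2, 3: 3}
```

Note that threading.local only helps when the state should be private to each thread; if several threads must share and update one value, a lock is needed instead.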
Save frames of live video with timestamps I want to capture the frames of video with timestamps in real time using Raspberry pi. The video is made by USB webcam using ffmpeg() function in python code. How do I save the frames of video which is currently made by USB webcam in Raspberry pi?I tried using three functions of opencv. cv2.VideoCapture to detect the video, video.read() to capture the frame and cv2.imwrite() to save the frame.Here is the code, the libraries included is not mentioned for conciseness. os.system('ffmpeg -f v4l2 -r 25 -s 640x480 -i /dev/video0 out.avi') video=cv2.VideoCapture('out.avi') ret, frame=video.read() cv2.imwrite('image'+str(i)+'.jpg',frame) i+=1The code saves the frames of video which was previously made by webcam. It is not saving the frames of video which is currently being recorded by webcam. | As you can read here, you can access the camera with camera=cv2.VideoCapture(0). 0 is an index of the connected camera. You may have to try a different index, but 0 usually works.Similar as a video file you can use ret, frame = camera.read() to grab a frame. Always check the ret value before continuing processing a frame.Next you can add text to the frame as described here.You can use time or datetime to obtain a timestamp.Finally save the frame.Note: if you use imwrite you will quicky get a LOT of images. Depending on your project you could also consider saving the frames as video-file. Explained here.Edit after comment:This is how you can use time.time(). First import the time module at the top of your code. time.time() returns the number of seconds since January 1, 1970, 00:00:00.So to get a timestamp, you have to store the starttime - when the program/video starts running.Then, on every frame, you call time.time() and subtract the starttime. The result is the time your program/video has been running. 
You can use that value for a timestamp.import timestarttime = time.time()# get frametimestamp = time.time() - starttimecv2.putText(frame,timestamp,(10,500), font, 4,(255,255,255),2,cv2.CV_AA) |
How to fix a datepicker in python with Selenium I'm trying to make an Auto-Reg bot with python and selenium. I'm getting the most things to work, as they aren't that hard. But atm i'm stuck at a datepicker. The code is able to open the date-box but it doesn't select a date. Another problem is, you cant write anything in the date box, you HAVE to select a date in the date box.I tried various methods i found on stackoverflow but nothing works for this site.Site: https://mobilepanel2.nielsen.com/enrol/home?l=de_de&pid=9from selenium import webdriverfrom selenium.webdriver.common.keys import Keysfrom selenium.webdriver.support.ui import Selectfrom selenium.webdriver.support.ui import WebDriverWaitb = webdriver.Chrome(r'''C:\Users\Florian\PycharmProjects\Auto_Reg\chromedriver''')b.get('https://mobilepanel2.nielsen.com/enrol/home?l=de_de&pid=9')b.find_element_by_xpath("//select[@id='platform']/option[contains(text(),'Android')]").click()b.find_element_by_xpath("//select[@id='deviceType']/option[contains(text(),'Smartphone')]").click()b.find_element_by_xpath("//label[contains(text(),'Männlich')]").click()## until here, everything works fine select = Select(b.find_element_by_name('birthDate'))select.select_by_visible_text("13") | Here you go:# click calendar to appearbrowser.find_element_by_id('birthDateCalendar').click()# get calendar elementscalendar = browser.find_elements_by_xpath('//*[@id="ui-datepicker-div"]/table/tbody/tr/td')# click selected dayselection = '15'for item in calendar: day = item.get_attribute("innerText") if day == selection: item.click() |
Python (Numpy Array) - Flipping an image pixel-by-pixel I have written code to flip an image vertically pixel-by-pixel. However, the code mirrors the image along the line x = height/2. I have tried to correct the code by changing the range of "i" from (0, h) to (0, h//2), but the result is still the same.

[Original photo] [Resulting photo]

    # import libraries
    import numpy as np
    import matplotlib.pyplot as plt
    from PIL import Image

    # read image (set image as m)
    m = Image.open('lena.bmp')

    # change image to array (set array as np_array)
    np_array = np.array(m)

    # define the width (w) and height (h) of the image
    h, w = np_array.shape

    # make the image upside down
    for i in range(0, h):
        for j in range(0, w):
            np_array[i, j] = np_array[h-1-i, j]

    # change array back to image (set processed image as pil_image)
    pil_image = Image.fromarray(np_array)

    # open the processed image
    pil_image.show()

    # save the processed image
    pil_image.save('upsidedown.bmp') | The given code replaces the image pixels in place, which is why the result is a mirrored image: by the time the loop reaches the bottom half, the top half has already been overwritten. If you want to flip the image pixel by pixel, create a new array with the same shape and write the pixels into that new array instead. For example:

    # import libraries
    import numpy as np
    import matplotlib.pyplot as plt
    from PIL import Image

    # read image (set image as m)
    m = Image.open('A-Input-image_Q320.jpg')

    # change image to array (set array as np_array)
    np_array = np.array(m)
    new_np_array = np.copy(np_array)

    # define the width (w) and height (h) of the image
    h, w = np_array.shape

    # make the image upside down
    for i in range(0, h):
        for j in range(0, w):
            new_np_array[i, j] = np_array[h-1-i, j]

    # change array back to image (set processed image as pil_image)
    pil_image = Image.fromarray(new_np_array)

    # open the processed image
    pil_image.show()

    # save the processed image
    pil_image.save('upsidedown.bmp')
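For what it's worth, the same vertical flip needs no loops at all: reversing the row axis with a slice is equivalent and far faster, since numpy does the copy in bulk. A small sketch:

```python
import numpy as np

a = np.arange(12).reshape(4, 3)
flipped = a[::-1, :]                    # reverse row order = vertical flip
assert (flipped == np.flipud(a)).all()  # numpy's built-in does the same thing
print(flipped[0].tolist())              # the old last row is now first
```

Either `np_array[::-1, :]` or `np.flipud(np_array)` would replace the question's double loop in one line.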
Pandas and Dictionary: How to get all unique values for each key? I want to build a dictionary such that the value in each key-value pair contains every unique value for that key. Consider this example:

    df = pd.DataFrame({'id': [1, 2, 3, 1, 2, 3],
                       'vals': ['a1', 'a2', 'a3', 'a2', 'a2a', 'a3a']})

    # only yields the last entry
    dict(zip(df['id'], df['vals']))

    # results
    {1: 'a2', 2: 'a2a', 3: 'a3a'}

    # expected value
    {1: ['a1', 'a2'], 2: ['a2', 'a2a'], 3: ['a3', 'a3a']} | Use:

    result = df.groupby("id")["vals"].agg(list).to_dict()
    print(result)

Output:

    {1: ['a1', 'a2'], 2: ['a2', 'a2a'], 3: ['a3', 'a3a']}
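One caveat: agg(list) keeps duplicates. If "every unique value" is meant literally, deduplicate inside the aggregation — a sketch with one repeated value added to the example data:

```python
import pandas as pd

df = pd.DataFrame({"id":   [1, 2, 3, 1, 2, 3, 1],
                   "vals": ["a1", "a2", "a3", "a2", "a2a", "a3a", "a1"]})
# set() drops the repeated 'a1' for id 1; sorted() makes the output stable
result = df.groupby("id")["vals"].agg(lambda s: sorted(set(s))).to_dict()
print(result)   # {1: ['a1', 'a2'], 2: ['a2', 'a2a'], 3: ['a3', 'a3a']}
```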
Filter multiple columns based on row values in pandas dataframe I have a pandas dataframe structured as follows:

    In[1]: df = pd.DataFrame({"A": [10, 15, 13, 18, 0.6],
                              "B": [20, 12, 16, 24, 0.5],
                              "C": [23, 22, 26, 24, 0.4],
                              "D": [9, 12, 17, 24, 0.8]})
    Out[1]:
         A     B     C     D
    0  10.0  20.0  23.0   9.0
    1  15.0  12.0  22.0  12.0
    2  13.0  16.0  26.0  17.0
    3  18.0  24.0  24.0  24.0
    4   0.6   0.5   0.4   0.8

My goal is to filter multiple columns based on the values in the last row (index 4). More in detail, I need to keep those columns whose value in the last row is < 0.6. The output should be a dataframe structured as follows:

          B     C
    0  20.0  23.0
    1  12.0  22.0
    2  16.0  26.0
    3  24.0  24.0
    4   0.5   0.4

I'm trying this:

    In[2]: df[(df[["A", "B", "C", "D"]] < 0.6)]

but I get the following:

    Out[2]:
        A    B    C    D
    0  NaN  NaN  NaN  NaN
    1  NaN  NaN  NaN  NaN
    2  NaN  NaN  NaN  NaN
    3  NaN  NaN  NaN  NaN
    4  NaN  0.5  0.4  NaN

I even tried:

    df[(df[["A", "B", "C", "D"]] < 0.6).all(axis=0)]

but it gives me an error; it doesn't work. Is there anybody who can help me? | Use DataFrame.loc with : to return all rows, comparing the last row with DataFrame.iloc:

    df1 = df.loc[:, df.iloc[-1] < 0.6]
    print(df1)
          B     C
    0  20.0  23.0
    1  12.0  22.0
    2  16.0  26.0
    3  24.0  24.0
    4   0.5   0.4
Django sessions expiring despite calling set_expiry(0) I'm trying to implement a "remember me" checkbox into django's builtin LoginView, as suggested on this question, but even though I call set_expiry(0), the sessions still expire after SESSION_COOKIE_AGE, regardless of the cookie expire date (which is correctly set to 1969).I'm using django 2.1.7 with python 3.7.2, and the only session-related settings on my settings.py is SESSION_COOKIE_AGE, which is set to 5 seconds for resting purposes.Django seems to use a database backend as default. I'm using sqlite for development.This is my view class:class UserLoginView(LoginView): form_class = registration.UserLoginForm def form_valid(self, form): remember = form.data.get('remember_me', False) if remember: self.request.session.set_expiry(0) return super(UserLoginView, self).form_valid(form)And this is the original LoginView form_valid method (being overriden above)class LoginView(SuccessURLAllowedHostsMixin, FormView):... def form_valid(self, form): """Security check complete. Log the user in.""" auth_login(self.request, form.get_user()) return HttpResponseRedirect(self.get_success_url())...As you noticed, I'm using a custom form_class. A very simple override of the default form:class UserLoginForm(AuthenticationForm): remember_me = BooleanField(required=False)If I use a debugger right after the set_expiry call, I can see that the sesion expiry age is still the default 5 seconds:> /project/app/views/accounts.py(64)form_valid()-> return super(UserLoginView, self).form_valid(form)(Pdb) self.request.session.get_expiry_age()5I get similar results if I let the request complete and redirect, reach the next view and finally render a template where I have:...{{ request.session.get_expiry_age }}...The rendered result is also 5 (the current default).Sure enough, after 5 seconds, if you refresh the page, django will take you back to the login screen.What am I doing wrong here? 
It would be nice if someone could clarify what does "Web browser is closed" means here? https://docs.djangoproject.com/en/2.2/topics/http/sessions/#django.contrib.sessions.backends.base.SessionBase.set_expiry | TL; DR; Seems like Django does not offer support for infinite or truly undefined expiry session times. Set it to 30 days or greater if you need to extend its validity.From Django documentation: If value is 0, the user’s session cookie will expire when the user’s Web browser is closed.Although it's not clear here, seems like setting the expiry time to 0 has a similar behavior when setting it to None: it will fallback to the default session expiry policy. The difference here is that when setting it to 0, we're also inferring that the session should be expired right after the user closes the browser. In both cases, SESSION_COOKIE_AGE works like a session max-age value.I believe you could set a greater number to turn around this problem, for example, something equivalent to 100 years or more. My personal suggestion is to specify an expiry time of 30 days when the user checks the "remember me" field. When you specify a positive integer greater than zero, Django won't fallback to the SESSION_COOKIE_AGE setting.If you're curious about why you're getting 5 seconds even after specifying an expiry of 0 seconds, here's the source code extracted from the get_expiry_age function:if not expiry: # Checks both None and 0 cases return settings.SESSION_COOKIE_AGEif not isinstance(expiry, datetime): return expiryFinal considerations:there's some room for improvements in the Django documentationseems like refreshing a tab could also invalidate the session |
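Following the answer's suggestion, the view's form_valid could set a long finite expiry instead of 0. A sketch against the UserLoginView in the question (the 30-day constant is an illustrative choice, not something the framework mandates):

```python
THIRTY_DAYS = 60 * 60 * 24 * 30   # seconds; set_expiry accepts an int

def form_valid(self, form):
    if form.data.get('remember_me', False):
        # A positive expiry does NOT fall back to SESSION_COOKIE_AGE
        self.request.session.set_expiry(THIRTY_DAYS)
    else:
        # 0 = expire when the browser closes, capped by SESSION_COOKIE_AGE
        self.request.session.set_expiry(0)
    return super().form_valid(form)
```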
ipyparallel strange overhead behavior Im trying to understand how to do distributed processing with ipyparallel and jupyter notebook, so i did some test and got odd results.from ipyparallel import Client%px import numpy as nprc = Client()dview = rc[:]bview = rc.load_balanced_view()print(len(dview))print(len(bview))data = [np.random.rand(10000)] * 4%time np.sin(data)%%time #45.7msresults = dview.map(np.sin, data)results.get()%%time #110msdview.push({'data': data})%px results = np.sin(data)results%%time #4.9msresults = np.sin(data)results%%time #93msresults = bview.map(np.sin, data)results.get()What is the matter with the overhead?Is the task i/o bound in this case and just 1 core can do it better?I tried larger arrays and still got better times with no parallel processing.Thanks for the advice! | The problem seems to be the io. Push pushes the whole set of data to every node. I am not sure about the map function, but most likely it splits the data in chunks that are sent to nodes. So smaller chunks - faster processing. Load balancer most likely sends the data and the task two time to the same node, which significantly hits performance.And how did you manage to send the data in 40 ms? I am used to http protocol where only the handshake takes about a second. For me 40 ms in the network is lightning fast.EDIT About long times (40ms):In local networks the ping time of 1-10ms is considered a normal situation. Taking into account that you first need to make a handshake (minimum 2 signals) and only then send the data (minimum 1 signal) and wait for the response (another signal) you already talk about 20ms just for connecting two computers. Of course you can try to minimize the ping time to 1ms and then use a faster MPI protocol. But as I understand it does not improve the situation significantly. Only one order of magnitude faster.Therefore the general recommendations are to use larger jobs. 
For example, the pretty fast Dask distributed framework (faster than Celery based on benchmarks) recommends task durations of more than 100 ms. Otherwise the overhead of the framework starts to outweigh the execution time and the parallelization benefits disappear: Efficiency on Dask Distributed
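The "use larger jobs" advice can be illustrated with a back-of-the-envelope cost model. All numbers here are made up for illustration; real overheads depend on the network, the scheduler and the serializer.

```python
# Toy cost model: fixed per-task overhead vs. useful work (assumed numbers)
overhead_per_task = 0.020   # 20 ms of scheduling/serialization per task
total_work = 0.400          # 400 ms of actual computation to distribute
n_engines = 4


def parallel_time(n_tasks):
    # Work is split evenly across tasks; each task pays the fixed overhead,
    # and each engine processes its share of the tasks sequentially.
    per_task_work = total_work / n_tasks
    tasks_per_engine = n_tasks / n_engines
    return tasks_per_engine * (overhead_per_task + per_task_work)


serial_time = total_work
fine = parallel_time(400)   # many tiny tasks: 1 ms of work each
coarse = parallel_time(4)   # one chunky task per engine
```

With these numbers, the fine-grained split is slower than running serially (the overhead dominates), while one large task per engine amortizes the overhead and wins.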
Countdown Timer doesn't work The target is: if there is motion, the recording starts and the counter (x) begins to decrement every second, but if in the meantime there is another motion, the counter restarts from x (for example: 5 seconds). Actually this doesn't work; more specifically, the counter doesn't reset if there's motion during the recording, so every video is 5 seconds long.

from gpiozero import MotionSensor
from picamera import PiCamera
from time import sleep
from signal import pause

camera = PiCamera()
sensor = MotionSensor(7)
camera.hflip = True

name = "video.h264"
x = 5  # seconds of record

def countdown(count):
    while (count >= 0):
        print (count)
        count -= 1
        sleep(1)
        if sensor.when_motion == True:
            count = x

def registra_video():
    print ("recording started")
    #camera.start_preview()
    camera.start_recording(name)
    countdown(x)

def stop_video():
    camera.stop_recording()
    #camera.stop_preview()
    print ("recording stopped")

print("Waiting...")
sensor.when_motion = registra_video
sensor.when_no_motion = stop_video
pause()

P.S. I know that I have to write a function that names every video differently, but I will do that subsequently. | INTRO

To begin with, I am pretty sure that this problem is best solved with a multi-threaded approach, for two reasons. First of all, event handlers in general are intended to be small snippets of code that run very quickly in a single thread. Secondly, your specific code is blocking itself in the manner I will describe below.

Current Behavior

Before presenting a solution, let's take a look at your code to see why it does not work.

You have a motion sensor that outputs events when it detects the start and end of a motion. These events happen regardless of anything your code is doing. As you correctly indicated, a MotionSensor object will call when_motion every time it goes into the active state (i.e., when a new motion is detected). Similarly, it will call when_no_motion whenever the motion stops. The way these methods are called is that events are added to a queue and processed one-by-one in a dedicated thread.
Events that cannot be queued (because the queue is full) are dropped and never processed. By default, the queue length is one, meaning that any events that occur while another event is waiting to be processed are dropped.

Given all that, let's see what happens when you get a new motion event. First, the event will be queued. It will then cause registra_video to be called almost immediately. registra_video will block for five seconds no matter what other events occur. Once it is done, another event will be popped off the queue and processed. If the next event is a stop-motion event that occurred during the five-second wait, the camera will be turned off by stop_video. The only way stop_video will not be called is if the sensor continuously detects motion for more than five seconds. If you had a queue length greater than one, another event could occur during the blocking time and still get processed. Let's say this is another start-motion event that occurred during the five-second block. It will restart the camera and create another five-second video, but increasing the queue length will not alter the fact that the first video will be exactly five seconds long.

Hopefully by now you get the idea of why it is not a good idea to wait for the entire duration of the video within your event handler. It prevents you from reacting to the following events on time. In your particular case, you have no way to restart the timer while it is still running, since you do not allow any other code to run while the timer is blocking your event processing thread.

Design

So here is a possible solution:

- When a new motion is detected (when_motion gets called), start the camera if it is not already running.
- When a stop-motion is detected (when_no_motion gets called), you have two options:
  - If a countdown is not running, start it. I would not recommend starting a countdown in when_motion, since the motion will be in progress until when_no_motion is called.
  - If the countdown is already running, restart it.

The timer will run in a background thread, which will not interfere with the event processing thread. The "timer" thread can just set the start time, sleep for five seconds and check the start time again. If it is more than five seconds past the start time when it wakes up, it turns off the camera. If the start time was reset by another when_motion call, the thread will go back to sleep for new_start_time + five seconds - current_time. If the timer expires before another when_motion is called, turn off the camera.

Some Threading Concepts

Let's go over some of the building blocks you will need to get the designed solution working.

First of all, you will be changing values and reading them from at least two different threads. The values I am referring to are the state of the camera (on or off), which will tell you if the timer has expired and needs to be restarted on motion, and the start time of your countdown.

You do not want to run into a situation where you have set the "camera is off" flag but have not finished turning off the camera in your timer thread, while the event processing thread gets a new call to when_motion and decides to restart the camera as you are turning it off. To avoid this, you use locks.

A lock is an object that will make a thread wait until it can obtain it. So you can lock the entire camera-off operation as a unit until it completes, before allowing the event processing thread to check the value of the flag.

I will avoid using anything besides basic threads and locks in the code.

Code

Here is an example of how you can modify your code to work with the concepts I have been ranting about ad nauseam. I have kept the general structure as much as I could, but keep in mind that global variables are generally not a good idea.
I am using them to avoid going down the rabbit hole of having to explain classes. In fact, I have stripped away as much as I could to present just the general idea, which will take you long enough to process as it is if threading is new to you:

from gpiozero import MotionSensor
from picamera import PiCamera
from time import sleep
from datetime import datetime
from threading import Thread, RLock
from signal import pause

camera = PiCamera()
sensor = MotionSensor(7)
camera.hflip = True

video_prefix = "video"
video_ext = ".h264"
record_time = 5

# This is the time from which we measure 5 seconds.
start_time = None
# This tells you if the camera is on. The camera can be on
# even when start_time is None if there is movement in progress.
camera_on = False
# This is the lock that will be used to access start_time and camera_on.
# Again, bad idea to use globals for this, but it should work fine
# regardless.
thread_lock = RLock()

def registra_video():
    global camera_on, start_time
    with thread_lock:
        if not camera_on:
            print("recording started")
            camera.start_recording('{}.{:%Y%m%d_%H%M%S}{}'.format(video_prefix, datetime.now(), video_ext))
            camera_on = True
        # Clear the start_time because it needs to be reset to
        # x seconds after the movement stops
        start_time = None

def stop_video():
    global camera_on
    with thread_lock:
        if camera_on:
            camera.stop_recording()
            camera_on = False
            print("recording stopped")

def motion_stopped():
    global start_time
    with thread_lock:
        # Ignore this function if it gets called before the camera is on somehow
        if camera_on:
            now = datetime.now()
            if start_time is None:
                print('Starting {} second count-down'.format(record_time))
                Thread(target=timer).start()
            else:
                print('Recording to be extended by {:.1f} seconds'.format((now - start_time).total_seconds()))
            start_time = now

def timer():
    duration = record_time
    while True:
        # Notice that the sleep happens outside the lock. This allows
        # other threads to modify the locked data as much as they need to.
        sleep(duration)
        with thread_lock:
            if start_time is None:
                print('Timer expired during motion.')
                break
            elapsed = (datetime.now() - start_time).total_seconds()
            if elapsed >= record_time:
                print('Timer expired. Stopping video.')
                # This here is why I am using RLock instead of plain Lock.
                # I will leave it up to the reader to figure out the details.
                stop_video()
                break
            # Compute how much longer to wait to make it five seconds
            duration = record_time - elapsed
            print('Timer expired, but sleeping for another {:.1f} seconds'.format(duration))

print("Waiting...")
sensor.when_motion = registra_video
sensor.when_no_motion = motion_stopped
pause()

As an extra bonus, I threw in a snippet that will append a date-time to your video names. You can read all you need about string formatting here and here. The second link is a great quick reference.
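The comment in timer leaves the RLock-versus-Lock question to the reader. The reason is reentrancy: stop_video acquires thread_lock while the timer thread already holds it, which would deadlock with a plain Lock but is fine with an RLock, since the same thread may re-acquire it. A minimal stand-alone demonstration:

```python
import threading

lock = threading.RLock()  # swap in threading.Lock() and outer() deadlocks


def inner():
    # Re-acquires the lock that the calling thread already holds
    with lock:
        return "ok"


def outer():
    with lock:
        return inner()  # calls a function that also takes the lock
```

This mirrors the timer -> stop_video call chain in the code above: both functions take thread_lock, and only an RLock lets that nesting succeed.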
LSTM time series - strange val_accuracy, which normalizing method to use and what to do in production after model is fitted I am making an LSTM time series prediction. My data looks like this.

So basically what I have is:

- IDTime: Int for each day
- TimePart: 0 = NightTime, 1 = Morning, 2 = Afternoon
- And 4 columns for values I am trying to predict

I have 2686 values, 3 values per day, so around 900 days in total, plus newly added missing values.

I read and did something like https://www.tensorflow.org/tutorials/structured_data/time_series

Replaced missing data: added missing IDTimes 0-Max, each containing TimePart 0-3 with 0 values (if missing), and replaced all NULL values with 0. I also removed the Date parameter, because I have IDTime. Set the data's (Pandas DataFrame) index as IDTime and TimePart. Copied the features that I want:

features_considered = ['TimePart', 'NmbrServices', 'LoggedInTimeMinutes', 'NmbrPersons', 'NmbrOfEmployees']
features = data[features_considered]
features.index = data.index

Used mean/std on the training data. I am creating 4 different models, one for each feature I am trying to predict. In this current one I have set currentFeatureIndex = 1, which is NmbrServices:

currentFeatureIndex = 1
TRAIN_SPLIT = int(dataset[:,currentFeatureIndex].size * 80 / 100)
tf.random.set_seed(13)
dataset = features.values
data_mean = dataset[:TRAIN_SPLIT].mean(axis=0)
data_std = dataset[:TRAIN_SPLIT].std(axis=0)

I then created the dataset: previous X values with the next 3 future values I want to predict.
I am using multivariate_data from the TensorFlow example, with the steps removed:

x_train_multi, y_train_multi = multivariate_data(dataset, dataset[:,currentFeatureIndex], 0, TRAIN_SPLIT, past_history, future_target)
x_val_multi, y_val_multi = multivariate_data(dataset, dataset[:,currentFeatureIndex], TRAIN_SPLIT, None, past_history, future_target)

print ('History shape : {}'.format(x_train_multi[0].shape))
print ('\n Target shape: {}'.format(y_train_multi[0].shape))

BATCH_SIZE = 1024
BUFFER_SIZE = 8096

train_data_multi = tf.data.Dataset.from_tensor_slices((x_train_multi, y_train_multi))
train_data_multi = train_data_multi.cache().shuffle(BUFFER_SIZE).batch(BATCH_SIZE).repeat()

val_data_multi = tf.data.Dataset.from_tensor_slices((x_val_multi, y_val_multi))
val_data_multi = val_data_multi.batch(BATCH_SIZE).repeat()

multi_step_model = tf.keras.models.Sequential()
multi_step_model.add(tf.keras.layers.LSTM(32, activation='relu'))
multi_step_model.add(tf.keras.layers.Dropout(0.1))
multi_step_model.add(tf.keras.layers.Dense(future_target))

multi_step_model.compile(optimizer=tf.keras.optimizers.RMSprop(clipvalue=1.0), loss='mae', metrics=['accuracy'])

EVALUATION_INTERVAL = 200
EPOCHS = 25

currentName = 'test'
csv_logger = tf.keras.callbacks.CSVLogger(currentName + '.log', separator=',', append=False)

multi_step_history = multi_step_model.fit(train_data_multi, epochs=EPOCHS, steps_per_epoch=EVALUATION_INTERVAL, validation_data=val_data_multi, validation_steps=50, callbacks=[csv_logger])

In this example I also removed the first 800 values with data[600:], because the data is not as it should be after replacing the missing values.

And I get this final value after 25 epochs:

200/200 [==============================] - 12s 61ms/step - loss: 0.1540 - accuracy: 0.9505 - val_loss: 0.1599 - val_accuracy: 1.0000

Questions:

Why is it that the val_accuracy is always 1.0?
This happens for most of the features.

I also tried normalizing values to 0-1 with:

features.loc[:,'NmbrServices'] / features.loc[:,'NmbrServices'].max()

and I get:

200/200 [==============================] - 12s 60ms/step - loss: 0.0461 - accuracy: 0.9538 - val_loss: 0.0434 - val_accuracy: 1.0000

For the feature I use here, it looks better using feature/featureMax, but for other features I can get, using mean/std:

loss: 0.1461 - accuracy: 0.9338 - val_loss: 0.1634 - val_accuracy: 1.0000

And when using feature/featureMax, I get:

loss: 0.0323 - accuracy: 0.8523 - val_loss: 0.0463 - val_accuracy: 1.0000

In this case, which one is better? The one with higher accuracy or the one with lower losses?

If I get a good val_loss and train_loss at around 8 epochs and then it goes up, can I just train the model for 8 epochs and save it?

In the end I save the model in H5 format and load it, because I want to predict new values for the next day, using the last 45 values for prediction. How can I then fit this new data to the model? Do you just call model.fit(newDataX, newDataY)? Or do you need to compile it again on the new data?

4.1 How many times should you rerun this model? If you trained it on years 2016-2018 and you are currently in year 2020, should you, for example, recompile it once per year with data from 2017-2019?

Is it possible to predict multiple features for the next day, or is it better to use multiple models? | I would suggest you use batch normalization, and it completely depends on you whether you want to use a vanilla LSTM or a stacked LSTM. I would recommend you go through this.
How to alter return function/change variable using a function? I was wondering how I can have a changing variable from a function. I attempted:

class Text():
    File = open("SomeFile.txt", "r")
    MyText = (File.read() + MoreText)

    def AddMoreText():
        MoreText = ("This is some more text")

before realising that I needed to run the MyText assignment again, which I'm not sure how to do. I intend to call this text by running something along the lines of print(Text.MyText), which doesn't update after running Text.AddMoreText().

I then tried:

class Text():
    global MoreText
    File = open("SomeFile.txt", "r")

    def ChangeTheText():
        return (File.read() + MoreText)

    MyText = ChangeTheText()

    def AddMoreText():
        MoreText = ("This is some more text")

What I didn't know was that the returned value is preserved, so when I ran:

print(Text.MyText)
Text.AddMoreText()
print(Text.MyText)

it displayed the same text twice. | I think you want something like:

class Text:
    def __init__(self):
        self.parts = []
        with open('SomeFile.txt', 'r') as contents:
            self.parts.append(contents.read())
        self.parts.append('More text')

    def add_more_text(self, text):
        self.parts.append(text)

    @property
    def my_text(self):
        return ''.join(self.parts)

This makes .my_text a dynamic property that will be re-computed each time .my_text is retrieved.
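To see why the @property behaves differently from the class-body assignment in the question, here is a minimal illustration with no file involved: an attribute computed once when the class body runs never changes afterwards, while a property is re-evaluated on every access.

```python
class Static:
    parts = ["Hello"]
    text = " ".join(parts)  # computed once, when the class body runs


class Dynamic:
    def __init__(self):
        self.parts = ["Hello"]

    @property
    def text(self):
        # re-computed on every attribute access
        return " ".join(self.parts)


s = Static()
Static.parts.append("world")  # Static.text is unaffected

d = Dynamic()
d.parts.append("world")       # d.text reflects the change
```

After the appends, s.text is still "Hello" while d.text is "Hello world", which is exactly the difference the question stumbled over.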
Logging into EventBrite with Scrapy I'm looking to learn more about how Scrapy can be used to log in to websites. I looked at some documentation and tutorials and ended up at "Using FormRequest.from_response() to simulate a user login". Using Chrome dev tools, I looked at the "login" response after logging in from the page https://eventbrite.ca/signin/login. Some things that may be important to note are that when attempting to log in in the browser, the web page directs you to https://eventbrite.ca/signin, where you enter your email and submit the form. This sends a POST request to https://www.eventbrite.ca/api/v3/users/lookup/ with just the email provided, and if all is dandy, the web page uses JS to "redirect" you to https://eventbrite.ca/signin/login and generates the "password" input element. Once you fill in your password and hit the form button, if successful, it then redirects and generates the login response as a result of a POST sent to https://www.eventbrite.ca/ajax/login/ with the email, password, and some other info (which can be found in my code snippet). First I tried doing it step by step: going from .ca/signup, sending a POST with my email to the lookup endpoint, but I get a 401 error. Next I tried directly going to .ca/signup/login and submitting all the info found in the login response, but I receive 403. I'm sure I must be missing something; it seems I am POSTing to the correct URLs and finding the correct form, but I can't figure out what's left. Also, after trying this for a while, I'm wondering whether Selenium would provide a better alternative for logging in and doing some automation on a web page that has loads of JS.
Any help appreciated.

def login(self, response):
    yield FormRequest.from_response(
        response,
        formxpath="//form[(@novalidate)]",
        url='https://www.eventbrite.ca/ajax/login/',
        formdata={
            'email': '[email protected]',
            'password': 'password',
            'forward': '',
            'referrer': '/',
            'pckg': '',
            'stld': ''
        },
        callback=self.begin_event_parse
    )

.ca/signup/login attempt (403):

[scrapy.core.engine] DEBUG: Crawled (403) <POST https://www.eventbrite.ca/ajax/login/> (referer: https://www.eventbrite.ca/signin/login)

.ca/signup attempt (401):

[scrapy.core.engine] DEBUG: Crawled (401) <POST https://www.eventbrite.ca/api/v3/users/lookup/> (referer: https://www.eventbrite.ca/signin/login) | It looks like you are missing the X-CSRFToken in your headers. This token is used to protect the resource from Cross-site Request Forgery.

In this case, it is provided in the cookies, and you need to store it and pass it along.

A simple implementation that works for me:

import re
import scrapy

class DarazspidySpider(scrapy.Spider):
    name = 'darazspidy'

    def start_requests(self):
        yield scrapy.Request('https://www.eventbrite.ca/signin/?referrer=%2F%3Finternal_ref%3Dlogin%26internal_ref%3Dlogin%26internal_ref%3Dlogin', callback=self.lookup)

    def lookup(self, response):
        yield scrapy.FormRequest(
            'https://www.eventbrite.ca/api/v3/users/lookup/',
            formdata={"email": "[email protected]"},
            headers={'X-CSRFToken': self._get_xcsrf_token(response)},
            callback=self.login,
        )

    def _get_xcsrf_token(self, response):
        cookies = response.headers.getlist('Set-Cookie')
        cookie, = [c for c in cookies if 'csrftoken' in str(c)]
        self.token = re.search(r'csrftoken=(\w+)', str(cookie)).groups()[0]
        return self.token

    def login(self, response):
        yield scrapy.FormRequest(
            url='https://www.eventbrite.ca/ajax/login/',
            formdata={
                'email': '[email protected]',
                'password': 'pwd',
                'forward': '',
                'referrer': '/?internal_ref=login&internal_ref=login',
                'pckg': '',
                'stld': ''
            },
            callback=self.parse,
            headers={'X-CSRFToken': self.token}
        )

    def parse(self, response):
        self.logger.info('Logged in!')

Ideally, you'd want to create a middleware to do that for you.

Generally, when you face this kind of behavior, you want to try to mimic what the browser is sending as closely as possible, so look at the headers closely and try to replicate them.
Find area with content and get its bounding rect I'm using OpenCV 4 (Python 3) to find a specific area in a black & white image. This area is not a 100% filled shape. It may have some gaps between the white lines. This is the base image from where I start processing. This is the rectangle I expect (made with Photoshop). Results I got with Hough transform lines were not accurate. So basically, I start from the first image and I expect to find what you see in the second one. Any idea of how to get the rectangle of the second image? | I'd like to present an approach which might be computationally less expensive than the solution in fmw42's answer, using only NumPy's nonzero function. Basically, all non-zero indices for both axes are found, and then the minima and maxima are obtained. Since we have binary images here, this approach works pretty well.

Let's have a look at the following code:

import cv2
import numpy as np

# Read image as grayscale; threshold to get rid of artifacts
_, img = cv2.threshold(cv2.imread('images/LXSsV.png', cv2.IMREAD_GRAYSCALE), 0, 255, cv2.THRESH_BINARY)

# Get indices of all non-zero elements
nz = np.nonzero(img)

# Find minimum and maximum x and y indices
y_min = np.min(nz[0])
y_max = np.max(nz[0])
x_min = np.min(nz[1])
x_max = np.max(nz[1])

# Create some output
output = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
cv2.rectangle(output, (x_min, y_min), (x_max, y_max), (0, 0, 255), 2)

# Show results
cv2.imshow('img', img)
cv2.imshow('output', output)
cv2.waitKey(0)
cv2.destroyAllWindows()

I borrowed the cropped image from fmw42's answer as input, and my output should be the same (or most similar).

Hope that (also) helps!
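For readers without OpenCV at hand, the same min/max-over-nonzero logic can be shown in plain Python on a toy nested-list "image" (the pixel values below are made up), which makes explicit what np.nonzero plus min/max compute:

```python
# 0 = background, 255 = foreground; rows are indexed by y, columns by x
img = [
    [0, 0,   0,   0, 0],
    [0, 0, 255, 255, 0],
    [0, 0,   0, 255, 0],
    [0, 0,   0,   0, 0],
]

# Coordinates of every non-zero pixel (what np.nonzero returns, zipped)
coords = [(y, x) for y, row in enumerate(img) for x, v in enumerate(row) if v]

# The bounding rectangle is just the extreme coordinates in each axis
y_min = min(y for y, x in coords)
y_max = max(y for y, x in coords)
x_min = min(x for y, x in coords)
x_max = max(x for y, x in coords)
```

Here the bounding box spans columns 2-3 and rows 1-2, covering the small L-shaped blob even though it is not a filled rectangle.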
coverage of django application deployed on production server Can anyone please tell me how to find the coverage of a Django application deployed on Apache? I want to hook coverage.py into the deployed Django application. | I think you are referring to Ned Batchelder's excellent coverage.py: http://nedbatchelder.com/code/coverage/ Why don't you make use of https://pypi.python.org/pypi/django-coverage?
Keep Getting ZeroDivisionError whenever using modulo So I am working on a problem which needs me to get the factors of a certain number. As always, I am using the modulo operator % in order to see if a number is divisible by a certain number, i.e. the remainder is equal to zero. But whenever I try to do this I keep getting an error saying ZeroDivisionError. I tried adding a block of code like this so Python does not start counting from zero and instead starts to count from one:

for potenial in range(number + 1):

But this does not seem to work. Below is the rest of my code; any help will be appreciated.

def Factors(number):
    factors = []
    for potenial in range(number + 1):
        if number % potenial == 0:
            factors.append(potenial)
        return factors

| In your for loop you are iterating from 0 (range() assumes the starting number to be 0 if only one argument is given) up to "number". There is a ZeroDivisionError since you are trying to calculate number modulo 0 (number % 0) at the start of the for loop. When calculating the modulo, Python tries to divide number by 0, causing the ZeroDivisionError. Here is the corrected code (I also fixed the indentation):

def get_factors(number):
    factors = []
    for potential in range(1, number + 1):
        if number % potential == 0:
            factors.append(potential)
    return factors

However, there are better ways of calculating factors. For example, you can iterate only up to sqrt(n), where n is the number, and then calculate "factor pairs": e.g. if 3 is a factor of 15, then 15/3, which is 5, is also a factor of 15.

I encourage you to try to implement a more efficient algorithm.

Stylistic note: According to PEP 8, function names should be lowercase with words separated by underscores. Uppercase names generally indicate class definitions.
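The more efficient algorithm hinted at above can look like this: walk i only up to sqrt(n) and record both members of each factor pair (i, n // i). A set avoids duplicating the square root when n is a perfect square. This is one possible sketch, not the only way to do it:

```python
import math


def get_factors(number):
    factors = set()
    for i in range(1, math.isqrt(number) + 1):
        if number % i == 0:
            factors.add(i)            # the small member of the pair
            factors.add(number // i)  # its partner, e.g. 3 and 15 // 3 == 5
    return sorted(factors)
```

For number = 36 the loop only runs 6 times instead of 36, yet every factor is found because each one above sqrt(36) is paired with one below it.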
Determining whether a number is prime or not I know it's been discussed many times; I've read it, but somehow I can't get it. I want to write a program that determines if the entered number is prime or not. One of the implementations I found somewhere on the Internet:

from math import *

def main():
    n = abs(input("Enter a number: "))
    i = 2
    msg = 'is a prime number.'
    while i <= sqrt(n):
        if n % i == 0:
            msg = 'is not a prime number.'
        i = i + 1
    print n, msg

main()

A couple of questions here:

- In the above, what is i, and why does it have a starting value of 2?
- What does i = i + 1 do in this program?
- How does the interpreter know when to print 'is a prime number.' even though it is outside the loop body?

| A prime number is a number that's only divisible by 1 and itself. The method it's using is to try dividing your candidate number n by every other number from 2 up to itself; however, if any number i is a divisor of your number n, then so is n / i, and at least one of them is less than or equal to sqrt(n); therefore we need only test up to sqrt(n) inclusive. In practice we need only test the divisors that are actually prime themselves, but since we don't have a list of primes to hand, we'll test every one.

what in the above i is? and why it got a 2 starting value?

i is the potential factor of n we're testing. It starts with 2 because we don't care if 1 divides n (trivially it will), because the prime definition allows / expects that.

what is the i = i + 1 statement, in this concrete example, for? Can't see its use in the program.

It's incrementing the i value at the end of the loop defined by the while i <= sqrt(n); it means we advance i to test the next candidate divisor of n.

and finally, how python knows when to print 'is a prime number.' although it is out of the body loop?

We initialise msg to "is a prime number", and if we find any divisor then we change it to "is not a prime number" inside the loop.
If the loop doesn't find a divisor, or if the loop never runs, we'll use the initial value we set, which is "is a prime number". Incidentally, you could break out of the loop when you find a divisor; there's no point carrying on the test after that.

As another aside, you probably want to compute sqrt(n) outside the while loop and store that in a variable to use in the condition; otherwise you may be recalculating the square root on every iteration, which is relatively expensive.
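Both asides combined (return as soon as a divisor is found, and compute sqrt(n) only once) give a function like the following. This is a Python 3 sketch of the idea, not the original Python 2 snippet from the question:

```python
from math import sqrt


def is_prime(n):
    if n < 2:
        return False       # 0, 1 and negatives are not prime
    limit = sqrt(n)        # computed once, not on every loop iteration
    i = 2
    while i <= limit:
        if n % i == 0:
            return False   # bail out on the first divisor found
        i += 1
    return True
```

Returning early replaces the msg bookkeeping entirely: the function's answer is simply whether the loop ever found a divisor.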
How to access Django Test database to debug? Django tests are very helpful.However, when it's time to debug it's more complicated.I would like to:The test database does not disapear at the end of the tests suite to analyse itBe able to read in this database, using my graphical DB Manager (Navicat, pgAdmin, etc.) (which is more friendly than command line)How to do this? Thanks! | The django-test-utils app includes a Persistent Database Test Runner to achieve this. I haven't tested the app myself though. |
MemoryError while counting edges in graph using Networkx My initial goal was to do some structural property analysis (diameter, clustering coefficient, etc.) using NetworkX. However, I stumbled already by simply trying to count how many edges are present in the given graph. This graph, which can be downloaded from over here (beware: 126 MB zip file), consists of 1,632,803 nodes and 30,622,564 edges. Please note, if you want to download this file, make sure to remove the comments from it (including the #) which are placed on top of the file.

I have 8 GB of memory in my machine. Are my plans (diameter/clustering coefficient) too ambitious for a graph of this size? I hope not, because I like NetworkX due to its simplicity and it just seems complete. If it is too ambitious, however, could you please advise another library that I can use for this job?

import networkx as nx

graph = nx.Graph()
graph.to_directed()

def create_undirected_graph_from_file(path, graph):
    for line in open(path):
        edges = line.rstrip().split()
        graph.add_edge(edges[0], edges[1])

print(create_undirected_graph_from_file("C:\\Users\\USER\\Desktop\\soc-pokec-relationships.txt", graph).g.number_of_edges())

Error:

Traceback (most recent call last):
  File "C:/Users/USER/PycharmProjects/untitled/main.py", line 12, in <module>
    print(create_undirected_graph_from_file("C:\\Users\\USER\\Desktop\\soc-pokec-relationships.txt", graph).g.number_of_edges())
  File "C:/Users/User/PycharmProjects/untitled/main.py", line 8, in create_undirected_graph_from_file
    edges = line.rstrip().split()
MemoryError | One potential problem is that strings have a large memory footprint. Since all of your edges are integers, you can benefit by converting them to ints before creating the edges. You'll benefit from faster tracking internally and also have a lower memory footprint!
Specifically:

def create_undirected_graph_from_file(path, graph):
    for line in open(path):
        a, b = line.rstrip().split()
        graph.add_edge(int(a), int(b))
    return graph

I'd recommend also changing your open to use a context manager, to ensure the file gets closed:

def create_undirected_graph_from_file(path, graph):
    with open(path) as f:
        for line in f:
            a, b = line.rstrip().split()
            graph.add_edge(int(a), int(b))
    return graph

Or the magic one-liner:

def create_undirected_graph_from_file(path, graph):
    with open(path) as f:
        [graph.add_edge(*(int(point) for point in line.rstrip().split())) for line in f]
    return graph

One more thing to keep in mind: Graph.to_directed returns a new graph, so be sure you set graph to the result of this call instead of throwing out the result.
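The footprint difference is easy to check with sys.getsizeof. The exact byte counts vary across CPython versions and platforms, but the ordering holds, and over 30 million edges it adds up:

```python
import sys

node_as_str = "1632803"   # how a node id arrives from the file
node_as_int = 1632803     # the same id after int() conversion

str_size = sys.getsizeof(node_as_str)
int_size = sys.getsizeof(node_as_int)

# A small int object is noticeably smaller than the equivalent string
assert int_size < str_size
```

On top of the per-object size, ints also hash and compare faster than strings, which speeds up the dict lookups NetworkX does internally for every add_edge.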
Matplotlib tripcolor bug? I want to use tripcolor from matplotlib.pyplot to view the colored contours of some of my data. The data is extracted from an XY plane at z=cst using Paraview. I directly export the data as CSV from Paraview, which triangulates the plane for me. The problem is that, depending on the plane position (i.e. the mesh), tripcolor sometimes gives me good and sometimes bad results. Here is a simple example code and the results to illustrate it:

Code

import matplotlib.pyplot as plt
import numpy as np

p,u,v,w,x,y,z = np.loadtxt('./bad.csv', delimiter=',', skiprows=1, usecols=(0,1,2,3,4,5,6), unpack=True)

NbLevels = 256

plt.figure()
plt.gca().set_aspect('equal')
plt.tripcolor(x, y, w, NbLevels, cmap=plt.cm.hot_r, edgecolor='black')
cbar = plt.colorbar()
cbar.set_label('Velocity magnitude', labelpad=10)
plt.show()

Results with tripcolor

Here is the file that causes the problem. I've heard that matplotlib's tripcolor is sometimes buggy, so is it a bug or not? | As highlighted by @Hooked, this is the normal behaviour for a Delaunay triangulation.

To remove unwanted triangles you should provide your own Triangulation by passing the triangles explicitly.

This is quite easy in your case, as your data is almost structured: I suggest performing a Delaunay triangulation in the (r, theta) plane and then passing these triangles to the initial (x, y) arrays.
You can make use of the built-in TriAnalyzer class to remove very flat triangles from the (r, theta) triangulation (they might exist due to round-off errors).

import matplotlib.pyplot as plt
import numpy as np
import matplotlib.tri as mtri

p,u,v,w,x,y,z = np.loadtxt('./bad.csv', delimiter=',', skiprows=1, usecols=(0,1,2,3,4,5,6), unpack=True)

r = np.sqrt(y**2 + x**2)
tan = (y / x)

aux_tri = mtri.Triangulation(r/np.max(r), tan/np.max(tan))
triang = mtri.Triangulation(x, y, aux_tri.triangles)
triang.set_mask(mtri.TriAnalyzer(aux_tri).get_flat_tri_mask())

NbLevels = 256

plt.figure()
plt.gca().set_aspect('equal')
plt.tripcolor(triang, w, NbLevels, cmap=plt.cm.jet, edgecolor='black')
cbar = plt.colorbar()
cbar.set_label('Velocity magnitude', labelpad=10)
plt.show()
How to make my python integration faster? Hi, I want to integrate a function from 0 to several different upper limits (around 1000). I have written a piece of code to do this using a for loop and appending each value to an empty array. However, I realise I could make the code faster by doing smaller integrals and then adding the previous integral result to the one just calculated. So I would be doing the same number of integrals, but over a smaller interval, and then just adding the previous integral to get the integral from 0 to that upper limit. Here's my code at the moment:

import numpy as np  # importing all relevant modules and functions
from scipy.integrate import quad
import pylab as plt
import datetime

t0 = datetime.datetime.now()  # initial time

num = np.linspace(0, 10, num=1000)  # setting up array of values for t
Lv = np.array([])  # empty array that values for L(t) are appended to

def L(t):  # defining function for L
    return np.cos(2*np.pi*t)

for g in num:  # setting up for loop to do integrals for L at the different values for t
    Lval, x = quad(L, 0, g)  # using the quad function to get the values for L. quad takes the function, where to start the integral from, and where to end the integration
    Lv = np.append(Lv, [Lval])  # appending the different values for L at different values for t

What changes do I need to make to do the optimisation technique I've suggested? | Basically, we need to keep track of the previous values of Lval and g. 0 is a good initial value for both, since we want to start by adding 0 to the first integral, and 0 is the start of the interval. You can replace your for loop with this:

last, lastG = 0, 0
for g in num:
    Lval, x = quad(L, lastG, g)
    last, lastG = last + Lval, g
    Lv = np.append(Lv, [last])

In my testing, this was noticeably faster.

As @askewchan points out in the comments, this is even faster:

Lv = []
last, lastG = 0, 0
for g in num:
    Lval, x = quad(L, lastG, g)
    last, lastG = last + Lval, g
    Lv.append(last)
Lv = np.array(Lv)
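The speed-up comes from integrating each small slice once instead of re-integrating from 0 every time. The same accumulation idea can be demonstrated without SciPy, using a crude fixed-step trapezoid rule as a stand-in for quad (L is the function from the question):

```python
from math import cos, pi


def L(t):
    return cos(2 * pi * t)


def trapz(f, a, b, n=200):
    # Crude fixed-step trapezoid rule, standing in for scipy.integrate.quad
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, n))
    return total * h


uppers = [i / 100 for i in range(1, 101)]

# Naive: integrate from 0 up to each upper limit, redoing earlier work
naive = [trapz(L, 0, g) for g in uppers]

# Incremental: integrate only the newest slice and accumulate the total
incremental = []
last, last_g = 0.0, 0.0
for g in uppers:
    last += trapz(L, last_g, g)
    last_g = g
    incremental.append(last)
```

Both lists agree to within numerical error, but the incremental version only ever integrates over the short interval [last_g, g], which is why the same trick pays off with quad.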
Shared memory cache for non-serialized data I have a (Django) web app that needs to construct large (numpy) arrays, let's say 1MB per vector. It works on several processes (spawned by Apache/mod_wsgi). For the moment I am using an in-memory cache, whose simplest version is a global variable. Retrieving the data from the cache is instantaneous - all I need. However, each process needs to replicate the cache in its own memory, and it is unpredictable which process has the data loaded and which hasn't (I want to load it once and for all at startup). I tried Memcached and Redis to have a shared cache among processes. Both need the data to be serialized first: strings and ints only. Now, deserializing when I want to read a vector takes about 10s, a bit long for a user waiting after clicking a button. Isn't there any solution that can at the same time store some arbitrary data in RAM without serializing to string, and have it shared among different processes? (I am not interested in persistence after restart.) | Redis supports many data types, including raw bytes:

Strings are the most basic kind of Redis value. Redis Strings are binary safe; this means that a Redis string can contain any kind of data, for instance a JPEG image or a serialized Ruby object.

Redis is proven to be fast, so maybe your focus should be on an efficient serialization format that deserializes quickly, e.g.:

https://github.com/lebedov/msgpack-numpy
https://developers.google.com/protocol-buffers/docs/pythontutorial#why-use-protocol-buffers
http://slides.zetatech.org/haenel-bloscpack-talk-2014-PyDataBerlin.pdf
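As a stdlib alternative not mentioned in the answer (my own sketch, Python 3.8+): multiprocessing.shared_memory maps one raw buffer into several processes, so a large array can be rebuilt from the bytes with no (de)serialization at all. A minimal bytes-only illustration; in real use the second attach would happen in another worker process, identified by the block's name:

```python
from multiprocessing import shared_memory

# create a 1 MB block; any process on the machine can attach to it by name
src = shared_memory.SharedMemory(create=True, size=1024 * 1024)
src.buf[:5] = b"hello"

# a second handle (normally opened in another process via the name)
# maps the SAME memory: no copying, no serialization
other = shared_memory.SharedMemory(name=src.name)
result = bytes(other.buf[:5])
print(result)  # b'hello'

other.close()
src.close()
src.unlink()  # free the block when the last user is done
```

For numpy specifically, the usual pattern is np.ndarray(shape, dtype, buffer=shm.buf), which makes reads effectively instantaneous.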
Reading CSV with comma at last line I'm using Python to read in a series of CSVs that were obtained via a web scraper (there are thousands, so editing by hand is a no-go). The data looks like this:

"Client: Secret Client"
"G/L Account: (#-#-#) Secret Type of Account"
"Process Date: MM/DD/YYYY"
"Export Date: MM/DD/YYYY"
"Unit Name ","Description","Pay. Type ","Amount","Tran. Date "
"last, first","some note (dates with commas like 17 Aug, 2018 could be here)","Credit Card ","$AMNT.CHANGE","Date and Timestamp"
"Total","","","$AMNT.CHANGE","

If you count carefully you'll see a final comma followed by a rogue ". The code I'm trying to use is here:

import os
import pandas as pd
import csv

def read_temp(file):
    tmp = pd.read_csv(file, header=None, error_bad_lines=False, quotechar='"', skiprows=5,
                      quoting=csv.QUOTE_ALL, skipinitialspace=True, skipfooter=1)
    gl = pd.read_csv(file, header=None, error_bad_lines=False, quotechar='"', skiprows=1,
                     nrows=1, quoting=csv.QUOTE_ALL, skipinitialspace=True)
    proc_date = pd.read_csv(file, header=None, error_bad_lines=False, quotechar='"', skiprows=2,
                            nrows=1, quoting=csv.QUOTE_ALL, skipinitialspace=True)
    cols = ['NAME', 'DESCRIPTION', 'PAY_TYP', 'AMOUNT', 'TRAN_DATE']
    tmp.columns = cols
    # print(tmp.columns)
    # print(file)
    tmp['G/L_ACCOUNT'] = gl[0][0].split(':')[1]
    tmp['PROCESS_DATE'] = proc_date[0][0].split(':')[1]
    for col in tmp.columns:
        tmp[col] = tmp[col].str.strip('"')
    return tmp

master = "C:\\path\\to\\master\\"
want = []
flag = 0
for direc in os.listdir(master):
    for file in os.listdir(master+direc):
        temp = read_temp(master+direc+'\\'+file)
        want.append(temp)
df = pd.concat(want)

The error is: ',' expected after '"'. I think if I could use a CSV reader and regular expressions (which I have zero experience with) to read each line beforehand and find everything that's surrounded by " ", then I could change it somehow or possibly delete that ending comma and double quote. Any ideas would be appreciated!
 | A quick test with the csv module does not fail:

import csv

data = """"Client: Secret Client"
"G/L Account: (#-#-#) Secret Type of Account"
"Process Date: MM/DD/YYYY"
"Export Date: MM/DD/YYYY"
"Unit Name ","Description","Pay. Type ","Amount","Tran. Date "
"last, first","some note (dates with commas like 17 Aug, 2018 could be here)","Credit Card ","$AMNT.CHANGE","Date and Timestamp"
"Total","","","$AMNT.CHANGE","""

reader = csv.reader(data.split("\n"), delimiter=',', quotechar='"')
for row in reader:
    print(', '.join(row))

but it also gets "confused" by the last, incomplete element:

Client: Secret Client
G/L Account: (#-#-#) Secret Type of Account
Process Date: MM/DD/YYYY
Export Date: MM/DD/YYYY
Unit Name , Description, Pay. Type , Amount, Tran. Date 
last, first, some note (dates with commas like 17 Aug, 2018 could be here), Credit Card , $AMNT.CHANGE, Date and Timestamp
Total, , , $AMNT.CHANGE, 

But you could just remove the offending characters from your data, e.g. with rfind and "slicing":

pos = data.rfind(',"', -5)
if pos != -1:
    data = data.strip()[:pos]

print(data[-15:])

should print ,"$AMNT.CHANGE". It searches for ," in the last 5 characters of the string. If it is found, the position is returned, which is used to remove the respective characters (or rather, return a string without them). The strip() is just to remove any newline (introduced by embedding your data with a string literal """). Alternatively, if the problem is always those two extra characters, you could slice them off by providing a negative slice index, e.g. data[:-2]. No real need for a regular expression; however,

import re
data = re.sub(",\"?$", "", data, 1)

would do the trick, and it also works in case there is just a trailing ,. You can play with this on regex101.com, which also explains what the expression does. Now pandas should not have any trouble parsing the data.
how to strip the beginning of a file with python library re.sub? I'm happy to ask my first python question! I would like to strip the beginning (the part before the first occurrence of the article) of the sample file below. To do this I use the re.sub function.

Below is my file sample.txt:

fdasfdadfaadfadfasdfafdafdsfasadfadfadfadfadsfafdaf

article: name of the first article
aaaaaaaaaaaaaaaaaaaaa

article: name of the first article
bbbbbbbbbbbbbbbbbbbbb

article: name of the first article
ccccccccccccccccccccc

And my Python code to parse this file:

import re

test = ''
for line in open('sample.txt'):
    test = test + line
result = re.sub(r'.*article:', 'article', test, 1, flags=re.S)
print result

Sadly this code only displays the last article. The output of the code:

article: name of the first article
ccccccccccccccccccccc

Does someone know how to strip only the beginning of the file and display the 3 articles? | Your greedy .* combined with re.S swallows everything up to the last occurrence of article:. You can use itertools.dropwhile to get the effect you want:

from itertools import dropwhile

with open('filename.txt') as f:
    articles = ''.join(dropwhile(lambda line: not line.startswith('article'), f))
print(articles)

prints

article: name of the first article
aaaaaaaaaaaaaaaaaaaaa

article: name of the first article
bbbbbbbbbbbbbbbbbbbbb

article: name of the first article
ccccccccccccccccccccc
Why do I have empty rows when I create a CSV file? I'm trying to create a new csv file which evaluates data about a construction site operation from an ASCII table in CSV format file. I have figured out how to create a CSV file, but I always get a blank line between the lines. Why is that?

import csv

header = ['name', 'area', 'country_code2', 'country_code3']
data = ['Afghanistan', 652090, 'AF', 'AFG']

file_object = open("new_file.csv", "w")
writer = csv.writer(file_object, delimiter=";")
writer.writerow(header)
writer.writerow(data)
file_object.close()

This is how my csv file looks:

name area country_code2 country_code3

Afghanistan 652090 AF AFG | Specify newline='' to eliminate the extra new line. If newline='' is not specified on platforms that use \r\n line endings, on write an extra \r will be added. It should always be safe to specify newline='', since the csv module does its own (universal) newline handling. [1]

with open('new_file.csv', 'w', newline='') as file_object:
    writer = csv.writer(file_object, delimiter=";")
    writer.writerow(header)
    writer.writerow(data)
How to filter choices in fields (forms) in Django admin? I have a model Tech, with name (CharField) and firm (ForeignKey to model Firm), because one Tech (for example, smartphone) can have many firms (for example Samsung, Apple, etc.). How can I create a filter in the admin panel so that, when I create a model instance and choose 'smartphone' in the tech field, the firm field only shows smartphone firms? Because if I have more than one value in the firm field (for example Apple, Samsung, IBM), it shows all of them. But IBM should show only if I choose 'computer' in the tech field. How do I achieve this? | class MyModelName(admin.ModelAdmin):
    list_filter = ('field1', 'field3', ...)

refer: https://docs.djangoproject.com/en/2.1/ref/contrib/admin/
Python3 threading: combining .start() doesn't create the join attribute This works fine:

def myfunc():
    print('inside myfunc')

t = threading.Thread(target=myfunc)
t.start()
t.join()
print('done')

However this, while apparently creating and executing the thread properly:

def myfunc():
    print('inside myfunc')

t = threading.Thread(target=myfunc).start()
t.join()
print('done')

generates the following fatal error when it hits join():

AttributeError: 'NoneType' object has no attribute 'join'

I would have thought that these statements are equivalent. What is different? | t = threading.Thread(target=myfunc).start()

threading.Thread(target=myfunc) returns a Thread object; however, Thread.start() returns None. That's why there is an AttributeError.
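A tiny self-contained illustration of the answer's point (the worker function here is my own placeholder): start() mutates the thread in place and hands back nothing, so the chained expression binds None.

```python
import threading

def work():
    pass  # placeholder worker

t = threading.Thread(target=work)
returned = t.start()      # start() runs the thread and returns None
t.join()
print(returned is None)   # True: this is what the chained form assigns to t
```

So keep the Thread object in its own variable first, then call start() on it.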
regex storing matches in wrong capture group I am trying to build a python regex with an optional capture group. My regex works for most cases but fails to put the matches in the right group in one of the test cases. I want to match and capture the following cases:

namespace::tool_name::1.0.1
namespace::tool_name
tool_name::1.0.1
tool_name

Here is the regex I have so far:

(?:(?P<namespace>^[^:]+)::)?(?P<name>[^:]*)(?:::(?P<version>[0-9\.]+))?

This regex works fine for all my 4 test cases, but the problem I have is in case 3: the tool_name is captured in the namespace group and the 1.0.1 is captured in the name group. I would like them to be captured in the right groups, name and version respectively. Thanks | You may make the tool_name regex part obligatory by replacing * with + (it looks like it is always present) and restrict this pattern from matching three dot-separated digit chunks with a negative lookahead:

^(?:(?P<namespace>[^:]+)::)?(?!\d+(?:\.\d+){2})(?P<name>[^:]+)(?:::(?P<version>\d+(?:\.\d+){2}))?

See the regex demo.

Details:

^ - start of string
(?:(?P<namespace>[^:]+)::)? - an optional non-capturing group matching any 1+ chars other than : into Group "namespace" and then just matches ::
(?!\d+(?:\.\d+){2}) - a negative lookahead that does not allow a digits.digits.digits pattern to appear right after the current position
(?P<name>[^:]+) - Group "name": any 1 or more chars other than :
(?:::(?P<version>\d+(?:\.\d+){2}))? - an optional non-capturing group matching :: and then Group "version" capturing 1+ digits and 2 repetitions of . and 1+ digits
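To check the answer's pattern against all four inputs from the question, a small verification script can be run (this harness is my own addition):

```python
import re

pattern = re.compile(
    r'^(?:(?P<namespace>[^:]+)::)?'
    r'(?!\d+(?:\.\d+){2})(?P<name>[^:]+)'
    r'(?:::(?P<version>\d+(?:\.\d+){2}))?'
)

cases = [
    "namespace::tool_name::1.0.1",
    "namespace::tool_name",
    "tool_name::1.0.1",
    "tool_name",
]
results = [pattern.match(c).groupdict() for c in cases]
for r in results:
    print(r)
# {'namespace': 'namespace', 'name': 'tool_name', 'version': '1.0.1'}
# {'namespace': 'namespace', 'name': 'tool_name', 'version': None}
# {'namespace': None, 'name': 'tool_name', 'version': '1.0.1'}
# {'namespace': None, 'name': 'tool_name', 'version': None}
```

For case 3 the engine first tries to read tool_name as the namespace, but the lookahead then rejects "1.0.1" as a name, so it backtracks and skips the optional namespace group, which is exactly the fix.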
Extracting html using beautifulsoup I am trying to extract data from the html of the following site: http://www.irishrugby.ie/guinnesspro12/results_and_fixtures_pro_12_section.php I want to be able to extract the team names and the score for example the first fixture is Connacht vs Newport Gwent Dragons. I want my python program too print the result, i.e Connacht Rugby 29 - 23 Newport Gwent Dragons.Here is the html I want too extract it from:<!-- 207974 sfms --><tr class="odd match-result group_celtic_league" id="fixturerow0" onclick="if( clickpriority == 0 ) { redirect('/guinnesspro12/35435.php') }" onmouseout="className='odd match-result group_celtic_league';" onmouseover="clickpriority=0; className='odd match-result group_celtic_league rollover';" style=""> <td class="field_DateShort" style=""> Fri 4 Sep </td> <td class="field_TimeLong" style=""> 19:30 </td> <td class="field_CompStageAbbrev" style=""> PRO12 </td> <td class="field_LogoTeamA" style=""> <img alt="Connacht Rugby" height="50" src="http://cdn.soticservers.net/tools/images/teams/logos/50x50/16.png" width="50"/> </td> <td class="field_HomeDisplay" style=""> Connacht Rugby </td> <td class="field_Score" style=""> 29 - 23 </td> <td class="field_AwayDisplay" style=""> Newport Gwent Dragons </td> <td class="field_LogoTeamB" style=""> <img alt="Newport Gwent Dragons" height="50" src="http://cdn.soticservers.net/tools/images/teams/logos/50x50/19.png" width="50"/> </td> <td class="field_HA" style=""> H </td> <td class="field_OppositionDisplay" style=""> <br/> </td> <td class="field_ResScore" style=""> W 29-23 </td> <td class="field_VenName" style=""> Sportsground </td> <td class="field_BroadcastAttend" style=""> 3,624 </td> <td class="field_Links" style=""> <a href="/guinnesspro12/35435.php" onclick="clickpriority=1"> Report </a> </td></tr>This is my program so far:from httplib2 import Httpfrom bs4 import BeautifulSoup# create a "web object"h = Http()# Request the specified web pageresponse, content = 
h.request('http://www.irishrugby.ie/guinnesspro12/results_and_fixtures_pro_12_section.php')
# display the response status
print(response.status)
# display the text of the web page
print(content.decode())
soup = BeautifulSoup(content)
# check the response
if response.status == 200:
    #print(soup.get_text())
    rows = soup.find_all('tr')[1:-2]
    for row in rows:
        data = row.find_all('td')
        #print(data)
else:
    print('Unable to connect:', response.status)
    print(soup.get_text()) | Instead of finding all the <td> tags, you should be more specific. I would convert this:

for row in rows:
    data = row.find_all('td')

to this (looking up each cell by its class attribute):

for row in rows:
    home = row.find("td", attrs={"class": "field_HomeDisplay"})
    score = row.find("td", attrs={"class": "field_Score"})
    away = row.find("td", attrs={"class": "field_AwayDisplay"})
    print(home.get_text() + " " + score.get_text() + " " + away.get_text())
segment em with known data opencv I use the OpenCV EM algorithm to segment an image into 2 shapes. One shape is always inside the other. I want to use a known model of RGB colors: I have an input table of 30*3 values which are common colors for the background. How do I input this to EM? Should I calculate means and std and pass them to the constructor?

Python: cv2.EM.trainE(samples, means0[, covs0[, weights0[, logLikelihoods[, labels[, probs]]]]]) → retval, logLikelihoods, labels, probs
Python: cv2.EM.trainM(samples, probs0[, logLikelihoods[, labels[, probs]]])

Thanks!! | You may use the cv2.EM.trainE interface and provide the algorithm with your initial 30x3 values as the means0 input argument.
Why isn't a class's __new__ method in its __dict__? Brief context: I'm attempting to edit a class's default arguments to its __new__ method. I need access to the method, and I was attempting to get access in the same way I accessed its other methods - through its __dict__. But here, we can see that its __new__ method isn't in its __dict__. Is this related to __new__ being a static method? If so, why aren't those in a class's __dict__? Where are they stored in the object model?

class A(object):
    def __new__(cls, a):
        print(a)
        return object.__new__(cls)
    def f(a):
        print(a)

In [12]: A.__dict__['f']
Out[12]: <function __main__.A.f>

In [13]: A.__dict__['__new__']
Out[13]: <staticmethod at 0x103a6a128>

In [14]: A.__new__
Out[14]: <function __main__.A.__new__>

In [16]: A.__dict__['__new__'] == A.__new__
Out[16]: False

In [17]: A.__dict__['f'] == A.f
Out[17]: True | A.__dict__['__new__'] is the staticmethod descriptor, whereas A.__new__ is the actual underlying function. See https://docs.python.org/2/howto/descriptor.html#static-methods-and-class-methods

If you need to call the function, or get it by using a string (at runtime), use getattr(A, '__new__'):

>>> A.__new__
<function A.__new__ at 0x02E69618>
>>> getattr(A, '__new__')
<function A.__new__ at 0x02E69618>

Python 3.5.1:

class A(object):
    def __new__(cls, a):
        print(a)
        return object.__new__(cls)
    def f(a):
        print(a)

>>> A.__dict__['__new__']
<staticmethod object at 0x02E66B70>
>>> A.__new__
<function A.__new__ at 0x02E69618>
>>> object.__new__
<built-in method __new__ of type object at 0x64EC98E8>
>>> A.__new__(A, 'hello')
hello
<__main__.A object at 0x02E73BF0>
>>> A.__dict__['__new__'](A, 'hello')
Traceback (most recent call last):
  File "<pyshell#7>", line 1, in <module>
TypeError: 'staticmethod' object is not callable
>>> getattr(A, '__new__')
<function A.__new__ at 0x02E69618>
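A short sketch of my own making the descriptor relationship explicit: the implicit staticmethod wrapper stored in __dict__ holds the very function that ordinary attribute access hands back.

```python
class A:
    def __new__(cls, a):
        return object.__new__(cls)

wrapper = A.__dict__['__new__']       # the implicit staticmethod descriptor
print(type(wrapper).__name__)         # staticmethod

# the descriptor wraps the same function object that A.__new__ resolves to
print(wrapper.__func__ is A.__new__)  # True

# invoking the descriptor protocol by hand yields that function too
print(wrapper.__get__(None, A) is A.__new__)  # True
```

So attribute access is just __dict__ lookup plus the descriptor's __get__, which unwraps the staticmethod.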
Issue running advertools crawler I'm a relative newbie to Python. I've been using advertools; however, I've run into the following error.

import advertools as adv
adv.crawl('https://sandpipercomms.com', 'my_output_file.jl', follow_links=True)
import pandas as pd
crawl_df = pd.read_json('my_output_file.jl', lines=True)

Traceback (most recent call last):
  File "c:\users\tom\mu_code\vampire.py", line 2, in <module>
    adv.crawl('https://sandpipercomms.com', 'my_output_file.jl', follow_links=True)
  File "C:\Users\Tom\AppData\Local\python\mu\mu_venv-38-20220808-225806\lib\site-packages\advertools\spider.py", line 971, in crawl
    subprocess.run(command)
  File "C:\Users\Tom\AppData\Local\Programs\Mu Editor\Python\lib\subprocess.py", line 493, in run
    with Popen(*popenargs, **kwargs) as process:
  File "C:\Users\Tom\AppData\Local\Programs\Mu Editor\Python\lib\subprocess.py", line 858, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "C:\Users\Tom\AppData\Local\Programs\Mu Editor\Python\lib\subprocess.py", line 1311, in _execute_child
    hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
FileNotFoundError: [WinError 2] The system cannot find the file specified

I'm currently running Windows 10 and Python 3, and recently installed Julia. Any suggestions on what the issue might be would be appreciated. Cheers | The code is correct, and there may be issues in the setup of your machine. As a quick solution, you can run the same code from this notebook, and you can do all the following work there: https://colab.research.google.com/drive/1fXLx9dIBVBB5Due6VjDV947bsc7hii5x

Do you know how to set up a virtual environment? This might help isolate the issue and provide some insight, to understand the problem and come up with a solution. Hope this helps.
Python - can't extract data from statsmodel STL plot I produce the following plot using statsmodels STL:The output is displayed using matplotlib.pyplot.I would like to get the data from the lines but can't figure out how to extract them, even after trying the recommended solutions here.How can I 'extract' the underlying data for each of the 4 lines?I need to do this to actually use the output.Code: import pandas as pd import seaborn as sns import matplotlib.pyplot as plt from pandas.plotting import register_matplotlib_converters from statsmodels.tsa.seasonal import STL register_matplotlib_converters() sns.set_style('darkgrid') plt.rc('figure',figsize=(8,8)) plt.rc('font',size=13) raw = [ 315.58, 316.39, 316.79, 317.82, 318.39, 318.22, 316.68, 315.01, 314.02, 313.55, 315.02, 315.75, 316.52, 317.10, 317.79, 319.22, 320.08, 319.70, 318.27, 315.99, 314.24, 314.05, 315.05, 316.23, 316.92, 317.76, 318.54, 319.49, 320.64, 319.85, 318.70, 316.96, 315.17, 315.47, 316.19, 317.17, 318.12, 318.72, 319.79, 320.68, 321.28, 320.89, 319.79, 317.56, 316.46, 315.59, 316.85, 317.87, 318.87, 319.25, 320.13, 321.49, 322.34, 321.62, 319.85, 317.87, 316.36, 316.24, 317.13, 318.46, 319.57, 320.23, 320.89, 321.54, 322.20, 321.90, 320.42, 318.60, 316.73, 317.15, 317.94, 318.91, 319.73, 320.78, 321.23, 322.49, 322.59, 322.35, 321.61, 319.24, 318.23, 317.76, 319.36, 319.50, 320.35, 321.40, 322.22, 323.45, 323.80, 323.50, 322.16, 320.09, 318.26, 317.66, 319.47, 320.70, 322.06, 322.23, 322.78, 324.10, 324.63, 323.79, 322.34, 320.73, 319.00, 318.99, 320.41, 321.68, 322.30, 322.89, 323.59, 324.65, 325.30, 325.15, 323.88, 321.80, 319.99, 319.86, 320.88, 322.36, 323.59, 324.23, 325.34, 326.33, 327.03, 326.24, 325.39, 323.16, 321.87, 321.31, 322.34, 323.74, 324.61, 325.58, 326.55, 327.81, 327.82, 327.53, 326.29, 324.66, 323.12, 323.09, 324.01, 325.10, 326.12, 326.62, 327.16, 327.94, 329.15, 328.79, 327.53, 325.65, 323.60, 323.78, 325.13, 326.26, 326.93, 327.84, 327.96, 329.93, 330.25, 329.24, 
328.13, 326.42, 324.97, 325.29, 326.56, 327.73, 328.73, 329.70, 330.46, 331.70, 332.66, 332.22, 331.02, 329.39, 327.58, 327.27, 328.30, 328.81, 329.44, 330.89, 331.62, 332.85, 333.29, 332.44, 331.35, 329.58, 327.58, 327.55, 328.56, 329.73, 330.45, 330.98, 331.63, 332.88, 333.63, 333.53, 331.90, 330.08, 328.59, 328.31, 329.44, 330.64, 331.62, 332.45, 333.36, 334.46, 334.84, 334.29, 333.04, 330.88, 329.23, 328.83, 330.18, 331.50, 332.80, 333.22, 334.54, 335.82, 336.45, 335.97, 334.65, 332.40, 331.28, 330.73, 332.05, 333.54, 334.65, 335.06, 336.32, 337.39, 337.66, 337.56, 336.24, 334.39, 332.43, 332.22, 333.61, 334.78, 335.88, 336.43, 337.61, 338.53, 339.06, 338.92, 337.39, 335.72, 333.64, 333.65, 335.07, 336.53, 337.82, 338.19, 339.89, 340.56, 341.22, 340.92, 339.26, 337.27, 335.66, 335.54, 336.71, 337.79, 338.79, 340.06, 340.93, 342.02, 342.65, 341.80, 340.01, 337.94, 336.17, 336.28, 337.76, 339.05, 340.18, 341.04, 342.16, 343.01, 343.64, 342.91, 341.72, 339.52, 337.75, 337.68, 339.14, 340.37, 341.32, 342.45, 343.05, 344.91, 345.77, 345.30, 343.98, 342.41, 339.89, 340.03, 341.19, 342.87, 343.74, 344.55, 345.28, 347.00, 347.37, 346.74, 345.36, 343.19, 340.97, 341.20, 342.76, 343.96, 344.82, 345.82, 347.24, 348.09, 348.66, 347.90, 346.27, 344.21, 342.88, 342.58, 343.99, 345.31, 345.98, 346.72, 347.63, 349.24, 349.83, 349.10, 347.52, 345.43, 344.48, 343.89, 345.29, 346.54, 347.66, 348.07, 349.12, 350.55, 351.34, 350.80, 349.10, 347.54, 346.20, 346.20, 347.44, 348.67 ] co2 = pd.Series(raw, index=pd.date_range('1-1-1959', periods=len(raw), freq='M'), name = 'co2') co2 = co2.interpolate(method='spline', order=3) stl = STL(co2, period = 12, seasonal=13) stl.fit().plot() plt.show() | I don't have much experience with this, but I looked it up and found the following SO answers You can get it with the following code.import statsmodels.api as smres = sm.tsa.seasonal_decompose(co2, freq=12)trend = res.trend1959-01-31 NaN1959-02-28 NaN1959-03-31 NaN1959-04-30 
317.124286
1959-05-31    317.042857
                 ...
1987-08-31    348.374286
1987-09-30    347.992857
1987-10-31           NaN
1987-11-30           NaN
1987-12-31           NaN
Freq: M, Name: trend, Length: 348, dtype: float64

seasonal = res.seasonal

1959-01-31   -0.108146
1959-02-28    0.534131
1959-03-31    1.314622
1959-04-30    2.408149
1959-05-31    2.920247
                ...
1987-08-31   -1.174813
1987-09-30   -2.912923
1987-10-31   -3.174024
1987-11-30   -2.027476
1987-12-31   -0.964634
Freq: M, Name: seasonal, Length: 348, dtype: float64

residual = res.resid

1959-01-31   NaN
1959-02-28   NaN
1959-03-31   NaN
1959-04-30   NaN
1959-05-31   NaN
              ..
1987-08-31   NaN
1987-09-30   NaN
1987-10-31   NaN
1987-11-30   NaN
1987-12-31   NaN
Freq: M, Name: resid, Length: 348, dtype: float64

observed = res.observed

1959-01-31    315.58
1959-02-28    316.39
1959-03-31    316.79
1959-04-30    317.82
1959-05-31    318.39
               ...
1987-08-31    347.54
1987-09-30    346.20
1987-10-31    346.20
1987-11-30    347.44
1987-12-31    348.67
Freq: M, Name: co2, Length: 348, dtype: float64
Python and pylint in VSCode On my VSCode editor I run a venv on conda. The Python version in the venv is 3.8. Importing the OpenCV package as

import cv2

spits out a pylint error like

Module 'cv2' has no 'xyz' member

But importing the package using from cv2 import cv2 runs perfectly well. Why is that, and what is the way to correct that permanently in VSCode on my Ubuntu machine? | According to the information you provided, I installed the module "opencv" on my computer, and VSCode did not display the pylint error when using it. The way I installed the module "opencv": pip install opencv-python

My settings.json:

{
    "terminal.integrated.shell.windows": "C:\\windows\\System32\\cmd.exe",
    "workbench.iconTheme": "vscode-icons",
    "files.autoSave": "afterDelay",
    "files.autoSaveDelay": 1000,
    "python.linting.enabled": true,
    "python.linting.pylintEnabled": true,
    "python.languageServer": "Pylance",
}

Reference: Opencv-python.
PermissionError: [Errno 13] Permission denied on mac When I try to run the code below, the Mac refuses the connection.

from http.server import BaseHTTPRequestHandler, HTTPServer

class RequestHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        message = "Welcome to COE 550!"
        self.protocol_version = "HTTP/1.1"
        self.send_response(200)
        self.send_header("Content-Length", len(message))
        self.end_headers()
        self.wfile.write(bytes(message, "utf8"))
        return

server = ('localhost', 80)
httpd = HTTPServer(server, RequestHandler)
httpd.serve_forever()

The output message is

PermissionError: [Errno 13] Permission denied | Port 80 is considered a privileged port (TCP/IP port numbers below 1024), so the process using it must be owned by root. When you run a server as a test from a non-privileged account, you have to test it on other ports, such as 2784, 5000, 8001 or 8080. You could either run the python process as root, or use any non-privileged port to fix this issue:

server = ('localhost', 8001)
httpd = HTTPServer(server, RequestHandler)
httpd.serve_forever()
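A quick illustration of my own, not from the answer: asking the OS for port 0 yields a free ephemeral port, which always lies outside the privileged range, so binding it never needs root.

```python
import socket

# port 0 means "give me any free port"; the OS hands back an ephemeral one
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.bind(("localhost", 0))
    port = s.getsockname()[1]
    print(port >= 1024)  # True: ephemeral ports are non-privileged
```

This is handy in tests, where you rarely care which port the server gets as long as you can read it back.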
Import python from sibling folder without -m or syspath hacks So I've spent the last three days trying to figure out a workable solution to this problem with imports. I have a subfolder in my project with scripts for database control, which has sibling folders that would like to call it. I have tried many online solutions but couldn't find anything that works properly. It seems some changes in Python 3.3/3.4 nullify a lot of solutions, or something. So I made a very simple test case:

IMPORTS/
├─ folder1/
│  ├─ script1.py
│  ├─ __init__.py
├─ folder2/
│  ├─ script2.py
│  ├─ __init__.py
├─ __init__.py

How do I, from script1.py, call a function inside script2.py? | I generally prefer to install my module as a dependency so I can import from the project root. This seems to be the correct approach, though I've rarely seen it talked about online. E.g. from IMPORTS you would run pip install -e . (install the package in this folder in editable mode). This will require that you have a setup.py:

from setuptools import setup, find_packages

setup(
    name='IMPORTS',
    version='x.x.x',
    description='What the package does.',
    author='Your Name',
    author_email='[email protected]',
    install_requires=[],
    packages=find_packages()
)

Here is an example from one of my personal packages. Then you can import from the root folder (where setup.py is). Following your example:

from folder1 import script1

Or vice versa. In summary:

1. Write a setup.py.
2. Install your package in editable mode with pip install -e .
3. Write import statements from the package root.
Problem using itertools and zip in combination to create dictionary from two lists of different lengths I want the keys to repeat the same way in each dictionary, i.e. start from A and go till E. But it seems itertools.cycle is skipping one every time it cycles over. I also want the values to follow the order in the list (i.e. start from 1 in the first dictionary and end with 15 in the last dictionary). Please see the code below:

import itertools

allKeys = ['A','B','C','D','E']
a = [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]
g = itertools.cycle(allKeys)
b = []
for i in range(3):
    dishDict = dict(zip(g, a))
    b.append(dishDict)
b

Generates:

[{'A': 11, 'B': 12, 'C': 13, 'D': 14, 'E': 15},
 {'B': 11, 'C': 12, 'D': 13, 'E': 14, 'A': 15},
 {'C': 11, 'D': 12, 'E': 13, 'A': 14, 'B': 15}]

As you see, keys in the second dictionary start from B (instead of A, as I would like). Also, the values are the same in all three dictionaries in the list. This is what I want the output to look like:

[{'A': 1, 'B': 2, 'C': 3, 'D': 4, 'E': 5},
 {'A': 6, 'B': 7, 'C': 8, 'D': 9, 'E': 10},
 {'A': 11, 'B': 12, 'C': 13, 'D': 14, 'E': 15}]

I'd really appreciate it if someone could shed some light on what's happening and what I should do to fix it. I have already spent quite a bit of time trying to solve it myself and also checked the documentation on itertools.cycle, but haven't been able to figure it out yet. | For the required output, you don't need cycle(); wrap the values in a single iterator instead, so each zip picks up where the previous one stopped:

allKeys = ['A','B','C','D','E']
a = [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]
it = iter(a)
b = []
for i in range(3):
    dishDict = dict(zip(allKeys, it))
    b.append(dishDict)
print(b)

Prints:

[{'A': 1, 'B': 2, 'C': 3, 'D': 4, 'E': 5}, {'A': 6, 'B': 7, 'C': 8, 'D': 9, 'E': 10}, {'A': 11, 'B': 12, 'C': 13, 'D': 14, 'E': 15}]
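The same single-iterator trick can be written as a one-liner with the classic zip(*[iter(a)]*n) grouper idiom. This is my own equivalent sketch, assuming len(a) is an exact multiple of len(allKeys):

```python
allKeys = ['A', 'B', 'C', 'D', 'E']
a = list(range(1, 16))

# zip receives five references to ONE iterator, so each output tuple
# consumes the next five values of `a` in order
b = [dict(zip(allKeys, chunk)) for chunk in zip(*[iter(a)] * len(allKeys))]
print(b[0])  # {'A': 1, 'B': 2, 'C': 3, 'D': 4, 'E': 5}
```

It works for the same reason the answer's loop does: all the zip arguments share one iterator, so values are never re-read from the start.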
Multiple bitrates HLS with ffmpeg-python I am currently using the ffmpeg-python library to convert a .mp4 video into HLS format, with output looking like this:

ffmpeg.output(
    mp4_input,
    m3u8_name,
    format='hls',
    start_number=0,
    hls_time=5,
    hls_list_size=0,
),

How do I make ffmpeg-python output HLS in multiple bitrates and create a master playlist for them? | Actually, you can achieve the same without ffmpeg-python. I'm the creator of the VidGear video processing Python project, which contains the StreamGear API for this very purpose. The example code is as follows:

# import required libraries
from vidgear.gears import StreamGear

# activate Single-Source Mode and also define various streams
stream_params = {
    "-video_source": "foo.mp4",
    "-streams": [
        {"-resolution": "1920x1080", "-video_bitrate": "4000k"},  # Stream1: 1920x1080 at 4000kbs bitrate
        {"-resolution": "1280x720", "-framerate": 30.0},          # Stream2: 1280x720 at 30fps framerate
        {"-resolution": "640x360", "-framerate": 60.0},           # Stream3: 640x360 at 60fps framerate
        {"-resolution": "320x240", "-video_bitrate": "500k"},     # Stream4: 320x240 at 500kbs bitrate
    ],
}

# describe a suitable master playlist location/name and assign params
streamer = StreamGear(output="hls_out.m3u8", format="hls", **stream_params)

# transcode source
streamer.transcode_source()

# terminate
streamer.terminate()

and that's it. Good luck!
Validation loss curve is flat and training loss curve is higher than validation loss curve I'm building an LSTM model for a prediction scenario. My dataset has around 248,000 pieces of data; I use 24,000 (around 10%) as the validation set, and the rest as the training set. My model's learning curve is the following: learning curve. The validation error is always 0.00002 from scratch, and the training error decreased to 0.013533 at epoch 20. I've read this carefully: https://machinelearningmastery.com/learning-curves-for-diagnosing-machine-learning-model-performance/ Is my validation set unrepresentative? Is the solution to use a larger validation set? | It might be that, first, your underlying concept is very simple, which leads to extremely low validation error early on. Second, your data augmentation makes it harder to learn, which yields higher training error. Still, I would run a couple of experiments in your case. First: divide the data as 10/90 instead of 90/10 and see how your validation error changes then - hopefully, you would see some sort of a curve across (now shorter and harder) epochs. Second, I would run validation before training (or after an epoch of 1 batch) to produce a random-baseline result.
PyOWM installed but not recognized? Disclaimer - I am quite new to Python. I wanted to use the OWM API to make a simple Python weather program. I found some guides to using this key on the web, and they said to use the PyOWM library. I DuckDuckGoed how to install it and I downloaded pip. I put it in C:/pip and tried to run 'python get-pip.py' (yes, I was in the directory in CMD). It didn't work, and it sent me to the Microsoft Store page for Python. I installed it (even though I had the normal version installed) and tried again. Pip installed. I ran pip install pyowm and it installed. Everything seemed fine. When I went back into PyCharm, it wouldn't work. This is the code from the tutorial I am watching:

import pyowm

owm = pyowm.OWM('<api_key>')  # TODO: Replace <api_key> with your API key
la = owm.three_hours_forecast('Los Angeles, US')
print(la.will_have_clouds())

Any ideas? | In PyCharm, you have to install your library in the project interpreter. In PyCharm, go to File -> Settings -> Project:test (in my case, "test" is my project name) -> select Project Interpreter -> click the add button. After clicking the add button, search for pyowm, then install it.
SSL Error: Python Multiprocessing, PostgreSQL and Psycopg2 How can I call this with more than one process? The code below works fine for processes = 1.

Definition:

def origin_and_url_from_url(url):
    ori_url = url.strip()
    cursor = connection.cursor()
    cursor.execute("SELECT DISTINCT url, id FROM origin where url = %s", [ori_url])
    rows = cursor.fetchall()
    cursor.close()
    for a_row in rows:
        with open('TopListwith100CardsWithID.csv', 'a') as file:
            file.write(str(a_row[1]) + ", ")
            file.write(str(a_row[0]))
            file.write('\n')

Call:

with open('SEMethodologies/TopListwith100Cards.csv', 'r') as f:
    reader = csv.reader(f)
    top_list = list(reader)

p = multiprocessing.Pool(1, initializer, ())
logger.info("Pool Started for ids")
results = p.starmap(origin_and_url_from_url, top_list)
print(results)
p.close()

But if I change this line to p = multiprocessing.Pool(2, initializer, ()) for two processes, it shows this error: psycopg2.OperationalError: SSL error: decryption failed or bad record mac | I had a very pesky bug that sounds similar - my service would restart with this error:

Corruption detected. Cipher functions: OPENSSL_internal:BAD_DECRYPT
routines: OPENSSL_internal:DECRYPTION_FAILED_OR_BAD_RECORD_MAC
Decryption error: TSI_DATA_CORRUPTED

Running a Gunicorn service in Google Cloud with a Postgres DB. I ended up debugging a lot of multiprocessing configurations, Postgres settings, etc. The thing that fixed it for me was freezing the grpcio python package at 1.29.0, based on this answer, which says that the decryption failure error happens after gRPC v1.3.x.
Using asyncio in python to run two infinitely running functions

I am trying to run two infinitely looping functions concurrently and will later implement this into a socket chatroom application for each client that is connected to my server. The problem is, whenever the function that I am trying to gather is run in an infinite while loop, my program will only run the first function that is gathered. Here is my code:

async def increment():
    global money
    while True:
        money += 1

async def displayMoney():
    global money
    while True:
        input(money)

async def main():
    global money
    await asyncio.gather(increment(), displayMoney())

asyncio.run(main())

I am new to asynchronous programming, apologies.
 | 
If you add await asyncio.sleep(0) at the end of the loop, it allows the loops to give each other time to run. However, this means you cannot run anything that blocks the event loop, such as time.sleep(1) or input(), like I was trying to do. This is fine though, as I do not need any of these in my main program since it uses a tkinter GUI.
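A bounded version of the fix makes the cooperation visible: each coroutine yields with `await asyncio.sleep(0)`, and the blocking `input()` is replaced by appending to a list so the sketch terminates and can be run as-is:

```python
import asyncio

money = 0
seen = []

async def increment(n):
    global money
    for _ in range(n):
        money += 1
        # Yield control so the other coroutine gets a turn.
        await asyncio.sleep(0)

async def display(n):
    for _ in range(n):
        seen.append(money)  # stand-in for the blocking input()/print
        await asyncio.sleep(0)

async def main():
    await asyncio.gather(increment(5), display(5))

asyncio.run(main())
print(money, seen)
```

Without the `sleep(0)` calls, `increment` would never give the event loop a chance to schedule `display`, which is exactly the symptom in the question.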
Timeout expired pgadmin Unable to connect to server

I am following the step-by-step instructions to create a simple server in pgAdmin from this link: https://www.postgresqltutorial.com/connect-to-postgresql-database/. Please check the picture. What am I doing wrong? I installed pgAdmin on my macOS but I don't see why I am getting this error. Please help.
 | 
It's an issue with AWS inbound rules, not pgAdmin. Follow this guide to solve it. It works.
Python "ModuleNotFoundError:", but module does appear to be installed per Command Prompt

I am very new to Python/programming, having recently installed Python 3.10. I have already installed the Openpyxl module, i.e. when I check on CMD I get this:

C:\Users\hadam>pip install openpyxl
Requirement already satisfied: openpyxl in c:\users\hadam\appdata\local\programs\python\python310\lib\site-packages (3.0.9)
Requirement already satisfied: et-xmlfile in c:\users\hadam\appdata\roaming\python\python310\site-packages (from openpyxl) (1.1.0)

I am trying to run some code which I have just copied from here (i.e. I have just edited the file path names): https://www.geeksforgeeks.org/python-how-to-copy-data-from-one-excel-sheet-to-another/

However, when I try to run this script (via the Mu editor), I get the following error message:

Traceback (most recent call last):
  File "c:\users\hadam\appdata\local\programs\python\python310\scripts\test1.py", line 2, in <module>
    import openpyxl as xl;
ModuleNotFoundError: No module named 'openpyxl'

Can anyone tell me why the Mu editor cannot find Openpyxl, or what I can do to execute this programme? Thanks
 | 
Try to open python from the command line, e.g.

C:\users\you> python

or

C:\users\you> python3

or

C:\users\you> path\to\python

then, when python is open:

>>> import openpyxl as xl

If the problem is not present anymore, your Mu editor might be using a different python interpreter/environment: check for its configuration and change it to the one you opened from the terminal.
Saving images in a loop faster than multithreading / multiprocessing

Here's a timed example of multiple image arrays of different sizes being saved in a loop as well as concurrently using threads / processes:

import tempfile
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor, as_completed
from pathlib import Path
from time import perf_counter

import numpy as np
from cv2 import cv2


def save_img(idx, image, dst):
    cv2.imwrite((Path(dst) / f'{idx}.jpg').as_posix(), image)


if __name__ == '__main__':
    l1 = np.random.randint(0, 255, (100, 50, 50, 1))
    l2 = np.random.randint(0, 255, (1000, 50, 50, 1))
    l3 = np.random.randint(0, 255, (10000, 50, 50, 1))
    temp_dir = tempfile.mkdtemp()
    workers = 4
    t1 = perf_counter()
    for ll in l1, l2, l3:
        t = perf_counter()
        for i, img in enumerate(ll):
            save_img(i, img, temp_dir)
        print(f'Time for {len(ll)}: {perf_counter() - t} seconds')
        for executor in ThreadPoolExecutor, ProcessPoolExecutor:
            with executor(workers) as ex:
                futures = [
                    ex.submit(save_img, i, img, temp_dir)
                    for (i, img) in enumerate(ll)
                ]
                for f in as_completed(futures):
                    f.result()
            print(
                f'Time for {len(ll)} ({executor.__name__}): {perf_counter() - t} seconds'
            )

And I get these durations on my i5 MBP:

Time for 100: 0.09495482999999982 seconds
Time for 100 (ThreadPoolExecutor): 0.14151873999999998 seconds
Time for 100 (ProcessPoolExecutor): 1.5136184309999998 seconds
Time for 1000: 0.36972280300000016 seconds
Time for 1000 (ThreadPoolExecutor): 0.619205703 seconds
Time for 1000 (ProcessPoolExecutor): 2.016624468 seconds
Time for 10000: 4.232915643999999 seconds
Time for 10000 (ThreadPoolExecutor): 7.251599262 seconds
Time for 10000 (ProcessPoolExecutor): 13.963426469999998 seconds

Aren't threads / processes expected to require less time to achieve the same thing? And why not in this case?
 | 
The timings in the code are wrong because the timer t is not reset before testing the pools. Nevertheless, the relative order of the timings is correct.
A possible version of the code with a timer reset is:

import tempfile
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor, as_completed
from pathlib import Path
from time import perf_counter

import numpy as np
from cv2 import cv2


def save_img(idx, image, dst):
    cv2.imwrite((Path(dst) / f'{idx}.jpg').as_posix(), image)


if __name__ == '__main__':
    l1 = np.random.randint(0, 255, (100, 50, 50, 1))
    l2 = np.random.randint(0, 255, (1000, 50, 50, 1))
    l3 = np.random.randint(0, 255, (10000, 50, 50, 1))
    temp_dir = tempfile.mkdtemp()
    workers = 4
    for ll in l1, l2, l3:
        t = perf_counter()
        for i, img in enumerate(ll):
            save_img(i, img, temp_dir)
        print(f'Time for {len(ll)}: {perf_counter() - t} seconds')
        for executor in ThreadPoolExecutor, ProcessPoolExecutor:
            t = perf_counter()
            with executor(workers) as ex:
                futures = [
                    ex.submit(save_img, i, img, temp_dir)
                    for (i, img) in enumerate(ll)
                ]
                for f in as_completed(futures):
                    f.result()
            print(
                f'Time for {len(ll)} ({executor.__name__}): {perf_counter() - t} seconds'
            )

Multithreading is faster especially for I/O-bound processes. In this case, compressing the images is CPU-intensive, so depending on the implementation of OpenCV and of the python wrapper, multithreading can be much slower. In many cases the culprit is CPython's GIL, but I am not sure if this is the case (I do not know if the GIL is released during the imwrite call). In my setup (i7 8th gen), threading is as fast as the loop for 100 images and barely faster for 1000 and 10000 images. If ThreadPoolExecutor reuses threads, there is an overhead involved in assigning a new task to an existing thread. If it does not reuse threads, there is an overhead involved in launching a new thread.

Multiprocessing circumvents the GIL issue, but has some other problems. First, pickling the data to pass between processes takes some time, and in the case of images it can be very expensive. Second, in the case of Windows, spawning a new process takes a lot of time.
A simple test to see the overhead (both for processes and threads) is to replace the save_img function with one that does nothing, but still needs pickling, etc.:

def save_img(idx, image, dst):
    if idx != idx:
        print("impossible!")

and with a similar one without parameters to see the overhead of spawning the processes, etc. The timings in my setup show that 2.3 seconds are needed just to spawn the 10000 processes and 0.6 extra seconds for pickling, which is much more than the time needed for processing.

A way to improve the throughput and keep the overhead to a minimum is to break the work into chunks, and submit each chunk to a worker:

import tempfile
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor, as_completed
from pathlib import Path
from time import perf_counter

import numpy as np
from cv2 import cv2


def save_img(idx, image, dst):
    cv2.imwrite((Path(dst) / f'{idx}.jpg').as_posix(), image)


def multi_save_img(idx_start, images, dst):
    for idx, image in zip(range(idx_start, idx_start + len(images)), images):
        cv2.imwrite((Path(dst) / f'{idx}.jpg').as_posix(), image)


if __name__ == '__main__':
    l1 = np.random.randint(0, 255, (100, 50, 50, 1))
    l2 = np.random.randint(0, 255, (1000, 50, 50, 1))
    l3 = np.random.randint(0, 255, (10000, 50, 50, 1))
    temp_dir = tempfile.mkdtemp()
    workers = 4
    for ll in l1, l2, l3:
        t = perf_counter()
        for i, img in enumerate(ll):
            save_img(i, img, temp_dir)
        print(f'Time for {len(ll)}: {perf_counter() - t} seconds')
        chunk_size = len(ll) // workers
        ends = [chunk_size * (_ + 1) for _ in range(workers)]
        ends[-1] += len(ll) % workers
        starts = [chunk_size * _ for _ in range(workers)]
        for executor in ThreadPoolExecutor, ProcessPoolExecutor:
            t = perf_counter()
            with executor(workers) as ex:
                futures = [
                    ex.submit(multi_save_img, start, ll[start:end], temp_dir)
                    for (start, end) in zip(starts, ends)
                ]
                for f in as_completed(futures):
                    f.result()
            print(
                f'Time for {len(ll)} ({executor.__name__}): {perf_counter() - t} seconds'
            )

This should give you a significant
boost over a simple for, both for a multiprocessing and multithreading approach. |
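The start/end arithmetic in the chunked version above can be factored into a small helper, which also makes it easy to check that the last chunk absorbs the remainder:

```python
def chunk_bounds(n_items, workers):
    """Split range(n_items) into `workers` contiguous (start, end) chunks.

    The last chunk absorbs the remainder, matching the answer's
    `ends[-1] += n_items % workers` logic.
    """
    size = n_items // workers
    starts = [size * i for i in range(workers)]
    ends = [size * (i + 1) for i in range(workers)]
    ends[-1] += n_items % workers
    return list(zip(starts, ends))

print(chunk_bounds(10, 4))  # -> [(0, 2), (2, 4), (4, 6), (6, 10)]
```

With 10 items and 4 workers, the first three workers get 2 items each and the last one gets 4, so every index is covered exactly once.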
How to convert to log base 2?

How can I convert the following code to log base 2?

df["col1"] = df["Target"].map(lambda i: np.log(i) if i > 0 else 0)
 | 
I think you just want to use np.log2 instead of np.log.
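The same identity holds in the standard library: log base 2 of x equals log(x) / log(2), and `math.log2` computes it directly. A quick check:

```python
import math

values = [1, 2, 8, 1024]
# log2(x) and log(x)/log(2) agree (up to floating-point rounding).
for v in values:
    assert math.isclose(math.log2(v), math.log(v) / math.log(2))

print([math.log2(v) for v in values])  # -> [0.0, 1.0, 3.0, 10.0]
```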
Get cumulative sum Pandas conditional on other column

I want to create a column that shows the cumulative count (rolling sum) of previous purchases (per customer) that took place in department 99. My data frame looks like this, where each row is a separate transaction:

   id     chain  dept  category  company     brand  date        productsize  productmeasure  purchasequantity  purchaseamount  sale
0  86246  205    7     707       1078778070  12564  2012-03-02  12.00        OZ              1                 7.59            268.90
1  86246  205    63    6319      107654575   17876  2012-03-02  64.00        OZ              1                 1.59            268.90
2  86246  205    97    9753      1022027929  0      2012-03-02  1.00         CT              1                 5.99            268.90
3  86246  205    25    2509      107996777   31373  2012-03-02  16.00        OZ              1                 1.99            268.90
4  86246  205    55    5555      107684070   32094  2012-03-02  16.00        OZ              2                 10.38           268.90
5  86246  205    97    9753      1021015020  0      2012-03-02  1.00         CT              1                 7.80            268.90
6  86246  205    99    9909      104538848   15343  2012-03-02  16.00        OZ              1                 2.49            268.90
7  86246  205    59    5907      102900020   2012   2012-03-02  16.00        OZ              1                 1.39            268.90
8  86246  205    9     921       101128414   9209   2012-03-02  4.00         OZ              2                 1.50            268.90

I did this:

shopdata6['transactions_99'] = 0
shopdata6['transactions_99'] = shopdata6[shopdata6['dept'] == 99].groupby(['id', 'dept'])['transaction_99'].cumsum()

Update:

id   dept  date   purchase  purchase_count_dept99(desired)
id1  199   date1  $10       0
id1  99    date1  $10       1
id1  100   date1  $50       1
id1  99    date2  $30       2
id2  100   date1  $10       0
id2  99    date1  $10       1
id3  99    date3  $10       1

Applied this:

shopdata6['transaction_99'] = np.where(shopdata6['dept']==99, 1, 0)
shopdata6['transaction_99'] = shopdata6.groupby(['id'])['transaction_99'].transform('cumsum')

The result does look okay, but is it correct?
 | 
Your code can be simplified:

s = (shopdata6['dept']==99).astype(int)
shopdata6['transaction_99'] = s.groupby(shopdata6['id']).cumsum()
print (shopdata6)

    id  dept   date purchase  purchase_count_dept99(desired)  transaction_99
0  id1   199  date1      $10                               0               0
1  id1    99  date1      $10                               1               1
2  id1   100  date1      $50                               1               1
3  id1    99  date2      $30                               2               2
4  id2   100  date1      $10                               0               0
5  id2    99  date1      $10                               1               1
6  id3    99  date3      $10                               1               1
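The grouped cumulative sum is easier to trust once you see the underlying idea in plain Python: turn the dept-99 condition into 0/1 flags and take a running sum. A small sketch (with made-up dept values for one customer):

```python
from itertools import accumulate

# One flag per transaction: 1 if the purchase was in dept 99, else 0.
depts = [199, 99, 100, 99]
flags = [1 if d == 99 else 0 for d in depts]

# Running count of dept-99 purchases up to and including each row.
print(list(accumulate(flags)))  # -> [0, 1, 1, 2]
```

The pandas version does exactly this, restarting the running sum for each `id` group.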
random error message popping up, I am very confused onto why this is happening?

numbers = range(1,10)
for number in numbers:
    if number == 1:
        print(number + "st")
    elif number == 2:
        print(number + "nd")
    elif number == 3:
        print(number + "rd")
    elif number:
        print(number + "th")

There is an unexpected error that keeps on popping up. It keeps on saying "unsupported operand type(s) for +: 'int' and 'str'". I tried changing some things but nothing seems to work! If you can possibly help me, please give me an answer. :)
 | 
In Python, strings can only be concatenated with other strings. You can't add a string and an integer. Instead, you would convert the integer to a string and then perform the concatenation, like so:

print(str(number) + "st")
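Applying that fix to the whole loop - wrapping the number in `str()` before each concatenation - gives a version that runs without the TypeError:

```python
def ordinal(number):
    # str() conversion avoids the "int + str" TypeError.
    if number == 1:
        return str(number) + "st"
    elif number == 2:
        return str(number) + "nd"
    elif number == 3:
        return str(number) + "rd"
    else:
        return str(number) + "th"

print([ordinal(n) for n in range(1, 10)])
# -> ['1st', '2nd', '3rd', '4th', '5th', '6th', '7th', '8th', '9th']
```

(Note this simple version is only correct for 1-9; 21, 22, 23, etc. would need extra handling.)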
How to add custom HTML elements and images to a blog in Django?

I am trying to create a blog in Django. Most of the tutorials and examples available show just retrieving some content from the database and displaying it dynamically in a predefined HTML structure. After looking at some solutions I found something called flatpages in Django, which provides the facility to write HTML. But it's recommended to use it for About Us and Contact Us kind of pages. Should I use this?

I want to be able to write my own HTML for each blog post and add images, so that the HTML structure is not the same for every post. For example, WordPress allows the user to completely write each part of the blog except the heading, and the structure of the HTML is not always constant. I want such functionality. Please help.
 | 
What you are looking for is to upload images and embed them as HTML in your content field. This can be done using a WYSIWYG editor such as CKEditor. In CKEditor you can write your text, format it and upload files. You could use django-ckeditor to do the heavy lifting for you: https://github.com/django-ckeditor/django-ckeditor

In your template you then have to render your content with the safe filter so that the content will be rendered as HTML:

{{ post.content|safe }}
splitting words into syllables python

I have a function called syllable_split(word_input) that receives a word, counts the number of syllables, and returns a list containing the syllables of the given word, e.g.:

pandemonium ----> ['pan', 'de', 'mo', 'ni', 'um']
self-righteously ---> ['self', 'right', 'eous', 'ly']
hello ---> ['hel', 'lo']
diet ----> ['di', 'et']
seven ---> ['sev', 'en']

My function counts the syllables correctly but I'm having trouble splitting the word into its corresponding syllables. I only managed to split the word at its first syllable, and even that tends not to work for some words. For example, for 'seven' I only get 'se' instead of 'sev'. I was thinking of following the syllable division patterns (vc/cv, c/cv, vc/v, v/v) but I'm having trouble implementing that in my function.

def syllable_split(word_input):
    count = 0
    word = word_input.lower()
    vowels = set("aeiou")
    syll = list()
    temp = 0
    for letter in word:
        if letter in vowels:
            count += 1
    if count == 1:
        return word
    for index in range(count, len(word)):
        if word[index] in vowels and word[index - 1] not in vowels:
            w = word[temp: index - 1]
            if len(w) != 0:
                syll.append(w)
            temp = index - 1
    return syll

user_input = input()
print(syllable_split(user_input))
 | 
While I agree with the comments that your approach will have many failings, if that's okay, then based on your implementation you could write a function that splits the words exactly how you describe:

vowels = 'AEIOU'
consts = 'BCDFGHJKLMNPQRSTVWXYZ'
consts = consts + consts.lower()
vowels = vowels + vowels.lower()

def is_vowel(letter):
    return letter in vowels

def is_const(letter):
    return letter in consts

# get the syllables for vc/cv
def vc_cv(word):
    segment_length = 4  # because this pattern needs four letters to check
    pattern = [is_vowel, is_const, is_const, is_vowel]  # functions above
    split_points = []
    # find where the pattern occurs (+ 1 so a match ending at the
    # last letter is not missed)
    for i in range(len(word) - segment_length + 1):
        segment = word[i:i + segment_length]
        # this will check that the four letters each match the vc/cv
        # pattern based on their position
        # if this is new to you, I made a small note about it below
        if all([fi(letter) for letter, fi in zip(segment, pattern)]):
            # integer division, so the result can be used as a slice index
            split_points.append(i + segment_length // 2)
    # use the indices to find the syllables - add 0 and len(word) to make it work
    split_points.insert(0, 0)
    split_points.append(len(word))
    syllables = []
    for i in range(len(split_points) - 1):
        start = split_points[i]
        end = split_points[i + 1]
        syllables.append(word[start:end])
    return syllables

word = 'vortex'
print(vc_cv(word))
# ['vor', 'tex']

You can do something similar for the other patterns; for example, c/cv will be pattern = [is_const, is_const, is_vowel] with a segment length of 3.

Note

You can put functions in a list:

def linear(x):
    return x

def squared(x):
    return x * x

def cubed(x):
    return x * x * x

funcs = [linear, squared, cubed]
numbers = [2, 2, 2]
transforms = [fi(ni) for ni, fi in zip(numbers, funcs)]
# results -> [2, 4, 8]
Print only the numbers in the string in python

I need to print only the numbers in a string and I don't know how to do it. I mean, for example, mystring = "ab543" - how do I get 543 as an int? I tried something like this:

my_string = "ab543"
numlst = ["0","1","2","3","4","5","6","7","8","9"]
countfinish = 0
whichnum = ""
for charr in my_string:
    for num in numlst:
        if num == charr:
            whichnum = whichnum + str(num)
            break
countfinish = countfinish + int(whichnum)
print(countfinish)
 | 
You can try:

>>> my_string = "ab543"
>>> "".join([str(s) for s in my_string if s.isdigit()])
'543'
>>> int("".join([str(s) for s in my_string if s.isdigit()]))
543

You can also use filter:

>>> my_string = "ab543"
>>> int(''.join(filter(str.isdigit, my_string)))
543
how i can open zoom window directly using python?

I used the subprocess library but it didn't work:

import subprocess
subprocess.Popen("C:\Users\STUDENT\AppData\Roaming\Zoom\bin\Zoom.exe")

It shows this error message:

SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 2-3: truncated \UXXXXXXXX escape
 | 
I would recommend first confirming that you can run Zoom from the command line using the specified path:

C:\Users\STUDENT\AppData\Roaming\Zoom\bin\Zoom.exe

My installation of Zoom on Windows 10 uses this path:

C:\Program Files (x86)\zoom\bin\Zoom.exe

If you can open Zoom from the command line (such as PowerShell or the cmd prompt), you should be able to open it with subprocess in python. The error message is likely caused by the path string, specifically PEP 8: W605 invalid escape sequence '\'. If you don't escape the backslashes, python incorrectly parses the string. Try:

import subprocess

def main():
    subprocess.Popen("C:\\Program Files (x86)\\zoom\\bin\\Zoom.exe")

if __name__ == '__main__':
    main()
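The SyntaxError itself comes from `\U` in `"C:\Users"` being parsed as the start of a unicode escape. Besides doubling the backslashes, a raw string or forward slashes avoid the problem; all three spellings below denote the same path (this sketch only builds the strings, it does not launch anything):

```python
# Three equivalent ways to write a Windows path without escape errors.
p1 = "C:\\Users\\STUDENT\\AppData\\Roaming\\Zoom\\bin\\Zoom.exe"   # escaped
p2 = r"C:\Users\STUDENT\AppData\Roaming\Zoom\bin\Zoom.exe"         # raw string
p3 = "C:/Users/STUDENT/AppData/Roaming/Zoom/bin/Zoom.exe"          # forward slashes

print(p1 == p2)  # -> True
```

Windows APIs accept forward slashes in most contexts, which is why the third form also works when passed to subprocess.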
Scrapy: ValueError: need more than 0 values to unpack

I am using scrapy to extract some data. Last time I had a problem in the line with the regex. The error message is like this one:

File "ProjetVinNicolas3\spiders\nicolas_spider3.py", line 70, in parse_wine_page
    classement, appelation, couleur = res.select('.//div[@class="pro_col_right"]/div[@class="pro_blk_trans"]/div[@class="pro_blk_trans_titre"]/text()').re(r'^(\d\w+\s*Vin)\S\s+(\w+-\w+|\w+)\S\s+(\w+)\s*$')
exceptions.ValueError: need more than 0 values to unpack

link program
 | 
The call to .re is returning a zero-length tuple. You cannot perform a sequence assignment to n variables using a sequence which is not of exactly length n.
python sqlite3 insert command

Any idea what I'm doing wrong? I'm creating a table called General:

conn = sqlite3.connect(self.dbLocation)
c = conn.cursor()
sql = "create table if not exists General (id integer NOT NULL,current char[20] NOT NULL,PRIMARY KEY (id))"
c.execute(sql)
c.close()
conn.close()

I'm then using max(id) to see if the table is empty. If it is, I create a table called Current1 and insert a row in General (id, 'Current1'). id is an autoincrementing integer:

self.currentDB = "Current1"
self.currentDBID = "1"
# create the table
sql = "create table %s (id integer NOT NULL,key char[90] NOT NULL,value float NOT NULL,PRIMARY KEY (id))" % (str(self.currentDB))
c.execute(sql)
c.close()
conn.close()

conn = sqlite3.connect(self.dbLocation)
c = conn.cursor()
sql = "insert into General(current) values('%s')" % (str(self.currentDB))
print "sql = %s" % (str(sql))    # ---> sql = insert into General(current) values('Current1')
c.execute(sql)
print "executed insert Current"
c.execute("select max(id) from General")
temp = c.next()[0]
print "temp = %s" % (str(temp))  # ---> temp = 1
c.close()
conn.close()

The problem is that if I open the database, I do not find any rows in the General table. The Current1 table is being created, but the insert statement into General does not seem to be doing anything. What am I doing wrong? Thanks.
 | 
You have to commit the changes before closing the connection:

conn.commit()

Check the example in the docs: http://docs.python.org/2/library/sqlite3.html
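A minimal end-to-end sketch of the pattern (using an in-memory database and, instead of `%` string formatting, the `?` parameter style the sqlite3 module provides):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.execute("CREATE TABLE General (id INTEGER PRIMARY KEY, current TEXT NOT NULL)")
c.execute("INSERT INTO General(current) VALUES (?)", ("Current1",))
conn.commit()  # without this, a file-based database would show no rows

c.execute("SELECT max(id) FROM General")
max_id = c.fetchone()[0]
print(max_id)  # -> 1
conn.close()
```

The in-memory database makes the example self-contained; with a file-based database the commit is what makes the inserted row visible to other connections and tools.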
tensorflow dataset from_generator() out of range error

I'm trying to use tf.data.Dataset.from_generator() to generate training and validation data. I have my own data generator which does feature preparation on the fly:

def data_iterator(self, input_file_list, ...):
    for f in input_file_list:
        X, y = get_feature(f)
        yield X, y

Initially I was feeding this directly to a tensorflow keras model but I encountered a data out of range error after the first batch. Then I decided to wrap this within a tensorflow data generator:

train_gen = lambda: data_iterator(train_files, ...)
valid_gen = lambda: data_iterator(valid_files, ...)
output_types = (tf.float32, tf.float32)
output_shapes = (tf.TensorShape([499, 13]), tf.TensorShape([2]))
train_dat = tf.data.Dataset.from_generator(train_gen, output_types=output_types, output_shapes=output_shapes)
valid_dat = tf.data.Dataset.from_generator(valid_gen, output_types=output_types, output_shapes=output_shapes)
train_dat = train_dat.repeat().batch(batch_size=128)
valid_dat = valid_dat.repeat().batch(batch_size=128)

Then fit:

model.fit(x=train_dat, validation_data=valid_dat, steps_per_epoch=train_steps, validation_steps=valid_steps, epochs=100, callbacks=callbacks)

However, I'm still getting the error despite having .repeat() in the pipeline:

BaseCollectiveExecutor::StartAbort Out of range: End of sequence

My questions are:

why is .repeat() not working here?
should I add a while True in my own iterator to avoid this? I feel like this can fix it but it doesn't look like the proper way of doing it.
 | 
I added a while True in my own generator so that it never runs out, and I'm not getting the error any more:

def data_iterator(self, input_file_list, ...):
    while True:
        for f in input_file_list:
            X, y = get_feature(f)
            yield X, y

However, I don't know why .repeat() is not working for .from_generator().
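The `while True` wrapper can be demonstrated without TensorFlow at all: it just restarts iteration over the file list whenever it is exhausted, which is the behavior `.repeat()` is expected to provide. A small sketch with made-up file names:

```python
from itertools import islice

def data_iterator(files):
    # Restart from the first file whenever the list is exhausted,
    # mimicking what tf.data's .repeat() is meant to do.
    while True:
        for f in files:
            yield f

stream = data_iterator(["a.wav", "b.wav", "c.wav"])
print(list(islice(stream, 7)))
# -> ['a.wav', 'b.wav', 'c.wav', 'a.wav', 'b.wav', 'c.wav', 'a.wav']
```

`islice` is only used here to take a finite sample from the infinite stream for display.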
Maximum length of consecutive ones in binary representation

Trying to find the maximum length of a run of ones in a binary representation, including negative numbers. In the following code input_file is a text file where:

the first line is the number of lines with sample integers
every line starting from the second line has just one sample integer

An example file:

4 - number of samples
3 - sample
0 - ...
1 - ...
2 - ...

Result: 2

Task: print the maximum number of consecutive ones found among all sample integers in the input file. Find a solution that takes O(n) time and makes just one pass through all samples. How to modify the solution to work with negative integers of arbitrary (or at least n <= 10000) size?

Update: As I understand it, the binary representation of negative numbers is based on two's complement (https://en.wikipedia.org/wiki/Two's_complement). So, for example:

+3 -> 011
-3 -> 101

How to convert an integer to a binary string representation taking its sign into account in the general case?

def maxConsecutive(input):
    return max(map(len, input.split('0')))

def max_len(input_file):
    max_len = 0
    with open(input_file) as file:
        first_line = file.readline()
        if not first_line:
            return 0
        k = int(first_line.strip())  # number of tests
        for i in range(k):
            line = file.readline().strip()
            n = int(line)
            xs = "{0:b}".format(n)
            n = maxConsecutive(xs)
            if n > max_len:
                max_len = n
    return max_len

print(max_len('input.txt'))

Update 2: This is the second task (B) from the Yandex contest training page: https://contest.yandex.ru/contest/8458/enter/?lang=en. You need to register there to test your solution. So far all solutions given here fail at test 9.

Update 3: Solution in Haskell that passes all Yandex tests:

import Control.Monad (replicateM)

onesCount :: [Char] -> Int
onesCount xs = onesCount' xs 0 0
  where
    onesCount' "" max curr
      | max > curr = max
      | otherwise = curr
    onesCount' (x:xs) max curr
      | x == '1' = onesCount' xs max $ curr + 1
      | curr > max = onesCount' xs curr 0
      | otherwise = onesCount' xs max 0

getUserInputs :: IO [Char]
getUserInputs = do
  n <- read <$> getLine :: IO Int
  replicateM n $ head <$> getLine

main :: IO ()
main = do
  xs <- getUserInputs
  print $ onesCount xs
 | 
For negative numbers, you will either have to decide on a word length (32 bits, 64 bits, ...) or process them as absolute values (i.e. ignoring the sign) or use the minimum number of bits for each value.

An easy way to control the word length is to use format strings. You can obtain the negative bits by adding the value to the power of 2 corresponding to the selected word size. This will give you the appropriate bits for positive and for negative numbers. For example:

n = 123
f"{(1<<32)+n:032b}"[-32:]   # --> '00000000000000000000000001111011'

n = -123
f"{(1<<32)+n:032b}"[-32:]   # --> '11111111111111111111111110000101'

Processing that to count the longest series of consecutive 1s is just a matter of string manipulation.

If you choose to represent negative numbers using a varying word size, you can use one bit more than the minimal representation of the positive number. For example, -3 is represented with two bits ('11') when positive, so it will need a minimum of 3 bits to be represented as a negative number: '101'.

n = -123
wordSize = len(f"{abs(n):b}") + 1
bits = f"{(1<<wordSize)+n:0{wordSize}b}"[-wordSize:]
maxOnes = max(map(len, bits.split("0")))
print(maxOnes)  # 1 ('10000101')
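The answer's two pieces - the fixed-word-size two's-complement formatting and the split-on-zeros run counter - combine into one function. A sketch (the 32-bit word size is an assumption; the contest may require a different one, which changes the results for negative inputs):

```python
def max_ones(n, word_size=32):
    """Longest run of 1-bits in the two's-complement representation of n."""
    # Adding 1 << word_size makes negative n wrap to its two's-complement
    # pattern; [-word_size:] trims the extra leading bit for n >= 0.
    bits = f"{(1 << word_size) + n:0{word_size}b}"[-word_size:]
    return max(map(len, bits.split("0")))

print(max_ones(3))   # '...011'                       -> 2
print(max_ones(-3))  # '111...101': 30 leading ones   -> 30
print(max_ones(-1))  # all 32 bits set                -> 32
```

Note how sensitive the negative results are to the word size: -1 yields `word_size` ones by construction, which is why the question's "arbitrary size" requirement forces a deliberate choice of representation.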
Django: How can I filter a foreign key of a class in models from users.forms?

I created a Patient model in the patient app:

from django.contrib.auth.models import User

# Create your models here.
class Patient(models.Model):
    doctor = models.ForeignKey(User, on_delete=models.CASCADE)
    first_name = models.CharField(max_length=100)
    last_name = models.CharField(max_length=100)
    sex = models.CharField(max_length=20)
    phone = models.IntegerField()
    birth_date = models.DateField()

I want to filter the doctor field, which is a foreign key to User, to just groups='Docteur', so that when I add a patient I can find only users with the 'Docteur' group and not accounts from the other groups.

This is the forms.py in the users app:

from django import forms
from django.contrib.auth.forms import UserCreationForm
import datetime

class RegisterForm(UserCreationForm):
    BIRTH_YEAR_CHOICES = []
    for years in range(1900, 2021):
        BIRTH_YEAR_CHOICES.append(str(years))
    sex_choice = [('1', 'Men'), ('2', 'Women')]
    groups_choice = [('1', 'Docteur'), ('2', 'Docteur remplaçant'), ('3', 'Secrétaire')]
    first_name = forms.CharField(max_length=200)
    last_name = forms.CharField(max_length=200)
    sex = forms.ChoiceField(widget=forms.Select, choices=sex_choice)
    date_of_birth = forms.DateField(widget=forms.SelectDateWidget(years=BIRTH_YEAR_CHOICES))
    email = forms.EmailField()
    phone = forms.IntegerField()
    cin = forms.IntegerField()
    groups = forms.ChoiceField(widget=forms.Select, choices=groups_choice)
    password1 = forms.CharField(widget=forms.PasswordInput(), label='Password')
    password2 = forms.CharField(widget=forms.PasswordInput(), label='Repeat Password')

    class Meta(UserCreationForm.Meta):
        fields = UserCreationForm.Meta.fields + ('username', 'first_name', 'last_name', 'sex', 'date_of_birth', 'email', 'phone', 'cin', 'groups')

So what am I supposed to do to add this condition?
 | 
If I understand correctly, you want a form for creating Patients in which you can select a User for the doctor foreign key, but restrict the choices to users that have selected the ('1', 'Docteur') choice as the groups field. In that case you can use a ModelChoiceField and provide a filtered queryset:

from django.contrib.auth.models import User
from django import forms
from .models import Patient

class AddPatientForm(forms.ModelForm):
    doctor = forms.ModelChoiceField(queryset=User.objects.filter(groups='1'))

    class Meta:
        model = Patient
        fields = ['first_name', 'last_name', ...]
conditions inside conditions pandas

Below is my DF in which I want to create a column based on other columns:

test = pd.DataFrame({"Year_2017": [np.nan, np.nan, np.nan, 4],
                     "Year_2018": [np.nan, np.nan, 3, np.nan],
                     "Year_2019": [np.nan, 2, np.nan, np.nan],
                     "Year_2020": [1, np.nan, np.nan, np.nan]})

   Year_2017  Year_2018  Year_2019  Year_2020
0        NaN        NaN        NaN          1
1        NaN        NaN          2        NaN
2        NaN          3        NaN        NaN
3          4        NaN        NaN        NaN

The aim is to create a new column that takes the value of the column which is notna(). Below is what I tried, without success:

test['Final'] = np.where(test.Year_2017.isna(), test.Year_2018,
                np.where(test.Year_2018.isna(), test.Year_2019,
                np.where(test.Year_2019.isna(), test.Year_2020, test.Year_2019)))

   Year_2017  Year_2018  Year_2019  Year_2020  Final
0        NaN        NaN        NaN          1    NaN
1        NaN        NaN          2        NaN    NaN
2        NaN          3        NaN        NaN      3
3          4        NaN        NaN        NaN    NaN

The expected output:

   Year_2017  Year_2018  Year_2019  Year_2020  Final
0        NaN        NaN        NaN          1      1
1        NaN        NaN          2        NaN      2
2        NaN          3        NaN        NaN      3
3          4        NaN        NaN        NaN      4
 | 
You can forward- or back-fill missing values and then select the last or first column:

test['Final'] = test.ffill(axis=1).iloc[:, -1]
test['Final'] = test.bfill(axis=1).iloc[:, 0]

If there is only one non-missing value per row and the data is numeric, use:

test['Final'] = test.min(1)
test['Final'] = test.max(1)
test['Final'] = test.mean(1)
test['Final'] = test.sum(1, min_count=1)
Call a class with append that targets class variable

For example:

class Foo:
    def __init__(self):
        self.bar = ["baz", "qux", "quux", "quuz", "corge", "grault", "garply", "waldo", "fred", "plugh", "xyzzy", "thud"]

How can I call Foo().append() so that it appends to Foo().bar? Example:

x = Foo()
x.append("asd")
# What I want to happen:
# self.bar is now [..., "asd"]
# What actually happens:
# AttributeError: 'Foo' object has no attribute 'append'

Is this possible?
 | 
I added an append function myself:

# ... in the Foo() class
    def append(self, value):
        return self.bar.append(value)

Edit: A simpler method that would also work:

# ... in Foo().__init__(self)
        self.append = self.bar.append

(Thank you @RaySteam)
How to open another window and take user input in PyQt5 Python

I am trying to create a GUI using PyQt5. I have one main window with a pushbutton. When I click on the pushbutton it should open another window which has an input form to take first name and last name. Below is my code. I am able to open another window, but when I submit the details on the opened window and click on the Submit button, nothing happens. Please note, if I directly call Child_ui in Main_ui then I am able to see the output from the PrintInput function, but the same is not happening when I converted the ui files into classes.

Main_ui.py:

from PyQt5 import QtCore, QtGui, QtWidgets

class Ui_MainWindow(object):
    def setupUi(self, MainWindow):
        MainWindow.setObjectName("MainWindow")
        MainWindow.resize(299, 148)
        self.centralwidget = QtWidgets.QWidget(MainWindow)
        self.centralwidget.setObjectName("centralwidget")
        self.pushButton = QtWidgets.QPushButton(self.centralwidget)
        self.pushButton.setGeometry(QtCore.QRect(90, 70, 75, 23))
        self.pushButton.setObjectName("pushButton")
        MainWindow.setCentralWidget(self.centralwidget)
        self.retranslateUi(MainWindow)
        QtCore.QMetaObject.connectSlotsByName(MainWindow)

    def retranslateUi(self, MainWindow):
        _translate = QtCore.QCoreApplication.translate
        MainWindow.setWindowTitle(_translate("MainWindow", "MainWindow"))
        self.pushButton.setText(_translate("MainWindow", "Register user"))

if __name__ == "__main__":
    import sys
    app = QtWidgets.QApplication(sys.argv)
    MainWindow = QtWidgets.QMainWindow()
    ui = Ui_MainWindow()
    ui.setupUi(MainWindow)
    MainWindow.show()
    sys.exit(app.exec_())

I have converted this Qt Designer file to a class file, Main.py:

from PyQt5 import QtCore, QtGui, QtWidgets
from Main_ui import *
from Child import *

class Main(QtWidgets.QMainWindow, Ui_MainWindow):
    def __init__(self, parent=None):
        super().__init__(parent)
        self.setupUi(self)
        self.pushButton.clicked.connect(self.openChild)

    def openChild(self):
        self.child = QtWidgets.QMainWindow()
        self.ui = userRegistation()
        self.ui.setupUi(self.child)
        self.child.show()

if __name__ == "__main__":
    import sys
    app = QtWidgets.QApplication(sys.argv)
    MainWindow = QtWidgets.QMainWindow()
    ui = Main()
    ui.setupUi(MainWindow)
    MainWindow.show()
    sys.exit(app.exec_())

Below is my Child_ui.py Qt Designer script:

from PyQt5 import QtCore, QtGui, QtWidgets

class Ui_ChildWindow(object):
    def setupUi(self, MainWindow):
        MainWindow.setObjectName("MainWindow")
        MainWindow.resize(284, 141)
        self.centralwidget = QtWidgets.QWidget(MainWindow)
        self.centralwidget.setObjectName("centralwidget")
        self.label = QtWidgets.QLabel(self.centralwidget)
        self.label.setGeometry(QtCore.QRect(20, 30, 71, 16))
        self.label.setObjectName("label")
        self.label_2 = QtWidgets.QLabel(self.centralwidget)
        self.label_2.setGeometry(QtCore.QRect(20, 60, 71, 16))
        self.label_2.setObjectName("label_2")
        self.pushButton = QtWidgets.QPushButton(self.centralwidget)
        self.pushButton.setGeometry(QtCore.QRect(20, 100, 251, 23))
        self.pushButton.setObjectName("pushButton")
        self.lineEdit = QtWidgets.QLineEdit(self.centralwidget)
        self.lineEdit.setGeometry(QtCore.QRect(100, 30, 171, 20))
        self.lineEdit.setObjectName("lineEdit")
        self.lineEdit_2 = QtWidgets.QLineEdit(self.centralwidget)
        self.lineEdit_2.setGeometry(QtCore.QRect(100, 60, 171, 20))
        self.lineEdit_2.setObjectName("lineEdit_2")
        MainWindow.setCentralWidget(self.centralwidget)
        self.retranslateUi(MainWindow)
        QtCore.QMetaObject.connectSlotsByName(MainWindow)

    def retranslateUi(self, MainWindow):
        _translate = QtCore.QCoreApplication.translate
        MainWindow.setWindowTitle(_translate("MainWindow", "MainWindow"))
        self.label.setText(_translate("MainWindow", "First Name"))
        self.label_2.setText(_translate("MainWindow", "Last Name"))
        self.pushButton.setText(_translate("MainWindow", "Submit"))

if __name__ == "__main__":
    import sys
    app = QtWidgets.QApplication(sys.argv)
    MainWindow = QtWidgets.QMainWindow()
    ui = Ui_MainWindow()
    ui.setupUi(MainWindow)
    MainWindow.show()
    sys.exit(app.exec_())

Child.py, the class file of Child_ui.py:

from PyQt5 import QtCore, QtGui, QtWidgets
from Child_ui import *

class userRegistation(QtWidgets.QMainWindow, Ui_ChildWindow):
    def __init__(self, parent=None):
        super().__init__(parent)
        self.setupUi(self)
        self.pushButton.clicked.connect(self.PrintInput)

    def PrintInput(self):
        print(self.lineEdit.text())
        print(self.lineEdit_2.text())
 | 
Try it:

from PyQt5 import QtCore, QtGui, QtWidgets

class Ui_ChildWindow(object):
    def setupUi(self, MainWindow):
        MainWindow.setObjectName("MainWindow")
        MainWindow.resize(284, 141)
        self.centralwidget = QtWidgets.QWidget(MainWindow)
        self.centralwidget.setObjectName("centralwidget")
        self.label = QtWidgets.QLabel(self.centralwidget)
        self.label.setGeometry(QtCore.QRect(20, 30, 71, 16))
        self.label.setObjectName("label")
        self.label_2 = QtWidgets.QLabel(self.centralwidget)
        self.label_2.setGeometry(QtCore.QRect(20, 60, 71, 16))
        self.label_2.setObjectName("label_2")
        self.pushButton = QtWidgets.QPushButton(self.centralwidget)
        self.pushButton.setGeometry(QtCore.QRect(20, 100, 251, 23))
        self.pushButton.setObjectName("pushButton")
        self.lineEdit = QtWidgets.QLineEdit(self.centralwidget)
        self.lineEdit.setGeometry(QtCore.QRect(100, 30, 171, 20))
        self.lineEdit.setObjectName("lineEdit")
        self.lineEdit_2 = QtWidgets.QLineEdit(self.centralwidget)
        self.lineEdit_2.setGeometry(QtCore.QRect(100, 60, 171, 20))
        self.lineEdit_2.setObjectName("lineEdit_2")
        MainWindow.setCentralWidget(self.centralwidget)
        self.retranslateUi(MainWindow)
        QtCore.QMetaObject.connectSlotsByName(MainWindow)

    def retranslateUi(self, MainWindow):
        _translate = QtCore.QCoreApplication.translate
        MainWindow.setWindowTitle(_translate("MainWindow", "MainWindow"))
        self.label.setText(_translate("MainWindow", "First Name"))
        self.label_2.setText(_translate("MainWindow", "Last Name"))
        self.pushButton.setText(_translate("MainWindow", "Submit"))

#from Main_ui import Ui_MainWindow
class Ui_MainWindow(object):
    def setupUi(self, MainWindow):
        MainWindow.setObjectName("MainWindow")
        MainWindow.resize(299,
148) self.centralwidget = QtWidgets.QWidget(MainWindow) self.centralwidget.setObjectName("centralwidget") self.pushButton = QtWidgets.QPushButton(self.centralwidget) self.pushButton.setGeometry(QtCore.QRect(90, 70, 75, 23)) self.pushButton.setObjectName("pushButton") MainWindow.setCentralWidget(self.centralwidget) self.retranslateUi(MainWindow) QtCore.QMetaObject.connectSlotsByName(MainWindow) def retranslateUi(self, MainWindow): _translate = QtCore.QCoreApplication.translate MainWindow.setWindowTitle(_translate("MainWindow", "MainWindow")) self.pushButton.setText(_translate("MainWindow", "Register user")) #from Child import *class UserRegistation(QtWidgets.QMainWindow, Ui_ChildWindow): def __init__(self, parent=None): super().__init__(parent) self.setupUi(self) self.pushButton.clicked.connect(self.PrintInput) def PrintInput(self): print (self.lineEdit.text()) print (self.lineEdit_2.text())class Main(QtWidgets.QMainWindow, Ui_MainWindow): def __init__(self, parent=None): super().__init__(parent) self.setupUi(self) self.pushButton.clicked.connect(self.openChild) def openChild(self):# self.child = QtWidgets.QMainWindow() self.ui = UserRegistation() # <---# self.ui.setupUi(self.ui) # (self.child)# self.child.show() self.ui.show() # <--- if __name__ == "__main__": import sys app = QtWidgets.QApplication(sys.argv)# MainWindow = QtWidgets.QMainWindow() ui = Main() # <---# ui.setupUi(MainWindow) ui.show() # <--- sys.exit(app.exec_()).. yes it is working and i was also able to do it this way. I want to do it using two different file. 
Also I don't want to write logic in Qt designed file because if i do any change in Qt designer then whole script needs to changeUpdateMain.pyfrom PyQt5 import QtCore, QtGui, QtWidgetsfrom Main_ui import Ui_MainWindowfrom Child import UserRegistationclass Main(QtWidgets.QMainWindow, Ui_MainWindow): def __init__(self, parent=None): super().__init__(parent) self.setupUi(self) self.pushButton.clicked.connect(self.openChild) def openChild(self):# self.child = QtWidgets.QMainWindow() self.ui = UserRegistation() # <---# self.ui.setupUi(self.ui) # (self.child)# self.child.show() self.ui.show() # <--- if __name__ == "__main__": import sys app = QtWidgets.QApplication(sys.argv)# MainWindow = QtWidgets.QMainWindow() ui = Main() # <---# ui.setupUi(MainWindow) ui.show() # <--- sys.exit(app.exec_())Main_ui.pyfrom PyQt5 import QtCore, QtGui, QtWidgetsclass Ui_MainWindow(object): def setupUi(self, MainWindow): MainWindow.setObjectName("MainWindow") MainWindow.resize(299, 148) self.centralwidget = QtWidgets.QWidget(MainWindow) self.centralwidget.setObjectName("centralwidget") self.pushButton = QtWidgets.QPushButton(self.centralwidget) self.pushButton.setGeometry(QtCore.QRect(90, 70, 75, 23)) self.pushButton.setObjectName("pushButton") MainWindow.setCentralWidget(self.centralwidget) self.retranslateUi(MainWindow) QtCore.QMetaObject.connectSlotsByName(MainWindow) def retranslateUi(self, MainWindow): _translate = QtCore.QCoreApplication.translate MainWindow.setWindowTitle(_translate("MainWindow", "MainWindow")) self.pushButton.setText(_translate("MainWindow", "Register user"))Child.pyfrom PyQt5 import QtCore, QtGui, QtWidgetsfrom Child_ui import Ui_ChildWindowclass UserRegistation(QtWidgets.QMainWindow, Ui_ChildWindow): def __init__(self, parent=None): super().__init__(parent) self.setupUi(self) self.pushButton.clicked.connect(self.PrintInput) def PrintInput(self): print (self.lineEdit.text()) print (self.lineEdit_2.text())Child_ui.pyfrom PyQt5 import QtCore, QtGui, QtWidgetsclass 
Ui_ChildWindow(object): def setupUi(self, MainWindow): MainWindow.setObjectName("MainWindow") MainWindow.resize(284, 141) self.centralwidget = QtWidgets.QWidget(MainWindow) self.centralwidget.setObjectName("centralwidget") self.label = QtWidgets.QLabel(self.centralwidget) self.label.setGeometry(QtCore.QRect(20, 30, 71, 16)) self.label.setObjectName("label") self.label_2 = QtWidgets.QLabel(self.centralwidget) self.label_2.setGeometry(QtCore.QRect(20, 60, 71, 16)) self.label_2.setObjectName("label_2") self.pushButton = QtWidgets.QPushButton(self.centralwidget) self.pushButton.setGeometry(QtCore.QRect(20, 100, 251, 23)) self.pushButton.setObjectName("pushButton") self.lineEdit = QtWidgets.QLineEdit(self.centralwidget) self.lineEdit.setGeometry(QtCore.QRect(100, 30, 171, 20)) self.lineEdit.setObjectName("lineEdit") self.lineEdit_2 = QtWidgets.QLineEdit(self.centralwidget) self.lineEdit_2.setGeometry(QtCore.QRect(100, 60, 171, 20)) self.lineEdit_2.setObjectName("lineEdit_2") MainWindow.setCentralWidget(self.centralwidget) self.retranslateUi(MainWindow) QtCore.QMetaObject.connectSlotsByName(MainWindow) def retranslateUi(self, MainWindow): _translate = QtCore.QCoreApplication.translate MainWindow.setWindowTitle(_translate("MainWindow", "MainWindow")) self.label.setText(_translate("MainWindow", "First Name")) self.label_2.setText(_translate("MainWindow", "Last Name")) self.pushButton.setText(_translate("MainWindow", "Submit")) |
Shuffling rows in pandas but orderly

Let's say that I have a data frame of three columns: age, gender, and country. I want to randomly shuffle this data but in an ordered fashion according to gender. There are n males and m females, where n could be less than, greater than, or equal to m. The shuffling should happen in such a way that we get the following results for a size of 8 people:

male, female, male, female, male, female, female, female, ... (if there are more females: m > n)
male, female, male, female, male, male, male, male (if there are more males: n > m)
male, female, male, female, male, female, male, female, male, female (if equal males and females: n = m)

df = pd.DataFrame({'Age': [10, 20, 30, 40, 50, 60, 70, 80],
                   'Gender': ["Male", "Male", "Male", "Female",
                              "Female", "Male", "Female", "Female"],
                   'Country': ["US", "UK", "China", "Canada",
                               "US", "UK", "China", "Brazil"]})

|
First add the sequence numbers within each group:

df['Order'] = df.groupby('Gender').cumcount()

Then sort:

df.sort_values('Order')

It gives you:

   Age  Gender Country  Order
0   10    Male      US      0
3   40  Female  Canada      0
1   20    Male      UK      1
4   50  Female      US      1
2   30    Male   China      2
6   70  Female   China      2
5   60    Male      UK      3
7   80  Female  Brazil      3

If you want to shuffle, do that at the very beginning, e.g. df = df.sample(frac=1), see: Shuffle DataFrame rows
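A runnable version of the approach (column names taken from the question) shows the alternating pattern directly. One addition here: kind='stable' is passed to sort_values so the within-rank Male/Female order is guaranteed rather than incidental:

```python
import pandas as pd

df = pd.DataFrame({'Age': [10, 20, 30, 40, 50, 60, 70, 80],
                   'Gender': ["Male", "Male", "Male", "Female",
                              "Female", "Male", "Female", "Female"],
                   'Country': ["US", "UK", "China", "Canada",
                               "US", "UK", "China", "Brazil"]})

# Rank each row within its gender group, then sort by that rank.
# The stable sort keeps the original order among rows with equal rank.
df['Order'] = df.groupby('Gender').cumcount()
interleaved = df.sort_values('Order', kind='stable').reset_index(drop=True)
print(interleaved['Gender'].tolist())
# → ['Male', 'Female', 'Male', 'Female', 'Male', 'Female', 'Male', 'Female']
```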
Why is my Binary Search slower than Linear Search? I was trying to code a Binary Search and Linear Search and I was shocked to see that binary search is slower than linear search, sometimes even by 2 times. Please help me. Here is my code.

Binary Search Code:

def binary_search(array, target, n=0):
    l = len(array) - 1
    i = l // 2
    try:
        ai = array[i]
    except:
        return False
    if ai == target:
        n += i
        return (True, n)
    elif target >= ai:
        array = array[i+1:l+1]
        n += i + 1
        return binary_search(array, target, n)
    elif target <= ai:
        array = array[0:i]
        return binary_search(array, target, n)

Linear Search Code:

def linear_search(array, target):
    for i, num in enumerate(array):
        if num == target:
            return True, i
    return False

Test Case Code:

import random
import time

n = 10000000
num = sorted([random.randint(0, n) for x in range(n)])

start = time.time()
print(linear_search(num, 1000000))
print(f'Linear Search: {time.time() - start}')

start_new = time.time()
print(binary_search(num, 1000000))
print(f'Binary Search: {time.time() - start_new}')

|
As @khelwood said, your code will be much faster with no slicing.

def binary_search_no_slice(array, target, low, high):
    if low > high:
        return False
    mid = (low + high) // 2
    if array[mid] == target:
        return True
    elif array[mid] > target:
        return binary_search_no_slice(array, target, low, mid - 1)
    else:
        return binary_search_no_slice(array, target, mid + 1, high)

Added below to your test code.

start_new2 = time.time()
print(binary_search_no_slice(num, 1000000, 0, len(num) - 1))
print(f'Binary Search no slice: {time.time() - start_new2}')

Here is the result on my machine (macOS Catalina, 2.8GHz Core i7, 8GB RAM):

False
Linear Search: 2.172485113143921
False
Binary Search: 0.56640625
False
Binary Search no slice: 2.8133392333984375e-05
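As a further aside (not part of the original answer), the standard library's bisect module gives the same O(log n) lookup without recursion or slicing:

```python
from bisect import bisect_left

def binary_search_bisect(array, target):
    # bisect_left returns the insertion point for target; a hit means
    # the element already sitting there equals the target.
    i = bisect_left(array, target)
    return i < len(array) and array[i] == target

nums = [1, 3, 5, 7, 9, 11]
print(binary_search_bisect(nums, 7))   # → True
print(binary_search_bisect(nums, 8))   # → False
```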
My text in my clock python is not aligning properly

My text in my turtle module is not aligning properly; it is aligned up and to the left. I want it to align exactly where the turtle is. Can anyone help? I tried setting the xcor and ycor of the turtle up and to the left by 5 units and that did not work. Any help would be greatly appreciated.

Code:

import time
from datetime import datetime, date
import turtle

t = turtle.Pen()

while True:
    turtle.tracer(0, 0)
    hour_hand = float(datetime.today().hour)
    minute_hand = float(datetime.today().minute)
    second_hand = float(datetime.today().second)
    # Draw circle
    t.hideturtle()
    t.circle(150)
    t.left(90)
    t.up()
    t.forward(150)
    t.down()
    # Draw hands
    t.right(float(float(minute_hand) * 6))
    t.forward(100)
    t.backward(100)
    t.left(float(float(minute_hand) * 6))
    t.right(int(float(hour_hand) * 30 + float(minute_hand) / 60 * 30))
    t.forward(50)
    t.backward(50)
    t.left(int(float(hour_hand) * 30 + float(minute_hand) / 60 * 30))
    t.right(second_hand * 6)
    t.forward(125)
    t.backward(125)
    t.left(second_hand * 6)
    # Draw ticks
    for x in range(0, 12):
        t.up()
        t.forward(130)
        t.down()
        t.forward(20)
        t.backward(20)
        t.up()
        t.backward(130)
        t.down()
        t.right(30)
    for y in range(0, 60):
        t.up()
        t.forward(140)
        t.down()
        t.forward(10)
        t.backward(10)
        t.up()
        t.backward(140)
        t.down()
        t.right(6)
    t.up()
    # Draw numbers
    t.right(32.5)
    for z in range(1, 12):
        t.forward(130)
        t.sety(t.ycor() - 5)
        t.setx(t.xcor() - 5)
        t.write(z, align='center', font=('Times New Roman', 16))
        t.sety(t.ycor() + 5)
        t.setx(t.xcor() + 5)
        t.backward(130)
        t.right(30)
    t.forward(130)
    t.write(12, align='center', font=('Times New Roman', 16))
    turtle.update()
    t.hideturtle()
    time.sleep(0.85)
    t.reset()

I don't really want to use tkinter, it is too complicated.

|
A simpler, though potentially less accurate, way to do this completely within turtle:

FONT_SIZE = 16
FONT = ('Times New Roman', FONT_SIZE)

t.color('red')
t.dot(2)  # show target of where we want to center text, for debugging
t.color('black')
t.sety(t.ycor() - FONT_SIZE/2)
t.write(12, align='center', font=FONT)

Now let's address your program as a whole. The primary issues I see are that it flickers and is more complicated than necessary. The first thing to do is to switch turtle into Logo mode, which makes positive angles clockwise and makes 0 degrees at the top (not unlike a clock!).

Then we split the dial drawing onto its own turtle to be drawn once, and we put the hands on their own turtle to be erased and redrawn over and over. We also toss the while True: and sleep(), which have no place in an event-driven world like turtle, and use a turtle timer event instead:

from datetime import datetime
from turtle import Screen, Turtle

OUTER_RADIUS = 150
LARGE_TICK = 20
SMALL_TICK = 10
FONT_SIZE = 16
FONT = ('Times New Roman', FONT_SIZE)

def draw_dial():
    dial = Turtle()
    dial.hideturtle()
    dial.dot()
    dial.up()
    dial.forward(OUTER_RADIUS)
    dial.right(90)
    dial.down()
    dial.circle(-OUTER_RADIUS)
    dial.up()
    dial.left(90)
    dial.backward(OUTER_RADIUS)
    for mark in range(60):
        distance = LARGE_TICK if mark % 5 == 0 else SMALL_TICK
        dial.forward(OUTER_RADIUS)
        dial.down()
        dial.backward(distance)
        dial.up()
        dial.backward(OUTER_RADIUS - distance)
        dial.right(6)
    dial.sety(-FONT_SIZE/2)
    dial.setheading(30)  # starting at 1 o'clock
    for z in range(1, 13):
        dial.forward(OUTER_RADIUS - (LARGE_TICK + FONT_SIZE/2))
        dial.write(z, align='center', font=FONT)
        dial.backward(OUTER_RADIUS - (LARGE_TICK + FONT_SIZE/2))
        dial.right(30)

def tick():
    hour_hand = datetime.today().hour
    minute_hand = datetime.today().minute
    second_hand = datetime.today().second
    hands.reset()
    hands.hideturtle()  # redo as undone by reset()
    hands.right(hour_hand * 30 + minute_hand / 60 * 30)
    hands.forward(1/3 * OUTER_RADIUS)
    hands.backward(1/3 * OUTER_RADIUS)
    hands.left(hour_hand * 30 + minute_hand / 60 * 30)
    hands.right(minute_hand * 6)
    hands.forward(2/3 * OUTER_RADIUS)
    hands.backward(2/3 * OUTER_RADIUS)
    hands.left(minute_hand * 6)
    hands.right(second_hand * 6)
    hands.forward(OUTER_RADIUS - (LARGE_TICK + FONT_SIZE))
    hands.backward(OUTER_RADIUS - (LARGE_TICK + FONT_SIZE))
    hands.left(second_hand * 6)
    screen.update()
    screen.ontimer(tick, 1000)

screen = Screen()
screen.mode('logo')  # make 0 degrees straight up, positive angles clockwise (like a clock!)
screen.tracer(False)

draw_dial()

hands = Turtle()
tick()

screen.mainloop()
Applying functions to DataFrame columns in plots

I'd like to apply functions to columns of a DataFrame when plotting them. I understand that the standard way to plot when using Pandas is the .plot method. How can I do math operations within this method, say for example multiply two columns in the plot? Thanks!

|
Series actually have a plot method as well, so it should work to apply:

(df['col1'] * df['col2']).plot()

Otherwise, if you need to do this more than once it would be the usual thing to make a new column in your dataframe:

df['newcol'] = df['col1'] * df['col2']
Remove PIL from Raspberry Pi

Hi, I am getting an error "IOError: decoder jpeg not available" when trying to use some functions from PIL. What I would like to do is remove PIL, install the jpeg decoder, then re-install PIL, but I'm lost as to how to uninstall PIL. Any help would be greatly appreciated.

|
You can do this to re-install PIL:

pip install -I PIL
Can method operating on array of class object use array methods?

I'm new here, and new in Python. I had some C/C++ in college. I'm doing a course from Udemy and I'm wondering if there is some better idea for the issue of finding an element of an array of class objects based on one value. The course task was to find "the oldest cat". The course solution uses no lists/arrays, but I want to know how to operate on arrays of objects, and whether there is a better option than my static method getoldest, because to me it seems like I'm trying to "cheat" Python.

class Cat:
    def getoldest(Cat=[]):
        age_table = []
        for one in Cat:
            age_table.append(one.age)
        return Cat[age_table.index(max(age_table))]

    def __init__(self, name, age):
        self.name = name
        self.age = age

# 1 Instantiate the Cat object with few cats
kotki3 = []
kotki3.append(Cat("zimka", 5))
kotki3.append(Cat("korek", 9))
kotki3.append(Cat("oczko", 10))
kotki3.append(Cat("kotek", 1))
kotki3.append(Cat("edward", 4))

# 2 Create a function that finds the oldest cat
oldest = Cat.getoldest(kotki3)

# 3 Print out: "The oldest cat is x years old.". x will be the oldest cat age by using the function in #2
print(f'The oldest cat is {oldest.name} and it\'s {oldest.age} years old')

Thanks a lot.

|
I think this example could help you see a better way of doing that:

class Cat:
    def __init__(self, name, age):
        self.name = name
        self.age = age

    def get_details(self):
        return self.name, self.age

cats = [Cat("zimka", 5),
        Cat("oczko", 10),
        Cat("kotek", 1),
        Cat("edward", 4)]

results = []
for cat in cats:
    (name, age) = cat.get_details()
    results.append((name, age))

print(sorted(results, key=lambda x: -x[1]))
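For the specific task of finding the oldest cat, a built-in alternative (a sketch reusing the Cat class from the question) is max with a key function, which avoids building the intermediate age list entirely:

```python
class Cat:
    def __init__(self, name, age):
        self.name = name
        self.age = age

kotki3 = [Cat("zimka", 5), Cat("korek", 9), Cat("oczko", 10),
          Cat("kotek", 1), Cat("edward", 4)]

# max() walks the list once, comparing cats by whatever the key returns.
oldest = max(kotki3, key=lambda cat: cat.age)
print(f"The oldest cat is {oldest.name} and it's {oldest.age} years old")
```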
Can someone answer why this Tkinter doesn't work? I'M SO CONFUSED

Keep in mind that I'm a beginner at programming/python, so if my code is unorganized or badly worded, ignore it, I'm getting better lol. I'm just playing with tkinter and I'm trying to get a login screen that has a checkbox that toggles the visibility of the password. I just don't understand anymore. The "show" argument won't change based on the variable it was assigned and I don't know why.

showPassword = IntVar()
show = None

def apply():
    print(showPassword.get())
    sspass = showPassword.get()
    print(type(sspass))
    if sspass == 1:
        show = None
    elif sspass == 0:
        show = "*"

spB = Checkbutton(root, text="Toggle Show Password", variable=showPassword).grid(row=10, column=1)
applyButton = Button(root, text="Apply", command=apply).grid(column=1, row=5)
Password = entry(root, show=show)

|
I have arranged a snippet of code (that is not perfect for you to follow but...) based on yours that at least works for you to progress. Your code is incomplete, I suppose, and it also has some errors:

You have to configure the show parameter on the widget itself. Changing your show var won't do anything to the widget. You'll have to use the form widget['show'] = somevalue, or the .configure widget method. For both you'll need a widget reference.

If you grid a widget in the same line you create it, grid will return nothing, so you lose the reference. Break that into two steps and keep the reference from the widget's creation (first step).

entry is actually called Entry.

These were the most prominent errors I saw.

from tkinter import Button, Checkbutton, Entry, Tk, IntVar

root = Tk()
showPassword = IntVar()
show = None

def apply():
    print(showPassword.get())
    sspass = showPassword.get()
    print(type(sspass))
    if sspass == 1:
        Password['show'] = ""
    elif sspass == 0:
        Password['show'] = "*"
    Password.update()

spB = Checkbutton(root, text="Show Password", variable=showPassword).grid(row=10, column=1)
applyButton = Button(root, text="Apply", command=apply).grid(column=1, row=5)
Password = Entry(root, show=show)
Password.grid(row=3, column=1)

root.mainloop()
Generating nested lists from XML doc

Working in python, my goal is to parse through an XML doc I made and create a nested list of lists in order to access them later and parse the feeds. The XML doc resembles the following snippet:

<?xml version="1.0"?>
<sources>
    <!--Source List by Institution-->
    <sourceList source="cbc">
        <f>http://rss.cbc.ca/lineup/topstories.xml</f>
    </sourceList>
    <sourceList source="bbc">
        <f>http://feeds.bbci.co.uk/news/rss.xml</f>
        <f>http://feeds.bbci.co.uk/news/world/rss.xml</f>
        <f>http://feeds.bbci.co.uk/news/uk/rss.xml</f>
    </sourceList>
    <sourceList source="reuters">
        <f>http://feeds.reuters.com/reuters/topNews</f>
        <f>http://feeds.reuters.com/news/artsculture</f>
    </sourceList>
</sources>

I would like to have something like nested lists where the innermost list would be the content between the <f></f> tags, and the list above that one would be created with the names of the sources, e.g. source="reuters" would be reuters. Retrieving the info from the XML doc isn't a problem and I'm doing it with elementtree with loops, retrieving with node.get('source') etc. The problem is I'm having trouble generating the lists with the desired names and the different lengths required by the different sources. I have tried appending but am unsure how to append to a list with the names retrieved. Would a dictionary be better? What would be the best practice in this situation? And how might I make this work? If any more info is required just post a comment and I'll be sure to add it.

|
From your description, a dictionary with keys according to the source name and values according to the feed lists might do the trick. Here is one way to construct such a beast:

from lxml import etree
from pprint import pprint

news_sources = {
    source.attrib['source']: [feed.text for feed in source.xpath('./f')]
    for source in etree.parse('x.xml').xpath('/sources/sourceList')
}

pprint(news_sources)

Another sample, without lxml or xpath:

import xml.etree.ElementTree as ET
from pprint import pprint

news_sources = {
    source.attrib['source']: [feed.text for feed in source]
    for source in ET.parse('x.xml').getroot()
}

pprint(news_sources)

Finally, if you are allergic to list comprehensions:

import xml.etree.ElementTree as ET
from pprint import pprint

xml = ET.parse('x.xml')
root = xml.getroot()
news_sources = {}
for sourceList in root:
    sourceListName = sourceList.attrib['source']
    news_sources[sourceListName] = []
    for feed in sourceList:
        feedName = feed.text
        news_sources[sourceListName].append(feedName)

pprint(news_sources)
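To make the second (pure standard library) sample self-checking, here is a version that parses a trimmed copy of the question's XML inline with fromstring instead of reading a file:

```python
import xml.etree.ElementTree as ET

XML = """<?xml version="1.0"?>
<sources>
  <sourceList source="cbc">
    <f>http://rss.cbc.ca/lineup/topstories.xml</f>
  </sourceList>
  <sourceList source="bbc">
    <f>http://feeds.bbci.co.uk/news/rss.xml</f>
    <f>http://feeds.bbci.co.uk/news/world/rss.xml</f>
  </sourceList>
</sources>"""

root = ET.fromstring(XML)
# One dict entry per sourceList; iterating an element yields its children.
news_sources = {
    source.attrib['source']: [feed.text for feed in source]
    for source in root
}
print(news_sources)
```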
How to use minAreaRect() function without getting error?

I have a problem that has ruined my project:

def extract_candidate_rectangles(image, contours):
    rectangles = []
    for i, cnt in enumerate(contours):
        min_rect = cv.minAreaRect(cnt)
        if validate_contour(min_rect):
            x, y, w, h = cv.boundingRect(cnt)
            plate_img = image[y:y+h, x:x+w]
            if is_max_white(plate_img):
                copy = image.copy()
                cv.rectangle(copy, (x, y), (x + w, y + h), (0, 255, 0), 2)
                rectangles.append(plate_img)
                cv.imshow("candidates", copy)
                cv.waitKey(0)
    return rectangles

and the error is:

Using TensorFlow backend.
Traceback (most recent call last):
  File "/home/muhammad/Coding/Python/PlateDetectionCodes/PlateDetection/main.py", line 43, in <module>
    plates = extract_candidate_rectangles(resized.copy(), contours)
  File "/home/muhammad/Coding/Python/PlateDetectionCodes/PlateDetection/extractor.py", line 65, in extract_candidate_rectangles
    min_rect = cv.minAreaRect(cnt)
cv2.error: OpenCV(4.2.0) /io/opencv/modules/imgproc/src/convhull.cpp:137: error: (-215:Assertion failed) total >= 0 && (depth == CV_32F || depth == CV_32S) in function 'convexHull'

I'll be glad if anyone can help!

|
The stack trace shows you the line in which the error occurred:

min_rect = cv.minAreaRect(cnt)

Now, you want to take a look at this line of the error:

cv2.error: OpenCV(4.2.0) /io/opencv/modules/imgproc/src/convhull.cpp:137: error: (-215:Assertion failed) total >= 0 && (depth == CV_32F || depth == CV_32S) in function 'convexHull'

especially this part:

Assertion failed) total >= 0 && (depth == CV_32F || depth == CV_32S) in function 'convexHull'

I assume that cv.minAreaRect internally calls convexHull. OpenCV uses the Assert function to make sure that the parameters passed into a function are in the correct format. Here, either the cnt is empty (total >= 0 is not satisfied) or the format of the points inside the contour array is neither CV_32F (32 bit float) nor CV_32S (32 bit signed integer).
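The usual fix, an assumption here since the source of contours isn't shown, is to make sure each cnt is a non-empty NumPy array of 32-bit integer points in OpenCV's (N, 1, 2) layout before calling cv.minAreaRect. A minimal helper to coerce a point list (the function name is mine):

```python
import numpy as np

def as_cv_contour(points):
    """Coerce a point list into the (N, 1, 2) int32 layout OpenCV expects."""
    cnt = np.asarray(points, dtype=np.int32).reshape(-1, 1, 2)
    if cnt.size == 0:
        raise ValueError("empty contour")
    return cnt

cnt = as_cv_contour([(10, 10), (40, 10), (40, 30), (10, 30)])
print(cnt.dtype, cnt.shape)  # → int32 (4, 1, 2)
```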
Another Python Scope Question - losing information going into if statement

Not sure if I'm missing something obvious, but here's what is happening: I have a python 2.4.3 script that contains several RegEx objects. Below, one of the regex objects is searching for all matches in a string (tMatchList). Even if tMatchList is not null, it is printing an empty set after the 'if p:' step. This behavior occurs even if it prints correctly before the 'if p:' step. I thought it may have been a scope issue, but everything is declared & contained within one function. I'm not quite seeing how the 'if p:' step is not able to see tMatchList. I am able to print tMatchList after the if statement as well.

tMatchList = []
for lines in r:
    linecount += 1
    tMatchList = self._testReplacePDFTag.findall(lines)
    p = self._pdfPathRegex.search(lines)
    print tMatchList  # tMatchList is printing just fine here if it has any elements
    if p:
        print tMatchList  # now it's empty,
                          # even if it printed elements in prior statement
        lines = .....
    else:
        <something else gets done>
    print tMatchList  # now it prints again

Including entire function definition for those who would like to see it:

def FindFilesAndModifyPDFTag(self, inRootDirArg, inRollBackBool):
    for root, dirs, files in os.walk(inRootDirArg):
        for d in dirs:
            if d.startswith('.'):  # excludes directories that start with '.'
                continue
        for file in files:
            if os.path.splitext(file)[1] == self._fileExt:
                # Backup original. just do it
                shutil.copy2(os.path.join(root, file), os.path.join(root, file) + "~")
                r = open(os.path.join(root, file) + "~", "r")
                f = open(os.path.join(root, file), "w")
                linecount = 0
                tMatchList = []
                for lines in r:
                    linecount += 1
                    tMatchList = self._testReplacePDFTag.findall(lines)
                    t = self._testReplacePDFTag.search(lines)
                    # find pdf path(s) in line
                    pMatchList = self._pdfPathRegex.findall(lines)
                    p = self._pdfPathRegex.search(lines)
                    # fix the pdf tracking code
                    print id(tMatchList), "BEFORE"
                    if p:
                        print id(tMatchList), "INSIDE"
                        lines = self.processPDFTagLine(pMatchList, lines, linecount, file, tMatchList)
                    else:
                        lines = self.processCheckMetaTag(lines, linecount, file)
                        # print id(tMatchList), "INSIDE ELSE"
                    print id(tMatchList), "AFTER"
                    f.writelines(lines)
                f.close()
                r.close()
                os.remove(os.path.join(root, file) + "~")

|
The findall may not create a list object. If it is some kind of generator function, then it has a value which is "consumed" by traversing the results once. After consuming the results yielded by this function, there are no more results.

tMatchList = self._testReplacePDFTag.findall(lines)
p = self._pdfPathRegex.search(lines)
print tMatchList  # tMatchList is printing just fine here if it has any elements
if p:
    print tMatchList  # now it's empty,

Try this.

tMatchList = list(self._testReplacePDFTag.findall(lines))
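For what it's worth, the consumed-iterator theory is easy to check directly: re.findall returns a real list, which survives repeated use, while re.finditer hands back a one-shot iterator that behaves exactly as the answer describes:

```python
import re

pat = re.compile(r'\d+')

# findall returns a list: iterating it twice gives the same elements.
hits = pat.findall('a1 b2 c3')

# finditer returns an iterator: the second pass finds nothing.
it = pat.finditer('a1 b2 c3')
first = [m.group() for m in it]
second = [m.group() for m in it]   # already exhausted
print(hits, first, second)  # → ['1', '2', '3'] ['1', '2', '3'] []
```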
Python + GPG (edit-key change password)

I'm looking for a GPG Python library that lets me change the password for my key. I saw python-gnupg but it doesn't have that function :( Anyone who can help me, please? If possible I would also like some examples from the docs.

|
The Python gnupg module already has a method (GPG._handle_io) to invoke the gpg command, pass input to it and parse the output. It may solve the portability issue.

gpg = gnupg.GPG()
result = gnupg.Verify(gpg)
gpg._handle_io(['--command-fd', '0', '--edit-key', keyname],
               StringIO(u'\n'.join(commands)), result)

commands is your command sequence to execute in edit-key mode. Note, some commands behave a little differently when issuing them in --no-tty mode, e.g. the save command asks a y for confirmation. result is an arbitrary gpg class and is needed only to capture the output. See the machine-readable output in result.stderr.
Pickling a class definition

Is there a way to pickle a class definition? What I'd like to do is pickle the definition (which may be created dynamically), and then send it over a TCP connection so that an instance can be created on the other end. I understand that there may be dependencies, like modules and global variables that the class relies on. I'd like to bundle these in the pickling process as well, but I'm not concerned about automatically detecting the dependencies because it's okay if the onus is on the user to specify them.

|
If you use dill, it enables you to treat __main__ as if it were a python module (for the most part). Hence, you can serialize interactively defined classes, and the like. dill also (by default) can transport the class definition as part of the pickle.

>>> class MyTest(object):
...     def foo(self, x):
...         return self.x * x
...     x = 4
... 
>>> f = MyTest()
>>> import dill
>>>
>>> with open('test.pkl', 'wb') as s:
...     dill.dump(f, s)
... 
>>> 

Then shut down the interpreter, and send the file test.pkl over TCP. On your remote machine, now you can get the class instance.

Python 2.7.9 (default, Dec 11 2014, 01:21:43) 
[GCC 4.2.1 Compatible Apple Clang 4.1 ((tags/Apple/clang-421.11.66))] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import dill
>>> with open('test.pkl', 'rb') as s:
...     f = dill.load(s)
... 
>>> f
<__main__.MyTest object at 0x1069348d0>
>>> f.x
4
>>> f.foo(2)
8
>>> 

But how to get the class definition? So this is not exactly what you wanted. The following is, however.

>>> class MyTest2(object):
...     def bar(self, x):
...         return x*x + self.x
...     x = 1
... 
>>> import dill
>>> with open('test2.pkl', 'wb') as s:
...     dill.dump(MyTest2, s)
... 
>>>

Then after sending the file… you can get the class definition.

Python 2.7.9 (default, Dec 11 2014, 01:21:43) 
[GCC 4.2.1 Compatible Apple Clang 4.1 ((tags/Apple/clang-421.11.66))] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import dill
>>> with open('test2.pkl', 'rb') as s:
...     MyTest2 = dill.load(s)
... 
>>> print dill.source.getsource(MyTest2)
class MyTest2(object):
    def bar(self, x):
        return x*x + self.x
    x = 1

>>> f = MyTest2()
>>> f.x
1
>>> f.bar(4)
17

So, within dill, there's dill.source, and that has methods that can detect dependencies of functions and classes, and take them along with the pickle (for the most part).

>>> def foo(x):
...     return x*x
... 
>>> class Bar(object):
...     def zap(self, x):
...         return foo(x) * self.x
...     x = 3
... 
>>> print dill.source.importable(Bar.zap, source=True)
def foo(x):
    return x*x
def zap(self, x):
    return foo(x) * self.x

So that's not "perfect" (or maybe not what's expected)… but it does serialize the code for a dynamically built method and its dependencies. You just don't get the rest of the class -- but the rest of the class is not needed in this case.

If you wanted to get everything, you could just pickle the entire session.

>>> import dill
>>> def foo(x):
...     return x*x
... 
>>> class Blah(object):
...     def bar(self, x):
...         self.x = (lambda x: foo(x) + self.x)(x)
...     x = 2
... 
>>> b = Blah()
>>> b.x
2
>>> b.bar(3)
>>> b.x
11
>>> dill.dump_session('foo.pkl')
>>> 

Then on the remote machine...

Python 2.7.9 (default, Dec 11 2014, 01:21:43) 
[GCC 4.2.1 Compatible Apple Clang 4.1 ((tags/Apple/clang-421.11.66))] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import dill
>>> dill.load_session('foo.pkl')
>>> b.x
11
>>> b.bar(2)
>>> b.x
15
>>> foo(3)
9

Lastly, if you want the transport to be "done" for you transparently, you could use pathos.pp or ppft, which provide the ability to ship objects to a second python server (on a remote machine) or python process. They use dill under the hood, and just pass the code across the wire.

>>> class More(object):
...     def squared(self, x):
...         return x*x
... 
>>> import pathos
>>> 
>>> p = pathos.pp.ParallelPythonPool(servers=('localhost,1234',))
>>> 
>>> m = More()
>>> p.map(m.squared, range(5))
[0, 1, 4, 9, 16]

The servers argument is optional, and here is just connecting to the local machine on port 1234… but if you use the remote machine name and port instead (or as well), you'll fire off to the remote machine -- "effortlessly".

Get dill, pathos, and ppft here: https://github.com/uqfoundation
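For comparison, the standard library alone can ship a plain function's compiled code with marshal (a minimal sketch of the idea; it handles no closures, globals or class machinery, which is exactly the bookkeeping dill automates):

```python
import marshal
import types

def square(x):
    return x * x

# Sender side: serialize just the compiled code object (safe to send as bytes).
payload = marshal.dumps(square.__code__)

# Receiver side: rebuild a callable from the code object.
rebuilt = types.FunctionType(marshal.loads(payload), globals(), 'square')
print(rebuilt(7))  # → 49
```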
Tail file into message queue I launch a process on a linux machine via python's subprocess (specifically on AWS EC2) which generates a number of files. I need to "tail -f" these files and send each of the resulting jsonified outputs to their respective AWS SQS queues. How would I go about such a task?EditAs suggested by this answer, asyncproc, and PEP3145, I can do this with the following:from asyncproc import Processimport Queueimport osimport time# Substitute AWS SQS for Queuesta_queue = Queue.Queue()msg_queue = Queue.Queue()running_procs = {'status':(Process(['/usr/bin/tail', '--retry', '-f','test.sta']),sta_queue),'message':(Process(['/usr/bin/tail', '--retry', '-f', 'test.msg' ]),msg_queue)}def handle_proc(p,q): latest = p.read() if latest: # If nothing new, latest will be an empty string q.put(latest) retcode = p.wait(flags=os.WNOHANG) return retcodewhile len(running_procs): proc_names = running_procs.keys() for proc_name in proc_names: proc, q = running_procs[proc_name] retcode = handle_proc(proc, q) if retcode is not None: # Process finished. del running_procs[proc_name] time.sleep(1.0)print("Status queue")while not sta_queue.empty(): print(sta_queue.get())print("Message queue")while not msg_queue.empty(): print(msg_queue.get())This should be sufficient, I think, unless others can provide a better answer.More EditsI'm overthinking the problem. Although the above works nicely, I think the simplest solution is:-check for the existence of the files-if the files exist, copy them to a bucket on AWS S3 and send a message through AWS SQS that files have been copied. Repeat every 60 seconds-consumer app polls SQS and eventually receives message that files have been copied-consumer app downloads files from S3 and replaces the previous contents with the latest contents. Repeat until job completesAlthough the whole issue of asynchronous IO in subprocess is still an issue. 
| You can use the subprocess.Popen class to run tail and read its output.

from subprocess import Popen, PIPE

try:
    process = Popen(['tail', '-f', filename], stdout=PIPE)
except (OSError, ValueError):
    pass  # TODO: handle errors
output = process.stdout.read()

The subprocess.check_output function provides this functionality in a one-liner. It is new in Python version 2.7. (Note that check_output waits for the process to exit, so it is only suitable without -f.)

from subprocess import check_output, CalledProcessError

try:
    output = check_output(['tail', filename])
except CalledProcessError:
    pass  # TODO: handle errors

For non-blocking I/O, see this question.
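One way to get the non-blocking behavior (a sketch of the idea, not from the answer above): have a background thread drain the child's stdout into a queue.Queue, so the main loop polls the queue instead of blocking on read(). The short-lived child here is a stand-in for tail -f:

```python
import queue
import subprocess
import sys
import threading

def pump_lines(cmd, out_queue):
    # Blocks only in this worker thread: each line the child writes
    # is pushed onto the queue as soon as it appears.
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        out_queue.put(line.rstrip("\n"))
    proc.wait()

q = queue.Queue()
# A short-lived child process standing in for `tail -f somefile`.
cmd = [sys.executable, "-c", "print('line-1'); print('line-2')"]
worker = threading.Thread(target=pump_lines, args=(cmd, q), daemon=True)
worker.start()
worker.join()  # in real use the main loop would poll q instead of joining

collected = []
while not q.empty():
    collected.append(q.get())
print(collected)  # ['line-1', 'line-2']
```

With a real tail -f the worker never exits; the main loop would call q.get(timeout=...) periodically and forward each line to SQS.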
Algorithm in Python To Solve This Problem

I have a list of lists such as: [[foo,1],[baz,1],[foo,0],[bar,3],[foo,1],[bar,2],[baz,2]]. I want to get all the different items in the inner lists and find the total number of them. I mean the result should be like: [[foo,2],[bar,5],[baz,3]]. How can I do this task? Thanks in advance.

| Create a dictionary

D = {}
for item in items:
    left, right = item
    D[left] = D.get(left, 0) + right

There may be faster ways to do this though.

As suggested in the comments by Joce, Gnibbler and Blair you could do this to get a list again.

# To get a list of lists
pairs = map(list, D.items())
# To get a list of tuples
pairs = D.items()
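One of the "faster ways" alluded to is collections.Counter, which is exactly this accumulate-by-key pattern; a sketch:

```python
from collections import Counter

data = [["foo", 1], ["baz", 1], ["foo", 0], ["bar", 3],
        ["foo", 1], ["bar", 2], ["baz", 2]]

totals = Counter()
for key, count in data:
    totals[key] += count

# Back to a list of lists, in first-seen order (dicts preserve insertion order).
result = [[key, total] for key, total in totals.items()]
print(result)  # [['foo', 2], ['baz', 3], ['bar', 5]]
```

Counter also gives most_common() for free if the totals need sorting by count.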
Should I return an empty dict instead of None?

I have a method that currently returns None or a dict.

result, error = o.apply('grammar')

The caller currently has to check for the existence of two keys to decide what kind of object was returned.

if 'imperial' in result:
    # yay
elif 'west' in result:
    # yahoo
else:
    # something wrong?

Because result can be None, I'm thinking of returning an empty dict instead, so the caller does not need to check for that. What do you think?

For comparison, in the re module, the result of calling match can result in None.

p = re.compile('\w+')
m = p.match('whatever')

But in this case, m is an object instance. In my case, I am returning a dict which should either be empty or have some entries.

| Yes I think returning an empty dict (or where applicable an empty list) is preferable to returning None as this avoids an additional check in the client code.

EDIT:

Adding some code sample to elaborate:

def result_none(choice):
    mydict = {}
    if choice == 'a':
        mydict['x'] = 100
        mydict['y'] = 1000
        return mydict
    else:
        return None

def result_dict(choice):
    mydict = {}
    if choice == 'a':
        mydict['x'] = 100
        mydict['y'] = 1000
    return mydict

test_dict = result_dict('b')
if test_dict.get('x'):
    print 'Got x'
else:
    print 'No x'

test_none = result_none('b')
if test_none.get('x'):
    print 'Got x'
else:
    print 'No x'

In the above code the check test_none.get('x') throws an AttributeError as the result_none method can possibly return a None. To avoid that I have to add an additional check and might rewrite that line as:

if test_none is not None and test_none.get('x')

which is not at all needed if the method were returning an empty dict. As the example shows the check test_dict.get('x') works fine as the method result_dict returns an empty dict.
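To make the payoff concrete for the original caller (a made-up stand-in for o.apply; the names and lookup table here are illustrative, not the real method): with an empty dict as the "nothing matched" value, the membership branches need no None guard at all:

```python
def apply(grammar):
    # Hypothetical stand-in for o.apply(): always returns a dict,
    # empty when nothing matched -- never None.
    known = {"grammar": {"imperial": 1}}
    return known.get(grammar, {})

result = apply("nonsense")   # empty dict, not None
if "imperial" in result:
    outcome = "yay"
elif "west" in result:
    outcome = "yahoo"
else:
    outcome = "something wrong?"
print(outcome)  # something wrong?
```

The `in` checks and `.get()` calls are safe on an empty dict, which is exactly the extra check the None return forces on every caller.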
Slight difference in objective function of linear programming makes program extremely slow

I am using Google's OR-Tools with the SCIP (Solving Constraint Integer Programs) solver to solve a mixed integer programming problem in Python. The problem is a variant of the standard scheduling problem, with constraints limiting each worker to at most one shift per day and requiring every shift to be covered by exactly one worker. The problem is modeled as follows:

Where n represents the worker, d the day and i the specific shift in a given day.

The problem comes when I change the objective function that I want to minimize from the first form to the second. In the first case an optimal solution is found within 5 seconds. In the second case, after 20 minutes running, the optimal solution was still not reached. Any ideas as to why this happens? How can I change the objective function without impacting performance this much?

Here is a sample of the values taken by the variables tier and acceptance used in the objective function.

| You should ask the SCIP team.

Have you tried using the SAT backend with 8 threads?
Tkinter: Changing value of a Textbox after calculation to avoid duplicates

from tkinter import *

class HHRG:
    def __init__(self, root):
        self.root = root
        self.RnReg = 50
        self.RnResump = 80
        self.RnCert = 80
        self.RnDC = 70
        self.RnSOC = 90
        self.LvnReg = 40
        self.LvnOut = 35
        self.Hha = 25
        self.Pt = 75
        self.Ot = 75
        self.St = 75

        self.HHRGValue = IntVar()
        self.RnRegValue = IntVar()
        self.RnResumpValue = IntVar()
        self.RnCertValue = IntVar()
        self.RnDCValue = IntVar()
        self.RnSOCValue = IntVar()
        self.LvnRegValue = IntVar()
        self.LvnOutValue = IntVar()
        self.HhaValue = IntVar()
        self.PtValue = IntVar()
        self.OtValue = IntVar()
        self.StValue = IntVar()

        ###LABELS###
        self.HHRGLabel = Label(self.root, text="HHRG")
        self.RnRegLabel = Label(self.root, text="Regular Rn Visits")
        self.RnResumpLabel = Label(self.root, text="Rn Resumption Visits")
        self.RnCertLabel = Label(self.root, text="Rn recertification Visits")
        self.RnDCLabel = Label(self.root, text="Rn D/C Visits")
        self.RnSOCLabel = Label(self.root, text="Rn SOC Visits")
        self.LvnRegLabel = Label(self.root, text="Regular Lvn Visits")
        self.LvnOutLabel = Label(self.root, text="Lvn Outlier Visits")
        self.HhaLabel = Label(self.root, text="HHA visits")
        self.PtLabel = Label(self.root, text="Pt Visits")
        self.OtLabel = Label(self.root, text="Ot Visits")
        self.StLabel = Label(self.root, text="St Visits")
        self.TotalLabel = Label(self.root, text="Net Total")

        ###ENTRY BOXES###
        self.HHRGEntry = Entry(self.root, textvariable=self.HHRGValue)
        self.RnRegEntry = Entry(self.root, textvariable=self.RnRegValue)
        self.RnResumpEntry = Entry(self.root, textvariable=self.RnResumpValue)
        self.RnCertEntry = Entry(self.root, textvariable=self.RnCertValue)
        self.RnDCEntry = Entry(self.root, textvariable=self.RnDCValue)
        self.RnSOCEntry = Entry(self.root, textvariable=self.RnSOCValue)
        self.LvnRegEntry = Entry(self.root, textvariable=self.LvnRegValue)
        self.LvnOutEntry = Entry(self.root, textvariable=self.LvnOutValue)
        self.HhaEntry = Entry(self.root, textvariable=self.HhaValue)
        self.PtEntry = Entry(self.root, textvariable=self.PtValue)
        self.OtEntry = Entry(self.root, textvariable=self.OtValue)
        self.StEntry = Entry(self.root, textvariable=self.StValue)
        self.TotalEntry = Text(root, height=2, width=10)

        self.clearButton = Button(root, text="Clear")
        self.clearButton.bind("<Button-1>", self.clear)
        self.calculatebutton = Button(root, text="Calculate", width=10)
        self.calculatebutton.bind("<Button-1>", self.clear)
        self.calculatebutton.bind("<Button-1>", self.calculate)

        ####LABEL GRIDS###
        self.HHRGLabel.grid(row=0, column=0)
        self.RnRegLabel.grid(row=1, column=0)
        self.RnResumpLabel.grid(row=2, column=0)
        self.RnCertLabel.grid(row=3, column=0)
        self.RnDCLabel.grid(row=4, column=0)
        self.RnSOCLabel.grid(row=5, column=0)
        self.LvnRegLabel.grid(row=6, column=0)
        self.LvnOutLabel.grid(row=7, column=0)
        self.HhaLabel.grid(row=8, column=0)
        self.PtLabel.grid(row=9, column=0)
        self.OtLabel.grid(row=10, column=0)
        self.StLabel.grid(row=11, column=0)
        self.TotalLabel.grid(row=12, column=0)

        ###ENTRY GRIDS###
        self.HHRGEntry.grid(row=0, column=1)
        self.RnRegEntry.grid(row=1, column=1)
        self.RnResumpEntry.grid(row=2, column=1)
        self.RnCertEntry.grid(row=3, column=1)
        self.RnDCEntry.grid(row=4, column=1)
        self.RnSOCEntry.grid(row=5, column=1)
        self.LvnRegEntry.grid(row=6, column=1)
        self.LvnOutEntry.grid(row=7, column=1)
        self.HhaEntry.grid(row=8, column=1)
        self.PtEntry.grid(row=9, column=1)
        self.OtEntry.grid(row=10, column=1)
        self.StEntry.grid(row=11, column=1)
        self.TotalEntry.grid(row=12, column=1)
        self.calculatebutton.grid(columnspan=2, pady=10)
        self.clearButton.grid(row=13, column=1)

    def calculate(self, event):
        values = [(self.RnRegValue.get() * self.RnReg),
                  (self.RnResumpValue.get() * self.RnResump),
                  (self.RnCertValue.get() * self.RnCert),
                  (self.RnDCValue.get() * self.RnDC),
                  (self.RnSOCValue.get() * self.RnSOC),
                  (self.LvnRegValue.get() * self.LvnReg),
                  (self.LvnOutValue.get() * self.LvnOut),
                  (self.HhaValue.get() * self.Hha),
                  (self.PtValue.get() * self.Pt),
                  (self.OtValue.get() * self.Ot),
                  (self.StValue.get() * self.St)]
        self.total = 0
        for i in values:
            self.total += i
        result = self.HHRGValue.get() - self.total
        self.TotalEntry.insert(END, result)

    def clear(self, event):
        self.TotalEntry.delete("1.0", END)

root = Tk()
a = HHRG(root)
root.mainloop()

So I've got this modified calculator of mine, and the problem is that every time you press Calculate it appends the result: click it twice and the output duplicates. I tried binding self.calculatebutton to my clear() method, but that did not prevent the duplication. My question is: how can I calculate the desired output while wiping the previous output at the same time, so that pressing the Calculate button multiple times outputs only one total, not several as in the picture above?

| This code is where the problem lies:

self.calculatebutton = Button(root, text="Calculate", width=10)
self.calculatebutton.bind("<Button-1>", self.clear)
self.calculatebutton.bind("<Button-1>", self.calculate)

When you call bind, it will replace any previous binding of the same event to the same widget. So, the binding to self.clear goes away when you add the binding to self.calculate. While there are ways to bind multiple functions to an event, usually that is completely unnecessary and leads to difficult-to-maintain code. The simple solution is for your calculate function to call the clear function before adding a new result:

def calculate(self, event):
    ...
    result = self.HHRGValue.get() - self.total
    self.clear(event=None)
    self.TotalEntry.insert(END, result)

Note: if this is the only time you'll call clear, you can remove the event parameter from the function definition, and remove it from the call. On a related note: generally speaking you should not use bind on buttons.
The button has built-in bindings that normally work better than your custom binding (they handle keyboard traversal and button highlighting, for example). The button widget has a command attribute which you normally use instead of a binding. In your case it would look like this:

self.calculatebutton = Button(..., command=self.calculate)

When you do that, your calculate method no longer needs the event parameter, so you'll need to remove it. If you want to use the calculate function both from a command and from a binding, you can make the event optional:

def calculate(self, event=None)
Numpy: Conserving sum in average over two arrays of integers

I have two arrays of positive integers A and B that each sum to 10:

A = [1,4,5]
B = [5,5,0]

I want to write code (that will work for a general array size and sum) to calculate an array C of positive integers that also sums to 10 and is as close to the element-wise average as possible:

Pure average C = (A + B) / 2: C = [3, 4.5, 2.5]
Round C = np.ceil((A + B) / 2).astype(int): C = [3, 5, 3] (sum = 11, incorrect!)
Fix the sum C = SOME CODE: C = [3, 4, 3] (sum = 10, correct!)

Any value can be adjusted to make the sum correct, as long as all elements remain positive integers. What should C = SOME CODE be?

Minimum reproducible example:

A = np.array([1,4,5])
B = np.array([5,5,0])
C = np.ceil((A + B) / 2).astype(int)
print(np.sum(C))
11

This should give 10.

| You can ceil/floor every other non-int element. This works for any size and any sum value (in fact you do not need to know the sum at all; it is enough that A and B have the same sum):

C = (A + B) / 2
C_c = np.ceil(C)
C_c[np.flatnonzero(C != C.astype(int))[::2]] -= 1
print(C_c.sum())
# 10.0
print(C_c.astype(int))
# [3 4 3]
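An equivalent way to see the same fix (my own sketch, assuming A and B are integer arrays with equal sums): floor the average everywhere, then hand the missing units, one per pair of fractional entries, back to the first rounded-down positions:

```python
import numpy as np

def integer_average(A, B):
    # Assumes A and B are integer arrays with equal sums.
    A, B = np.asarray(A), np.asarray(B)
    base = (A + B) // 2                      # floor of the average
    deficit = (A.sum() + B.sum()) // 2 - base.sum()
    odd = np.flatnonzero((A + B) % 2)        # positions that were rounded down
    base[odd[:deficit]] += 1                 # restore the lost units
    return base

C = integer_average([1, 4, 5], [5, 5, 0])
print(C.tolist(), C.sum())  # [3, 5, 2] 10
```

The result differs from the ceil/floor version in which entries absorb the adjustment, but both stay within half a unit of the true average and both preserve the sum exactly.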
Camera Behavior In Pyglet

I would like to know how I can make the (2D) camera in pyglet always follow the player, keeping it in the middle of the screen. I would also like to know how to make a linear zoom with the mouse wheel that keeps the player centered. To be clear: if anyone knows Factorio, I would like the camera to behave the same way. Around the web I found only examples of how to do it by moving the mouse, etc. Unfortunately, I have not found anything that covers what I need.

This is the script I'm currently using:

Main class (I do not report the whole script, only the parts related to the camera):

def on_resize(self, width, height):
    self.camera.init_gl(width, height)

def on_mouse_scroll(self, x, y, dx, dy):
    self.camera.scroll(dy)

def _world(self):
    self.camera = camera(self)
    self.player = player(self, 0, 0)
    self.push_handlers(self.player.keyboard)

Camera script:

class camera(object):
    zoom_in_factor = 1.2
    zoom_out_factor = 1 / zoom_in_factor

    def __init__(self, game):
        self.game = game
        self.left = 0
        self.right = self.game.width
        self.bottom = 0
        self.top = self.game.height
        self.zoom_level = 1
        self.zoomed_width = self.game.width
        self.zoomed_height = self.game.height

    def init_gl(self, width, height):
        self.width = width
        self.height = height
        glViewport(0, 0, self.width, self.height)

    def draw(self):
        glPushMatrix()
        glOrtho(self.left, self.right, self.bottom, self.top, 1, -1)
        glTranslatef(-self.game.player.sprite.x + self.width / 2,
                     -self.game.player.sprite.y + self.height / 2, 0)
        self.game.clear()
        if self.game.runGame:
            for sprite in self.game.mapDraw_3:
                self.game.mapDraw_3[sprite].draw()
        glPopMatrix()
        print(self.game.player.sprite.x, self.game.player.sprite.y)

    def scroll(self, dy):
        f = self.zoom_in_factor if dy > 0 else self.zoom_out_factor if dy < 0 else 1
        if .1 < self.zoom_level * f < 2:
            self.zoom_level *= f
            vx = self.game.player.sprite.x / self.width
            vy = self.game.player.sprite.y / self.height
            vx_in_world = self.left + vx * self.zoomed_width
            vy_in_world = self.bottom + vy * self.zoomed_height
            self.zoomed_width *= f
            self.zoomed_height *= f
            self.left = vx_in_world - vx * self.zoomed_width
            self.right = vx_in_world + (1 - vx) * self.zoomed_width
            self.bottom = vy_in_world - vy * self.zoomed_height
            self.top = vy_in_world + (1 - vy) * self.zoomed_height

This is what I get:

This is what I would like to get (using Factorio as an example):

The script I am using at the moment I took from here and modified for my needs:

How to pan and zoom properly in 2D?

However, as you can see, the script I am using is based on something created by someone else, and I hate using something this way because it does not belong to me. So I'm using it just to experiment while creating my own camera class. That's why I asked for advice.

Other examples I looked at:

https://www.programcreek.com/python/example/91285/pyglet.gl.glOrtho
https://groups.google.com/forum/#!topic/pyglet-users/g4dfSGPNCOk
https://www.tartley.com/2d-graphics-with-pyglet-and-opengl

There are other places I looked at, but I do not remember the links. And yes, I did look at pyglet's guide, but (perhaps through my own fault) I did not find anything that helped me understand how to do it.

| Well, I'm unsure of your first problem but I can help with the zoom.

def on_mouse_scroll(self, x, y, scroll_x, scroll_y):
    zoom = 1.00
    if scroll_y > 0:
        zoom = 1.03
    elif scroll_y < 0:
        zoom = 0.97
    glOrtho(-zoom, zoom, -zoom, zoom, -1, 1)
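For the first part, keeping the player centered, no extra camera state is actually needed: the translation is pure arithmetic on the viewport size, player position and zoom. A library-free sketch (the function name is mine; the result is what you would feed into glTranslatef):

```python
def follow_translation(player_x, player_y, viewport_w, viewport_h, zoom):
    # Offset that puts the (zoomed) player exactly at the center
    # of the window, for any zoom level.
    tx = viewport_w / 2 - player_x * zoom
    ty = viewport_h / 2 - player_y * zoom
    return tx, ty

# Player at world (100, 50) in an 800x600 window at 2x zoom:
print(follow_translation(100, 50, 800, 600, 2.0))  # (200.0, 200.0)
```

Because the player term is scaled by the zoom before subtracting, the same formula keeps the player centered while zooming, which is the Factorio-style behavior asked about.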