questions (stringlengths 50–48.9k) | answers (stringlengths 0–58.3k) |
---|---|
Sympy: How to calculate the t value for a point on a 3D Line Using sympy how would one go about to solve for the t value for a specific point on a line or line segment?p1 = sympy.Point3D(0,0,0)p2 = sympy.Point3D(1,1,1)p3 = sympy.Point3D(0.5,0.5,0.5)lineSegment = sympy.Segment(p1,p2)eqnV = lineSegment.arbitrary_point()if lineSegment.contains(p3): t = SolveForT(lineSegment, p3) | You can get a list of coordinate equations and pass them to sympy's solve function:In [112]: solve((lineSegment.arbitrary_point() - p3).coordinates)Out[112]: {t: 1/2} |
Python Dataframe add new row based on column name How do I add a new row to my dataframe, with values that are based on the column names?For exampleDog = 'happy'Cat = 'sad'df = pd.DataFrame(columns=['Dog', 'Cat'])I want to add a new line to the dataframe where is pulls in the variable of the column heading Dog Cat0 happy sad | You can try append:df.append({'Dog':Dog,'Cat':Cat}, ignore_index=True)Output: Dog Cat0 happy sad |
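A note on the answer above: DataFrame.append was deprecated in pandas 1.4 and removed in pandas 2.0, so on current versions the same row is added with pd.concat. A minimal sketch using the question's variables:

```python
import pandas as pd

Dog = 'happy'
Cat = 'sad'

df = pd.DataFrame(columns=['Dog', 'Cat'])

# Build a one-row frame from the variables and concatenate it onto df;
# ignore_index=True renumbers the result 0..n-1, like append did.
new_row = pd.DataFrame([{'Dog': Dog, 'Cat': Cat}])
df = pd.concat([df, new_row], ignore_index=True)
print(df)
#      Dog  Cat
# 0  happy  sad
```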
Write Data to BigQuery table using load_table_from_dataframe method ERROR - 'str' object has no attribute 'to_api_repr' I am trying to read the data from Cloud storage and write the data into BigQuery table. Used Pandas library for reading the data from GCS and to write the data used client.load_table_from_dataframe method. I am executing this code as python operator in Google cloud composer. Got below error when i execute the code.[2020-06-23 17:09:36,119] {taskinstance.py:1059} ERROR - 'str' object has no attribute 'to_api_repr'@-@{"workflow": "DataTransformationSample1", "task-id": "dag_init", "execution-date": "2020-06-23T17:03:42.202219+00:00"}Traceback (most recent call last): File "/usr/local/lib/airflow/airflow/models/taskinstance.py", line 930, in _run_raw_task result = task_copy.execute(context=context) File "/usr/local/lib/airflow/airflow/operators/python_operator.py", line 113, in execute return_value = self.execute_callable() File "/usr/local/lib/airflow/airflow/operators/python_operator.py", line 118, in execute_callable return self.python_callable(*self.op_args, **self.op_kwargs) File "/home/airflow/gcs/dags/DataTransformationSample1.py", line 225, in dag_initialization destination=table_id, job_config=job_config) File "/opt/python3.6/lib/python3.6/site-packages/google/cloud/bigquery/client.py", line 968, in load_table_from_dataframe job_config=job_config, File "/opt/python3.6/lib/python3.6/site-packages/google/cloud/bigquery/client.py", line 887, in load_table_from_file job_resource = load_job._build_resource() File "/opt/python3.6/lib/python3.6/site-packages/google/cloud/bigquery/job.py", line 1379, in _build_resource self.destination.to_api_repr())AttributeError: 'str' object has no attribute 'to_api_repr'[2020-06-23 17:09:36,122] {base_task_runner.py:115} INFO - Job 202544: Subtask dag_init [2020-06-23 17:09:36,119] {taskinstance.py:1059} ERROR - 'str' object has no attribute 'to_api_repr'@-@{"workflow": "DataTransformationSample1", "task-id": "dag_init", "execution-date": "2020-06-23T17:03:42.202219+00:00"}Below code i used,client = bigquery.Client()table_id = 'project.dataset.table'job_config = bigquery.LoadJobConfig()job_config.schema = [ bigquery.SchemaField(name="Code", field_type="STRING", mode="NULLABLE"), bigquery.SchemaField(name="Value", field_type="STRING", mode="NULLABLE") ]job_config.create_disposition = "CREATE_IF_NEEDED"job_config.write_disposition = "WRITE_TRUNCATE"load_result = client.load_table_from_dataframe(dataframe=concatenated_df, destination=table_id, job_config=job_config)load_result.result()Someone please help to solve this case. | Basically Panda consider string as object, but BigQuery doesn't know it. We need to explicitly convert the object to string using Panda in order to make it load the data to BQ table.df[columnname] = df[columnname].astype(str) |
Python tracemalloc's "compare_to" function delivers always "StatisticDiff" objects with len(traceback)=1 Using Python's 3.5 tracemalloc module as followstracemalloc.start(25) # (I also tried PYTHONTRACEMALLOC=25)snapshot_start = tracemalloc.take_snapshot()... # my code is runningsnapshot_stop = tracemalloc.take_snapshot()diff = snapshot_stop.compare_to(snapshot_start, 'lineno')tracemalloc.stop()leads in a list of StatisticDiff instances where each instance has a traceback with only 1 (the most recent) frame.Any hints how to get there the full stack trace for each StatisticDiff instance?Thank you!Michael | You need to use 'traceback' instead of 'lineno' when calling compare_to() to get more than one line.BTW, I also answered a similar question here with a little more detail. |
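To make the accepted fix concrete, here is a small self-contained sketch (the list allocation in the middle stands in for the real workload):

```python
import tracemalloc

tracemalloc.start(25)                        # keep up to 25 frames per allocation
snapshot_start = tracemalloc.take_snapshot()

data = [bytearray(1000) for _ in range(1000)]  # stand-in for the real code

snapshot_stop = tracemalloc.take_snapshot()
tracemalloc.stop()

# Group by full traceback instead of a single line
diff = snapshot_stop.compare_to(snapshot_start, 'traceback')
for stat in diff[:3]:
    print(stat)                              # summary line for this allocation site
    for line in stat.traceback.format():     # all captured frames, not just one
        print(line)
```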
Filter objects manyTomany with users manyTomany I want to filter the model Foo by its manyTomany field bar with users bar.Modelsclass User(models.Model): bar = models.ManyToManyField("Bar", verbose_name=_("Bar"), blank=True)class Foo(models.Model): bar = models.ManyToManyField("Bar", verbose_name=_("Bar"), blank=True)class Bar(models.Model): fubar = models.CharField()with thisuser = User.objects.get(id=user_id)I want to gett all Foo's that have the same Bar's that the User has.I would like this to work:bar = Foo.objects.filter(foo=user.foo)but it doesn't work. | foos = Foo.objects.filter(bar__in=user.bar.all()) |
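One caveat to the answer above (general Django ORM behaviour, not from the original thread): spanning a many-to-many relation with __in can yield duplicate Foo rows when a Foo shares several Bars with the user, so it is common to de-duplicate:

```python
foos = Foo.objects.filter(bar__in=user.bar.all()).distinct()
```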
Not able to align specific patterns side by side in this grid So I tried different methods to do this like:a = ("+ " + "- "*4)b = ("|\n"*4)print(a + a + "\n" + b + a + a + "\n" + b + a + a)But the basic problem I am facing is how to print the vertical pattern on the sixth column i.e in the middle as well as, at the last | I got it actually and thought of posting the solution I might help others:we ought to make use of the do_twice and do_four function:def draw_grid_art(): a = "+ - - - - + - - - - +" def do_twice(f): f() f() def do_four(f): do_twice(f) do_twice(f) def vertical(): b = "| | |" print(b) print(a) do_four(vertical) print(a) do_four(vertical) print(a)I was able to come up with only this.As always anyone is free to shorten/organize my code as I think it is long |
How to remove rows that contains a repeating number pandas python I have a dataframe like:'a' 'b' 'c' 'd' 0 1 2 3 3 3 4 5 9 8 8 8and I want to remove rows that have a number that repeats more than once. So the answer is :'a' 'b' 'c' 'd' 0 1 2 3Thanks. | Use DataFrame.nunique with compare length of columns ad filter by boolean indexing:df = df[df.nunique(axis=1) == len(df.columns)]print (df) 'a' 'b' 'c' 'd'0 0 1 2 3 |
Slowly updating global window side inputs In Python I try to get the updating sideinputs working in python as stated in the Documentation (there is only a java example provided) [https://beam.apache.org/documentation/patterns/side-inputs/]I already found this thread here on Stackoverflow: [https://stackoverflow.com/questions/63812879/how-to-implement-the-slowly-updating-side-inputs-in-python] and tried the code and solution from there...But when I try: pipeline | "generate sequence" >> PeriodicImpulse(0,90,30) | beam.WindowInto( GlobalWindows(), trigger=Repeatedly(AfterProcessingTime(1*30)), accumulation_mode=AccumulationMode.DISCARDING ) | beam.Map(lambda _: print("fired")) )There are 3 events fired as expected... the only thing is that those 3 events are fired instant and not every 30 seconds as I would be expecting.To get it working I'm currently don't use it as a sideinput but just run it in pytest via:def test_updating_sideinput(): pipeline = beam.Pipeline() res = ( pipeline | "generate sequence" >> PeriodicImpulse(0, 90, 30) | beam.Map(lambda _: print("fired")) | beam.WindowInto( GlobalWindows(), trigger=Repeatedly(AfterProcessingTime(1*30)), accumulation_mode=AccumulationMode.DISCARDING ) ) pipeline.run()What would be the correct way to have a sideInput Updated triggered periodically using python?thanks and regards | The reason why all of the elements from PeriodicImpulse are emitted at the same time is because of the parameters you use when creating the transform. The documentation of the transform states that the arguments start_timestamp and stop_timestamp are timestamps, and (despite the documentation not stating that), interval is then interpreted as a number of seconds.Since the implementation of PeriodicImpulse is based on Splittable DoFn with OffsetRange, every time a single output is processed, the remainder of all (future) outputs is deferred to later time, which is specified by the current timestamp + interval. This causes all the deferred timestamps generated to be in the past (lower than Timestamp.now()), therefore triggering processing of the remainder immediately. You can see the implementation in https://beam.apache.org/releases/pydoc/2.32.0/_modules/apache_beam/transforms/periodicsequence.html#ImpulseSeqGenDoFn.Using Timestamps instead of absolute numbers in PeriodicImpulse should solve your problem.start = Timestamp.now()stop = now + Duration(seconds=60)...pipeline| "generate sequence" >> PeriodicImpulse(start, stop, 30)But keep in mind once you are using a runner, Timestamp.now() is called when constructing the pipeline, and by the time the pipeline is executed, may already be well in the past, possibly triggering several minutes worth of data immediately.Also note that PeriodicImpulse already supports windowing into FixedWindows based on the inverval param. |
StaleElementReferenceException while looping over list I'm trying to make a webscraper for this website. The idea is that code iterates over all institutions by selecting the institution's name (3B-Wonen at first instance), closes the pop-up screen, clicks the download button, and does it all again for all items in the list.However, after the first loop it throws the StaleElementReferenceException when selecting the second institution in the loop. From what I read about it this implies that the elements defined in the first loop are no longer accessible. I've read multiple posts but I've no idea to overcome this particular case.Can anybody point me in the right directon? Btw, I'm using Pythons selenium and I'm quite a beginner in programming so I'm still learning. If you could point me in a general direction that would help me a lot! The code I have is te following:#importing and setting up parameters for geckodriver/firefox...# webpagedriver.get("https://opendata-dashboard.cijfersoverwonen.nl/dashboard/opendata-dashboard/beleidswaarde")WebDriverWait(driver, 30)# Get rid of cookie notification# driver.find_element_by_class_name("cc-compliance").click()# Store position of download buttonelement_to_select = driver.find_element_by_id("utilsmenu")action = ActionChains(driver)WebDriverWait(driver, 30)# Drop down menudriver.find_element_by_id("baseGeo").click()# Add institutions to arraycorporaties=[]corporaties = driver.find_elements_by_xpath("//button[@role='option']")# Iterationfor i in corporaties: i.click() # select institution driver.find_element_by_class_name("close-button").click() # close pop-up screen action.move_to_element(element_to_select).perform() # select download button driver.find_element_by_id("utilsmenu").click() # click download button driver.find_element_by_id("utils-export-spreadsheet").click() # pick export to excel driver.find_element_by_id("baseGeo").click() # select drop down menu for next iteration | This code worked for me. But I am not doing driver.find_element_by_id("utils-export-spreadsheet").click()from selenium import webdriverimport timefrom selenium.webdriver.common.action_chains import ActionChainsdriver = webdriver.Chrome(executable_path="path")driver.maximize_window()driver.implicitly_wait(10)driver.get("https://opendata-dashboard.cijfersoverwonen.nl/dashboard/opendata-dashboard/beleidswaarde")act = ActionChains(driver)driver.find_element_by_xpath("//a[text()='Sluiten en niet meer tonen']").click() # Close pop-up# Get the count of optionsdriver.find_element_by_id("baseGeoContent").click()cor_len = len(driver.find_elements_by_xpath("//button[contains(@class,'sel-listitem')]"))print(cor_len)driver.find_element_by_class_name("close-button").click()# No need to start from 0, since 1st option is already selected. Start from downloading and then move to next items.for i in range(1,cor_len-288): # Tried only for 5 items act.move_to_element(driver.find_element_by_id("utilsmenu")).click().perform() #Code to click on downloading option print("Downloaded:{}".format(driver.find_element_by_id("baseGeoContent").get_attribute("innerText"))) driver.find_element_by_id("baseGeoContent").click() time.sleep(3) # Takes time to load. coritems = driver.find_elements_by_xpath("//button[contains(@class,'sel-listitem')]") coritems[i].click() driver.find_element_by_class_name("close-button").click()driver.quit()Output:295Downloaded:3B-WonenDownloaded:AcantusDownloaded:AccoladeDownloaded:ActiumDownloaded:Almelose Woningstichting Beter WonenDownloaded:Alwel |
SQLite|Pandas|Python: Select rows that contain values in any column? I have an SQLite table with 13500 rows with the following SQL schema:PRAGMA foreign_keys = false;-- ------------------------------ Table structure for numbers-- ----------------------------DROP TABLE IF EXISTS "numbers";CREATE TABLE "numbers" ( "RowId" INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT, "Date" TEXT NOT NULL, "Hour" TEXT NOT NULL, "N1" INTEGER NOT NULL, "N2" INTEGER NOT NULL, "N3" INTEGER NOT NULL, "N4" INTEGER NOT NULL, "N5" INTEGER NOT NULL, "N6" INTEGER NOT NULL, "N7" INTEGER NOT NULL, "N8" INTEGER NOT NULL, "N9" INTEGER NOT NULL, "N10" INTEGER NOT NULL, "N11" INTEGER NOT NULL, "N12" INTEGER NOT NULL, "N13" INTEGER NOT NULL, "N14" INTEGER NOT NULL, "N15" INTEGER NOT NULL, "N16" INTEGER NOT NULL, "N17" INTEGER NOT NULL, "N18" INTEGER NOT NULL, "N19" INTEGER NOT NULL, "N20" INTEGER NOT NULL, UNIQUE ("RowId" ASC));PRAGMA foreign_keys = true;Each row contain non repeating numbers from 1 to 80, sorted in ascending order.I want to select from this table only the rows that contain numbers only these numbers: 10,20,30,40,50,60,70,80 but not more than 3 of them (I mean EXACTLY 3 and not more and not less).I did the following:First step:e.g. for selecting only the rows that contains ANY of these numbers on the column N1 I did this command:SELECT * FROM numbers WHERE N1 IN (10,20,30,40,50,60,70,80);Of course that this is giving to me rows with just one of these numbers but also rows with let's say 5 or even all these numbers which I do not want, I want exactly 3 of these numbers on ANY column.Second step:For selecting rows which contain any of these numbers on columns N1 and N2 we just run this command:SELECT * FROM numbers WHERE N1 IN (10,20,30,40,50,60,70,80) AND N2 IN (10,20,30,40,50,60,70,80);But this will give also columns with 2 or more (even all numbers) which I do not want because this is not exactly 3 of this numbers on any of this columns.Third step:Retrieving rows that contain any of these numbers on N1, N2 and N3 with this command:SELECT * FROM numbers WHERE N1 IN (10,20,30,40,50,60,70,80) AND N2 IN (10,20,30,40,50,60,70,80) AND N3 IN (10,20,30,40,50,60,70,80);This is almost good because of giving the rows with any 3 of these numbers but also gives rows that could have more than 3 of these numbers like 4, 5 or even all numbers which I don't need.Also, one idea is to modify this command by adding AND NOT N4 IN (10,20,30,40,50,60,70,80) AND NOT N5 IN (10,20,30,40,50,60,70,80) and so on until reach the N20.On the other hand, any of these numbers (10,20,30,40,50,60,70,80) could be on N1, N2,N3 but also in any given column like N1, N12, N18 and any other combination of columns which means I should create any possible combination of 3 columns taken from 20 columns in order to get what I need.Is there any smarter way to do this?Thank you in advance!P.S.I have already read this which is somehow something I need butI want to avoid because of to many combinations (and also it is inthe Java language section), this which is doing what I need (Ithink) but it is in Python and pandas not SQLite syntax and I thinkthis one is the same but also in Python and pandas, also, keepin mind that the last two do not look for any possible combinationbut just for a give combination to look for in any given columnwhich partially what I need.Also, If you can do it in Python and pandas it is very good toobecause I could use that too (so, I'm adding tags for these in orderto be seen as well maybe there is someone which is looking for 
that solution too, if you don't mind). | Here's an SQLite query that will give you the results you want. It creates a CTE of all the values of interest, then joins your numbers table to the CTE if any of the columns contain the value from the CTE, selecting only RowId values from numbers where the number of rows in the join is exactly 3 (using GROUP BY and HAVING) and then finally selecting all the data from the rows which match that criteria:WITH CTE(n) AS ( VALUES (10),(20),(30),(40),(50),(60),(70),(80)),rowids AS ( SELECT RowId FROM numbers JOIN CTE ON n IN (n1, n2, n3, n4, n5, n6, n7, n8, n9, n10, n11, n12, n13, n14, n15, n16, n17, n18, n19, n20) GROUP BY RowId HAVING COUNT(*) = 3)SELECT n.*FROM numbers nJOIN rowids r ON n.RowId = r.RowIdI've made a small demo on db-fiddle. |
How to configure logging with colour, format etc in separate setting file in python? I am trying to call python script from bash script.(Note: I am using python version 3.7)Following is the Directory structure (so_test is a directory)so_test shell_script_to_call_py.sh main_file.py log_settings.pyfiles are as below,shell_script_to_call_py.sh#!/bin/bashecho "...Enable Debug..."python $(cd $(dirname ${BASH_SOURCE[0]}) && pwd)/main_file.py "input_1" --debugecho "...No Debug..."python $(cd $(dirname ${BASH_SOURCE[0]}) && pwd)/main_file.py "input_2"main_file.pyimport argparseimport importlibimportlib.import_module("log_settings.py")from so_test import log_settingsdef func1(): log.info("INFO Test") log.debug("DEBUG Test") log.warning("WARN Test")def func2(): log.info("INFO Test") log.debug("DEBUG Test") log.warning("WARN Test")def main(): parser = argparse.ArgumentParser() parser.add_argument("input", type=str, help="input argument 1 is missing") parser.add_argument("--debug", help="to print debug logs", action="store_true") args = parser.parse_args() log_settings.log_conf(args.debug) log.info("INFO Test") log.debug("DEBUG Test") log.warning("WARN Test") func1() func2()if __name__ == "__main__": main()log_settings.pyimport loggingfrom colorlog import ColoredFormatterdef log_config(is_debug_level): log_format = "%(log_color)s %(levelname)s %(message)s" if is_debug_level: logging.root.setLevel(logging.DEBUG) else: logging.root.setLevel(logging.INFO) stream = logging.StreamHandler() stream.setFormatter(ColoredFormatter(log_format)) global log log = logging.getLogger('pythonConfig') log.addHandler(stream)Following are 2 issues I am facing. (as a newbie to python)I am not able to import the log_settings.py properly in main_file.pyI want to access use log.debug, log.info etc. in main_file (and other .py file) across different functions, for which the settings (format, color etc.) is declared in log_settings.py file. | I got the code working with the following changes:Declare 'log' variable outside the function in log_settings.py, so that it can be imported by other programs.Rename the function named log_config to log_conf, which is referred in the main program.In the main program, update the import statements to import 'log' and 'log_conf' from log_settingsWorking code:1. log_settings.pyimport loggingfrom colorlog import ColoredFormatterglobal loglog = logging.getLogger('pythonConfig')def log_conf(is_debug_level): log_format = "%(log_color)s %(levelname)s %(message)s" if is_debug_level: logging.root.setLevel(logging.DEBUG) else: logging.root.setLevel(logging.INFO) stream = logging.StreamHandler() stream.setFormatter(ColoredFormatter(log_format)) log.addHandler(stream)2. 
main_file.pyimport argparseimport importlibfrom log_settings import log_conf, logdef func1(): log.info("INFO Test") log.debug("DEBUG Test") log.warning("WARN Test")def func2(): log.info("INFO Test") log.debug("DEBUG Test") log.warning("WARN Test")def main(): parser = argparse.ArgumentParser() parser.add_argument("input", type=str, help="input argument 1 is missing") parser.add_argument("--debug", help="to print debug logs", action="store_true") args = parser.parse_args() log_conf(args.debug) log.info("INFO Test") log.debug("DEBUG Test") log.warning("WARN Test") func1() func2()if __name__ == "__main__": main()Testing$ python3 main_file.py "input_1" --debugINFO INFO Test (Shows in Green)DEBUG DEBUG Test (Shows in White)WARNING WARN Test (Shows in Yellow)INFO INFO TestDEBUG DEBUG TestWARNING WARN TestINFO INFO TestDEBUG DEBUG TestWARNING WARN Test |
How to slice Data frame with Pandas, and operate on each slice I'm new into pandas and python in general and I want to know your opinion about the best way to create a new data frame using slices of an "original" data frame.input (original df): date author_id time_spent 0 2020-01-02 1 2.51 2020-01-02 2 0.52 2020-01-02 1 1.53 2020-01-01 1 24 2020-01-01 1 15 2020-01-01 3 3.56 2020-01-01 2 1.57 2020-01-01 2 1.5expected output (new df): date author_id total_time_spent 0 2020-01-01 1 31 2020-01-01 2 32 2020-01-01 3 3.53 2020-01-02 1 44 2020-01-02 2 0.5I want:Slice the original df by day.Operate each day to get the total_time_spentCreate new df with these dataWhat you think which is the most efficient way?Thanks for share your answer! | What we will dodf = df.groupby(['date','author_id'])['time_spent'].sum().reset_index() date author_id time_spent0 2020-01-01 1 3.01 2020-01-01 2 3.02 2020-01-01 3 3.53 2020-01-02 1 4.04 2020-01-02 2 0.5 |
Convert one-hot encoded data-frame columns into one column In the pandas data frame, the one-hot encoded vectors are present as columns, i.e:Rows A B C D E0 0 0 0 1 01 0 0 1 0 02 0 1 0 0 03 0 0 0 1 04 1 0 0 0 04 0 0 0 0 1How to convert these columns into one data frame column by label encoding them in python? i.e:Rows A 0 4 1 3 2 2 3 4 4 1 5 5 Also need suggestion on this that some rows have multiple 1s, how to handle those rows because we can have only one category at a time. | Try with argmax#df=df.set_index('Rows')df['New']=df.values.argmax(1)+1dfOut[231]: A B C D E NewRows 0 0 0 0 1 0 41 0 0 1 0 0 32 0 1 0 0 0 23 0 0 0 1 0 44 1 0 0 0 0 14 0 0 0 0 1 5 |
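A related note on the question's second point (not part of the original answer): if the column label itself is wanted instead of a 1-based position, idxmax gives it directly, and rows carrying more than one 1 can be located up front. A small sketch assuming the one-hot columns are named A to E as in the question:

```python
cols = ['A', 'B', 'C', 'D', 'E']

# Rows where more than one category is set; idxmax/argmax would silently pick the first
multi_hot = df[df[cols].sum(axis=1) > 1]

# Column label of the (first) 1 in each row
df['New_label'] = df[cols].idxmax(axis=1)
```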
Odoo 11 - Action Server Here is my code for a custom action declaration: <record id="scheduler_synchronization_update_school_and_grade" model="ir.cron"> <field name="name">Action automatisee ...</field> <field name="user_id" ref="base.user_root"/> <field name="interval_number">1</field> <field name="interval_type">days</field> <field name="numbercall">-1</field> <field name="doall" eval="False"/> <field name="model_id" ref="model_ecole_partner_school"/> <field name="code">model.run_grade_establishment_smartbambi()</field> <field name="active" eval="False"/> </record>Here is the start of my function which is called:Here is the error message when I update my custom module on the server:odoo.tools.convert.ParseError: "ERREUR: une valeur NULL viole la contrainte NOT NULL de la colonne « use_relational_model »DETAIL: La ligne en échec contient (516559, 1, null, 1, 2020-01-02 14:56:39.02145, null, 2020-01-02 14:56:39.02145, ir.actions.server, Action automatisee ..., null, action, model.run_grade_establishment_smartbambi(), 5, null, null, null, null, null, null, null, null, object_write, null, null, 397, null, null, null, null, null, null, null, null, null, null, null, null, f, null, null, ir_cron, null)" while parsing /opt/odoo11/addons-odoo/Odoo/ecole/data/actions.xml:33, near<record id="scheduler_synchronization_update_school_and_grade" model="ir.cron"> <field name="name">Action automatisee ...</field> <field name="user_id" ref="base.user_root"/> <field name="interval_number">1</field> <field name="interval_type">days</field> <field name="numbercall">-1</field> <field name="doall" eval="False"/> <field name="model_id" ref="model_ecole_partner_school"/> <field name="code">model.run_grade_establishment_smartbambi()</field> <field name="active" eval="False"/> </record>Do you have an idea of the problem ? I can't find anything on the internetthank you so muchEDIT : I have solved my problem. With PGAdmin 4, the use_relational_model field was required. I have deactivate the required. Thanks | You missed the state field in the cron definition. This is the "Action To Do" field. Try following: <record id="scheduler_synchronization_update_school_and_grade" model="ir.cron"> <field name="name">Action automatisee ...</field> <field name="user_id" ref="base.user_root"/> <field name="interval_number">1</field> <field name="interval_type">days</field> <field name="numbercall">-1</field> <field name="doall" eval="False"/> <field name="model_id" ref="model_ecole_partner_school"/> <field name="state">code</field> <field name="code">model.run_grade_establishment_smartbambi()</field> <field name="active" eval="False"/> </record> |
Execute python script in Qlik Sense load script I am trying to run python script inside my load script in Qlik Sense app.I know that I need to put OverrideScriptSecurity=1 in Settings.iniI putExecute py lib://python/getSolution.py 100 'bla'; // 100 and 'bla' are parametersand I get no error in qlik sense, but script is not executed (I think) because inside the script I havef = open("file.xml", "wb")f.write(xml)f.closeand file is not saved.If I run script from terminal, then script is properly executed.What could go wrong?By the way, my full path to python interpreter isC:\Users\Marko Z\AppData\Local\Programs\Python\Python37-32\python.exeEDIT :Even if I add thisSet vPythonPath = "C:\Users\Marko Z\AppData\Local\Programs\Python\Python37-32\python.exe";Set vPythonFile = "C:\Users\Marko Z\Documents\Qlik\Sense\....\getSolution.py";Execute $(vPythonPath) $(vPythonFile);I get the same behaviour. No error, but not working,...I even see that if I change path (incorrect path) it give me an error, but incorrect file it doesn't give me an error.... (but I am sure it is the right file path...)My python code isxml = "Marko"xml = xml.encode('utf-8')f = open("C:\\Users\\Marko Z\\Test.xml", "wb")f.write(xml)f.close | I figure out what was wrong. For all others that would have similar problems:Problem is in space in path. If I move my script in c:\Windows\getSolution.py it work. I also need to change the python path to c:\Windows\py.exeso end script looks like:Execute c:\Windows\py.exe c:\Windows\getSolution.py 100 'bla';But I still need to figure how to work with space in path... |
openAI Gym NameError in Google Colaboratory I've just installed openAI gym on Google Colab, but when I try to run 'CartPole-v0' environment as explained here.Code:import gymenv = gym.make('CartPole-v0')for i_episode in range(20): observation = env.reset() for t in range(100): env.render() print(observation) action = env.action_space.sample() observation, reward, done, info = env.step(action) if done: print("Episode finished after {} timesteps".format(t+1)) breakI get this:WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype.---------------------------------------------------------------------------NameError Traceback (most recent call last)<ipython-input-19-a81cbed23ce4> in <module>() 4 observation = env.reset() 5 for t in range(100):----> 6 env.render() 7 print(observation) 8 action = env.action_space.sample()/content/gym/gym/core.py in render(self, mode) 282 283 def render(self, mode='human'):--> 284 return self.env.render(mode) 285 286 def close(self):/content/gym/gym/envs/classic_control/cartpole.py in render(self, mode) 104 105 if self.viewer is None:--> 106 from gym.envs.classic_control import rendering 107 self.viewer = rendering.Viewer(screen_width, screen_height) 108 l,r,t,b = -cartwidth/2, cartwidth/2, cartheight/2, -cartheight/2/content/gym/gym/envs/classic_control/rendering.py in <module>() 21 22 try:---> 23 from pyglet.gl import * 24 except ImportError as e: 25 reraise(prefix="Error occured while running `from pyglet.gl import *`",suffix="HINT: make sure you have OpenGL install. On Ubuntu, you can run 'apt-get install python-opengl'. If you're running on a server, you may need a virtual frame buffer; something like this should work: 'xvfb-run -s \"-screen 0 1400x900x24\" python <your_script.py>'")/usr/local/lib/python3.6/dist-packages/pyglet/gl/__init__.py in <module>() 225 else: 226 from .carbon import CarbonConfig as Config--> 227 del base 228 229 # XXX removeNameError: name 'base' is not definedThe problem is the same in this question about NameError in openAI gymNothing is being rendered. I don't know how I could use this in google colab: 'xvfb-run -s \"-screen 0 1400x900x24\" python <your_script.py>'" | One way to render gym environment in google colab is to use pyvirtualdisplay and store rgb frame array while running environment. Environment frames can be animated using animation feature of matplotlib and HTML function used for Ipython display module.You can find the implementation here. Make sure you install required libraries which you can find in the first cell of the colab. In case the first link for google colab doesn't work you can see this one. |
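For completeness, a rough sketch of the approach the answer describes (written from its description, not copied from the linked notebook; it assumes xvfb and pyvirtualdisplay are installed in the Colab runtime and uses the old gym API from the question):

```python
# Assumes: pip install gym pyvirtualdisplay, plus xvfb and a GL stack on the VM
# (e.g. apt-get install -y xvfb python-opengl)
from pyvirtualdisplay import Display
import matplotlib.pyplot as plt
from matplotlib import animation
from IPython.display import HTML
import gym

display = Display(visible=0, size=(1400, 900))   # virtual frame buffer for rendering
display.start()

env = gym.make('CartPole-v0')
frames = []
observation = env.reset()
for t in range(100):
    frames.append(env.render(mode='rgb_array'))  # store the frame instead of opening a window
    observation, reward, done, info = env.step(env.action_space.sample())
    if done:
        break
env.close()

# Animate the stored frames inline in the notebook
fig = plt.figure()
im = plt.imshow(frames[0])
plt.axis('off')
anim = animation.FuncAnimation(fig, lambda i: im.set_data(frames[i]), frames=len(frames))
HTML(anim.to_jshtml())
```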
I need example on how to mention using PTB I need further elaboration on this thread How can I mention Telegram users without a username?Can someone give me an example of how to use the markdown style? I am also using PTB libraryThe code I want modifiedcontext.bot.send_message(chat_id=-1111111111, text="hi") | Alright, so I finally found the answer. The example below should work.context.bot.send_message(chat_id=update.effective_chat.id, parse_mode = ParseMode.MARKDOWN_V2, text = "[inline mention of a user](tg://user?id=123456789)") |
Increasing distances among nodes in NetworkX I'm trying to create a network of approximately 6500 nodes for retweets. The resulting layout looks bad, with very small distances between nodes. I've tried spring_layout to increase the distances but it didn't change anything.nx.draw(G, with_labels=False, node_color=color_map_n, node_size=5,layout=nx.spring_layout(G,k=100000)) | I swapped "layout=..." with "pos=..." and it worked. nx.draw expects the node positions under the pos keyword, so the spring_layout result was never being used. |
Selenium not sending keys to input field I'm trying to scrape this url https://www.veikkaus.fi/fi/tulokset#!/tarkennettu-hakuThere's three main parts to the scrape:Select the correct game type from "Valitse peli" For this I want to choose "Eurojackpot"Set the date range from variables. In the full version I'll be generating dates based on the 12 week range limit. For now I've just chose two dates that are close enough. This date range needs to be inputted into the two input fields below "Näytä tulokset aikaväliltä"I need to click the show results button. (Labeled "Näytä Tulokset")I believe my code does parts 1 and 3 correct, but I'm having trouble with part 2. For some reason the scraper isn't sending the dates to the elements. I've tried click, clear and then send_keys. I've also tried to first send key_down(Keys.CONTROL) then send_keys("a") and then send_keys(date), but none of these are working. The site always goes back to the date it loads up with (current date).Here's my full code:# -*- coding: utf-8 -*-"""Created on Sat Jun 12 12:05:40 2021@author: Samu Kaarlela"""from selenium import webdriverfrom selenium.webdriver import ActionChainsfrom selenium.webdriver.support.ui import WebDriverWaitfrom selenium.webdriver.support import expected_conditions as ECfrom selenium.webdriver.common.keys import Keysfrom selenium.webdriver.common.by import Byfrom selenium.webdriver.support.select import Selectfrom selenium.webdriver.chrome.options import Optionsurl = "https://www.veikkaus.fi/fi/tulokset#!/tarkennettu-haku"wd = r"C:\Users\Oppilas\Desktop\EJ prediction\scraper\chromedriver"chrome_options = Options()chrome_options.add_argument("--headless")webdriver = webdriver.Chrome( wd, options=chrome_options)from_date = "05.05.2021"to_date = "11.06.2021" with webdriver as driver: wait = WebDriverWait(driver,10) driver.get(url) game_type_element = driver.find_element_by_css_selector( "#choose-game" ) slc = Select(game_type_element) slc.select_by_visible_text("Eurojackpot") from_date_element = WebDriverWait( driver, 20).until( EC.element_to_be_clickable( ( By.CSS_SELECTOR, "#date-range div:nth-child(1) input" ) ) ) ActionChains(driver). \ click(from_date_element). \ key_down(Keys.CONTROL). \ send_keys("a"). \ send_keys(from_date). \ perform() print(from_date_element.get_attribute("value")) driver.save_screenshot("./image.png") driver.close() EDIT:I just realized that when selected the input field goes from #date-range #from-date to #date-range #from-date #focus-visible | For me, simply doing the following works:driver.find_element_by_css_selector('.date-input.from-date').send_keys(from_date)ActionChains(driver).send_keys(Keys.RETURN).perform()driver.find_element_by_css_selector('.date-input.to-date').send_keys(to_date)ActionChains(driver).send_keys(Keys.RETURN).perform() |
Python Multiple Datetimes To One I have two types of datetime format in a Dataframe.Date2019-01-06 00:00:00 (%Y-%d-%m %H:%M:%S')07/17/2018 ('%m/%d/%Y')I want to convert into one specific datetime format. Below is the script that I am usingd1 = pd.to_datetime(df1['DATE'], format='%m/%d/%Y',errors='coerce')d2 = pd.to_datetime(df1['DATE'], format='%Y-%d-%m %H:%M:%S',errors='coerce')df1['Date'] = d2.fillna(d1)While doing this, the code is clubbing some of the other datetime into another. For ex: 7th January 2018 is coming as July 1st 2018. This problem is associated with this format (%Y-%d-%m %H:%M:%S') after running the above script. | If there are mixed format also in format 2019-01-06 00:00:00 - it means it should be January or June, only ways is prioritize one format - e.g. here first months and add first format d2 and then d3 in chained fillna:d1 = pd.to_datetime(df1['DATE'], format='%m/%d/%Y',errors='coerce')d2 = pd.to_datetime(df1['DATE'], format='%Y-%m-%d %H:%M:%S',errors='coerce')d3 = pd.to_datetime(df1['DATE'], format='%Y-%d-%m %H:%M:%S',errors='coerce')df1['Date'] = d2.fillna(d1).fillna(d3)If need prioritize first days:df1['Date'] = d3.fillna(d1).fillna(d2)In sample data is possible check difference:print (df1) DATE0 2019-01-06 00:00:001 2019-01-15 00:00:002 2019-20-10 00:00:003 07/17/2018d1 = pd.to_datetime(df1['DATE'], format='%m/%d/%Y',errors='coerce')d2 = pd.to_datetime(df1['DATE'], format='%Y-%m-%d %H:%M:%S',errors='coerce')d3 = pd.to_datetime(df1['DATE'], format='%Y-%d-%m %H:%M:%S',errors='coerce')df1['Date1'] = d2.fillna(d1).fillna(d3)df1['Date2'] = d3.fillna(d1).fillna(d2)print (df1) DATE Date1 Date20 2019-01-06 00:00:00 2019-01-06 2019-06-01 <- difference1 2019-01-15 00:00:00 2019-01-15 2019-01-152 2019-20-10 00:00:00 2019-10-20 2019-10-203 07/17/2018 2018-07-17 2018-07-17 |
pandas change all rows with Type X if 1 Type X Result = 1 Here is a simple pandas df:>>> df Type Var1 Result0 A 1 NaN1 A 2 NaN2 A 3 NaN3 B 4 NaN4 B 5 NaN5 B 6 NaN6 C 1 NaN7 C 2 NaN8 C 3 NaN9 D 4 NaN10 D 5 NaN11 D 6 NaNThe object of the exercise is: if column Var1 = 3, set Result = 1 for all that Type.This finds the rows with 3 in Var1 and sets Result to 1,df['Result'] = df['Var1'].apply(lambda x: 1 if x == 3 else 0)but I can't figure out how to then catch all the same Type and make them 1. In this case it should be all the As and all the Cs. Doesn't have to be a one-liner.Any tips please? | Create boolean mask and for True/False to 1/0 mapp convert values to integers:df['Result'] = df['Type'].isin(df.loc[df['Var1'].eq(3), 'Type']).astype(int)#alternativedf['Result'] = np.where(df['Type'].isin(df.loc[df['Var1'].eq(3), 'Type']), 1, 0)print (df) Type Var1 Result0 A 1 11 A 2 12 A 3 13 B 4 04 B 5 05 B 6 06 C 1 17 C 2 18 C 3 19 D 4 010 D 5 011 D 6 0Details:Get all Type values if match condition:print (df.loc[df['Var1'].eq(3), 'Type'])2 A8 CName: Type, dtype: objectTest original column Type by filtered types:print (df['Type'].isin(df.loc[df['Var1'].eq(3), 'Type']))0 True1 True2 True3 False4 False5 False6 True7 True8 True9 False10 False11 FalseName: Type, dtype: boolOr use GroupBy.transform with any for test if match at least one value, thi solution is slowier if larger df:df['Result'] = df['Var1'].eq(3).groupby(df['Type']).transform('any').astype(int) |
Referencing folder without absolute path I am writing a code that will be implemented alongside my company's software. My code is written in Python and requires access to a data file (.ini format) that will be stored on the user's desktop, inside the software's shortcuts folder.This being said, I want to be able to read/write from that file, but I can't simply reference the desktop as C:\USERS\DESKTOP\Parameters\ParameterUpdate.ini, since the absolute path will be different across different systems.Is there a way to ensure that I am referencing whatever the desktop's absolute path is? | In windows, desktop absolute path looks like this:%systemdrive%\users\%username%\DesktopSo this path will fit your requirements:%systemdrive%\users\%username%\Desktop\Parameters\ParameterUpdate.iniPlease make sure u don't actually mean public desktop path, with will be:%public%\Desktop\Parameters\ParameterUpdate.ini |
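Since the surrounding code is Python, a small sketch of building that path without hard-coding the drive or user name (caveat: if the desktop has been redirected, for example by OneDrive, ~/Desktop may not be its real location):

```python
from pathlib import Path
import configparser

# Path.home() resolves to the current user's profile folder on any machine,
# matching the %systemdrive%\users\%username%\Desktop\... form from the answer.
ini_path = Path.home() / "Desktop" / "Parameters" / "ParameterUpdate.ini"

config = configparser.ConfigParser()
config.read(ini_path)   # read the settings; write back with config.write(...) as needed
```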
Subscription modelling in Flask SQLAlchemy I am trying to model the following scenario in Flask SQLAlchemy:There are a list of SubscriptionPacks available for purchase. When a particular User buys a SubscriptionPack they start an instance of that Subscription.The model is as follows:A User can have many Subscriptions (only one of which will be Active at a time) and each Subscription will be referencing one SubscriptionPack.How would this be modelled in SQLAlchemy?Currently I have the User.id and SubscriptionPack.id referenced as db.ForeignKey in the Subscriptions model. And I have Subscriptions referenced as a db.Relationship in the Users table. This seems inconsistent and wrong and is leading me to have to hand-code a lot of SQL statements to return the right results.Any help as to how to do this right? | For those who stumble upon this, what I was looking for was the bidirectional SQLAlchemy Association Object pattern.This allows the intermediate table of a Many-to-Many to have it's own stored details. In my instance above the Subscription table needed to be an Association Object (has it's own class). |
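For anyone landing on this later, here is a minimal sketch of that pattern in Flask-SQLAlchemy terms; the class and field names are illustrative, not taken from the original project:

```python
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite://"
db = SQLAlchemy(app)

class Subscription(db.Model):
    """Association object: one row per purchased subscription,
    carrying its own state (e.g. whether it is currently active)."""
    __tablename__ = "subscription"
    id = db.Column(db.Integer, primary_key=True)
    user_id = db.Column(db.ForeignKey("user.id"), nullable=False)
    pack_id = db.Column(db.ForeignKey("subscription_pack.id"), nullable=False)
    is_active = db.Column(db.Boolean, default=False)

    user = db.relationship("User", back_populates="subscriptions")
    pack = db.relationship("SubscriptionPack", back_populates="subscriptions")

class User(db.Model):
    __tablename__ = "user"
    id = db.Column(db.Integer, primary_key=True)
    subscriptions = db.relationship("Subscription", back_populates="user")

class SubscriptionPack(db.Model):
    __tablename__ = "subscription_pack"
    id = db.Column(db.Integer, primary_key=True)
    subscriptions = db.relationship("Subscription", back_populates="pack")
```

The currently active subscription can then be fetched with, for example, Subscription.query.filter_by(user_id=some_user.id, is_active=True).first().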
Append information to failed tests I have some details I have to print out for a failed test. Right now I'm just outputting this information to STDOUT and I use the -s to see this information. But I would like to append this information to the test case details when it failed, and not need to use the -s option. | You can just keep printing to stdout and simply not use -s. If you do this py.test will put the details you printed next to the assertion failure message when the test fails, in a "captured stdout" section.When using -s things get worse since they are also printed to stdout even if the test passes and it also displays during the test run instead of nicely in a section of a failure report. |
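A tiny illustration (file and test names are made up): run pytest test_example.py without -s, and the printed details appear in the "Captured stdout call" section of the failure report only when the test fails:

```python
# test_example.py
def test_widget_count():
    details = {"expected": 3, "actual": 2, "source": "inventory.csv"}
    print(f"debug details: {details}")   # shown in the report only if the test fails
    assert details["actual"] == details["expected"]
```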
Install Numpy Requirement in a Dockerfile. Results in error I am attempting to install a numpy dependancy inside a docker container. (My code heavily uses it). On building the container the numpy library simply does not install and the build fails. This is on OS raspbian-buster/stretch. This does however work when building the container on MAC OS. I suspect some kind of python related issue, but can not for the life of me figure out how to make it work.I should point out that removing the pip install numpy from the requirements file and using it in its own RUN statement in the dockerfile does not solve the issue.The Dockerfile:FROM python:3.6ENV PYTHONUNBUFFERED 1ENV APP /appRUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezoneRUN mkdir $APPWORKDIR $APPADD requirements.txt .RUN pip install -r requirements.txtCOPY . .The requirements.txt contains all the project requirements, amounf which is numpy.Step 6/15 : RUN pip install numpy==1.14.3 ---> Running in 266a2132b078Collecting numpy==1.14.3 Downloading https://files.pythonhosted.org/packages/b0/2b/497c2bb7c660b2606d4a96e2035e92554429e139c6c71cdff67af66b58d2/numpy-1.14.3.zip (4.9MB)Building wheels for collected packages: numpy Building wheel for numpy (setup.py): started Building wheel for numpy (setup.py): still running... Building wheel for numpy (setup.py): still running...EDIT:So after the comment by skybunk and the suggestion to head to official docs, some more debugging on my part, the solution wound up being pretty simple. Thanks skybunk to you go all the glory. Yay.Solution:Use alpine and install python install package dependencies, upgrade pip before doing a pip install requirements.This is my edited Dockerfile - working obviously...FROM python:3.6-alpine3.7RUN apk add --no-cache --update \ python3 python3-dev gcc \ gfortran musl-dev \ libffi-dev openssl-devRUN pip install --upgrade pipENV PYTHONUNBUFFERED 1ENV APP /appRUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezoneRUN mkdir $APPWORKDIR $APPADD requirements.txt .RUN pip install -r requirements.txtCOPY . . | To use Numpy on python3 here, we first head over to the official documentation to find what dependencies are required to build Numpy.Mainly these 5 packages + their dependencies must be installed:Python3 - 70 mbPython3-dev - 25 mbgfortran - 20 mbgcc - 70 mbmusl-dev -10 mb (used for tracking unexpected behaviour/debugging)An POC setup would look something like this -Dockerfile:FROM gliderlabs/alpineADD repositories.txt /etc/apk/repositoriesRUN apk add --no-cache --update \ python3 python3-dev gcc \ gfortran musl-devADD requirements-pip.txt .RUN pip3 install --upgrade pip setuptools && \ pip3 install -r requirements-pip.txtADD . /appWORKDIR /appENV PYTHONPATH=/app/ENTRYPOINT python3 testscript.pyrepositories.txthttp://dl-5.alpinelinux.org/alpine/v3.4/mainrequirements-pip.txtnumpytestscript.pyimport numpy as npdef random_array(a, b): return np.random.random((a, b))a = random_array(2,2)b = random_array(2,2)print(np.dot(a,b))To run this - clone alpine, build it using "docker build -t gliderlabs/alpine ."Build and Run your Dockerfiledocker build -t minidocker .docker run minidockerOutput should be something like this-[[ 0.03573961 0.45351115][ 0.28302967 0.62914049]]Here's the git link, if you want to test it out |
Python program can not import dot parser I am trying to run a huge evolution simulating python software from the command line. The software is dependent on the following python packages:1-networkX 2-pyparsing3-numpy4-pydot 5-matplotlib6-graphvizThe error I get is this:Couldn't import dot_parser, loading of dot files will not be possible.initializing with file= initAdapt.py in model dir= ./Test_adaptation//Traceback (most recent call last): File "run_evolution.py", line 230, in <module> gr.write_dot( os.path.join(test_output_dir, 'test_net.dot') ) File "/Library/Python/2.7/site-packages/pydot.py", line 1602, in <lambda> lambda path, f=frmt, prog=self.prog : self.write(path, format=f, prog=prog)) File "/Library/Python/2.7/site-packages/pydot.py", line 1696, in write dot_fd.write(self.create(prog, format)) File "/Library/Python/2.7/site-packages/pydot.py", line 1740, in create self.write(tmp_name) File "/Library/Python/2.7/site-packages/pydot.py", line 1694, in write dot_fd.write(self.to_string()) File "/Library/Python/2.7/site-packages/pydot.py", line 1452, in to_string graph.append( node.to_string()+'\n' ) File "/Library/Python/2.7/site-packages/pydot.py", line 722, in to_string node_attr.append( attr + '=' + quote_if_necessary(value) )TypeError: cannot concatenate 'str' and 'int' objectsI have already tried the solution suggested for a similar question on stack overflow. I still get the same error. Here are the package versions I am using and my python version. I'm using python 2.7.6 typing the command which -a python yields the result: "/usr/bin/python".1-pyparsing (1.5.7)2-pydot (1.0.2)3-matplotlib (1.3.1)4-graphviz (0.4.2)5-networkx (0.37)6-numpy (1.8.0rc1)Any ideas? Seeing that the solution to similar questions is not working for me, I think the problem might be more fundamental in my case. Something wrong with the way I installed my python perhaps. | Any particular reason you're not using the newest version of pydot?This revision of 1.0.2 looks like it fixes exactly that problem:https://code.google.com/p/pydot/source/diff?spec=svn10&r=10&format=side&path=/trunk/pydot.pySee line 722. |
How to extract specific time period from Alpha Vantage in Python? outputsize='compact' is giving last 100 days, and outputsize='full' is giving whole history which is too much data. Any idea how to write a code that extract some specific period? ts=TimeSeries(key='KEY', output_format='pandas')data, meta_data = ts.get_daily(symbol='MSFT', outputsize='compact')print(data)Thanks. | This is how I was able to get the dates to workts = TimeSeries (key=api_key, output_format = "pandas")data_daily, meta_data = ts.get_daily_adjusted(symbol=stock_ticker, outputsize ='full')start_date = datetime.datetime(2000, 1, 1)end_date = datetime.datetime(2019, 12, 31) # Create a filtered dataframe, and change the order it is displayed. date_filter = data_daily[(data_daily.index > start_date) & (data_daily.index <= end_date)]date_filter = date_filter.sort_index(ascending=True)If you want to iterate trough the rows in the new dataframefor index, row in date_filter.iterrows(): |
How do I get hundreds of DLL files? I am using Python and I am trying to install the GDAL library. I kept getting an error telling me that many DLL files were missing, so I used the software Dependency Walker and it showed me that 330 DLL files were missing...My question is: How do I get that many files without downloading them one by one from a website? | First of all, never download .dll files from shady websites.The best way to repair missing dependencies is to completely reinstall the software that shipped the .dll files. |
How to change the performance metric from accuracy to precision, recall and other metrics in the code below? As a beginner in scikit-learn, and trying to classify the iris dataset, I'm having problems with adjusting the scoring metric from scoring='accuracy' to others like precision, recall, f1 etc., in the cross-validation step. Below is the full code sample (enough to start at # Test options and evaluation metric).# Load librariesimport pandasfrom pandas.plotting import scatter_matriximport matplotlib.pyplot as pltfrom sklearn import model_selection # for command model_selection.cross_val_scorefrom sklearn.metrics import classification_reportfrom sklearn.metrics import confusion_matrixfrom sklearn.metrics import accuracy_scorefrom sklearn.linear_model import LogisticRegressionfrom sklearn.tree import DecisionTreeClassifierfrom sklearn.neighbors import KNeighborsClassifierfrom sklearn.discriminant_analysis import LinearDiscriminantAnalysisfrom sklearn.naive_bayes import GaussianNBfrom sklearn.svm import SVC# Load dataseturl = "https://raw.githubusercontent.com/jbrownlee/Datasets/master/iris.csv"names = ['sepal-length', 'sepal-width', 'petal-length', 'petal-width', 'class']dataset = pandas.read_csv(url, names=names)# Split-out validation datasetarray = dataset.valuesX = array[:,0:4]Y = array[:,4]validation_size = 0.20seed = 7X_train, X_validation, Y_train, Y_validation = model_selection.train_test_split(X, Y, test_size=validation_size, random_state=seed)# Test options and evaluation metricseed = 7scoring = 'accuracy'#Below, we build and evaluate 6 different models# Spot Check Algorithmsmodels = []models.append(('LR', LogisticRegression()))models.append(('LDA', LinearDiscriminantAnalysis()))models.append(('KNN', KNeighborsClassifier()))models.append(('CART', DecisionTreeClassifier()))models.append(('NB', GaussianNB()))models.append(('SVM', SVC()))# evaluate each model in turn, we calculate the cv-scores, ther mean and std for each model# results = []names = []for name, model in models: #below, we do k-fold cross-validation kfold = model_selection.KFold(n_splits=10, random_state=seed) cv_results = model_selection.cross_val_score(model, X_train, Y_train, cv=kfold, scoring=scoring) results.append(cv_results) names.append(name) msg = "%s: %f (%f)" % (name, cv_results.mean(), cv_results.std()) print(msg)Now, apart from scoring ='accuracy', I'd like to evaluate other performance metrics for this multiclass classification problem. But when I use, scoring='precision', it raises:ValueError: Target is multiclass but average='binary'. Please choose another average setting.My questions are:1) I guess the above is happening because 'precision' and 'recall' are defined in scikit-learn only for binary classification-is that correct? If yes, then, which command(s) should replace scoring='accuracy' in the code above?2) If I want to compute the confusion matrix, precision and recall for each fold while performing the k-fold cross validation, what commands should I type? 3) For the sake of experimentation, I tried scoring='balanced_accuracy', only to find:ValueError: 'balanced_accuracy' is not a valid scoring value.Why is this happening, when the model evaluation documentation (https://scikit-learn.org/stable/modules/model_evaluation.html) clearly says balanced_accuracy is a scoring method? I'm quite confused here, so an actual code to show how to evaluate other performance etrics would be appreciated! Thanks inn advance!! 
| 1) I guess the above is happening because 'precision' and 'recall' are defined in scikit-learn only for binary classification-is that correct?No. Precision & recall are certainly valid for multi-class problems, too - see the docs for precision & recall. If yes, then, which command(s) should replace scoring='accuracy' in the code above?The problem arises because, as you can see from the documentation links I have provided above, the default setting for these metrics is for binary classification (average='binary'). In your case of multi-class classification, you need to specify which exact "version" of the particular metric you are interested in (there are more than one); have a look at the relevant page of the scikit-learn documentation, but some valid options for your scoring parameter could be:'precision_macro''precision_micro''precision_weighted''recall_macro''recall_micro''recall_weighted'The documentation link above contains even an example of using 'recall_macro' with the iris data - be sure to check it. 2) If I want to compute the confusion matrix, precision and recall for each fold while performing the k-fold cross validation, what commands should I type? This is not exactly trivial, but you can see a way in my answer for Cross-validation metrics in scikit-learn for each data split 3) For the sake of experimentation, I tried scoring='balanced_accuracy', only to find: ValueError: 'balanced_accuracy' is not a valid scoring value.This is because you are probably using an older version of scikit-learn. balanced_accuracy became available only in v0.20 - you can verify that it is not available in v0.18. Upgrade your scikit-learn to v0.20 and you should be fine. |
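As a concrete sketch of point 1, reusing X_train, Y_train and seed from the question's script (metric names as in scikit-learn 0.20+; cross_validate accepts several metrics at once):

```python
from sklearn.model_selection import KFold, cross_validate
from sklearn.linear_model import LogisticRegression

# X_train, Y_train and seed come from the script above
kfold = KFold(n_splits=10, shuffle=True, random_state=seed)  # shuffle so random_state has an effect
scores = cross_validate(LogisticRegression(), X_train, Y_train, cv=kfold,
                        scoring=['precision_macro', 'recall_macro', 'f1_macro'])
print(scores['test_precision_macro'].mean())
print(scores['test_recall_macro'].mean())
```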
Cannot find django.views.generic . Where is generic. Looked in all folders for the file I know this is a strange question but I am lost on what to do. i cloned pinry... It is working and up . I am trying to find django.views.generic. I have searched the directory in my text editor, I have looked in django.views. But I cannot see generic (only a folder with the name "generic"). I cant understand where the generic file is . It is used in many imports and to extend classes but I cannot find the file to see the import functions. I have a good understanding of files and imports and i would say at this stage I am just above noob level. So is there something I am missing here. How come i cannot find this file? If i go to from django.core.urlresolvers import reverse, I can easly find this but not eg : from django.views.generic import CreateViewWhere is generic? | Try running this from a Python interpreter: >>> import django.views.generic>>> django.views.generic.__file__This will show you the location of the gerneric as a string path. In my case the output is:'/.../python3.5/site-packages/django/views/generic/__init__.py'If you look at this __init__.py you will not see the code for any of the generic *View classes. However, these classes can still be imported from the path django.views.generic (if I am not mistaken, this is because the *View classes are part of the __all__ list in django/views/generic/__init__.py). In the case of CreateView, it is actually in django/views/generic/edit.py, although it can be imported from django.views.generic, because of the way the __init__.py is set up.This is technique is generally useful when you want to find the path to a .py file. Also useful: if you use it on its own in a script (print(__file__)), it will give you the path to the script itself. |
Django- limit_choices_to using 2 different tables I fear that what I am trying to do might be impossible but here we go:Among my models, I have the followingClass ParentCategory(models.Model): name = models.CharField(max_length=128) def __unicode__(self): return self.name Class Category(models.Model): parentCategory = models.ForeignKey(ParentCategory, on_delete=models.CASCADE, ) name = models.CharField(max_length=128) def __unicode__(self): return self.nameClass Achievement(models.Model): milestone = models.ForeignKey(Milestone, on_delete=models.CASCADE) description = models.TextField( ) level_number = models.IntegerField() completeion_method = models.ForeignKey(Category, on_delete = models.CASCADE, limit_choices_to={'parentCategory.name':'comp method'}) def __unicode__(self): # TODO: return description[0,75] + '...'I know the completion method field throws an error because it is not correct syntax. But is there a way to achieve the wanted result using a similar method? | Maybe this will work:limit_choices_to={'parentCategory__name': 'comp method'} |
How to reduce the retry count for kubernetes cluster in kubernetes-client-python I need to reduce the retry count for unavailable/deleted kubernetes cluster using kubernetes-client-python, currently by default it is 3.WARNING Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x00000000096E3860>: Failed to establish a new connection: [Errno 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond',)': /api/v1/podsWARNING Retrying (Retry(total=1,....... /api/v1/podsWARNING Retrying (Retry(total=0,....... /api/v1/podsAfter 3 retries it throws an exception.Is there any way to reduce the count.Example Codefrom kubernetes import client, configconfig.load_kube_config(config_file='location-for-kube-config')v1 = client.CoreV1Api()ret = v1.list_pod_for_all_namespaces()for i in ret.items: print("%s\t%s\t%s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name)) | Sadly it seems that it's not possible because:Python client use urlib3 PoolManager to make requests as you can see there https://github.com/kubernetes-client/python/blob/master/kubernetes/client/rest.py#L162r = self.pool_manager.request(method, url, body=request_body, preload_content=_preload_content, timeout=timeout, headers=headers)and underhood it uses urlopen with default parameters as you can see therehttps://urllib3.readthedocs.io/en/1.2.1/pools.html#urllib3.connectionpool.HTTPConnectionPool.urlopenurlopen(..., retries=3, ...)so there is now way to pass other value here - you must fork official lib to achieve that. |
Sum value by group by and cumulatively add to separate list or numpy array cumulatively and use the last value in conditional statement I want to sum the values for multi-level index pandas dataframe. I would then like to add this value to another value in a cumulative fashion. I would then like to use a conditional statement which is dependant on the last value of this cumulative list for the next index value of the same level.I have been able to sum the values for of the multi-level index but unable to add this cumulatively to a list which I have stored separately. Here is a snippet of my dataframe. There is rather a lot of code but I feel it is required to fully explain my problem:import pandas as pdimport numpy as npbalance = [20000]data = {'EVENT_ID': [112335580,112335580,112335580,112335580,112335580,112335580,112335580,112335580, 112335582, 112335582,112335582,112335582,112335582,112335582,112335582,112335582,112335582,112335582, 112335582,112335582,112335582], 'SELECTION_ID': [6356576,2554439,2503211,6297034,4233251,2522967,5284417,7660920,8112876,7546023,8175276,8145908, 8175274,7300754,8065540,8175275,8106158,8086265,2291406,8065533,8125015], 'BSP': [5.080818565,6.651493872,6.374683435,24.69510797,7.776082305,11.73219964,270.0383021,4,8.294425408,335.3223613, 14.06040142,2.423340019,126.7205863,70.53780982,21.3328554,225.2711962,92.25113066,193.0151362,3.775394142, 95.3786641,17.86333041], 'WIN_LOSE':[0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0], 'INDICATOR': [1,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0], 'POT_BET': [2.258394,2.257205,2.255795,2.255495,2.254286,2.250119,2.237375,2.120843,2.256831,2.253802,2.244174,2.232902, 2.226021,2.220088,2.160382,2.143235,2.141063,2.122452,2.095736,2.086548,2.065200], 'LIABILITY': [2.258394,2.257205,12.124184,12.746919,15.275225,24.148729,53.014851,570.587899,2.256831,6.255188, 16.369963,29.162601,37.538122,45.140722,150.228225,195.572610,202.070630,266.835913,402.412997, 467.952670,690.442601]}df = pd.DataFrame(data, columns=['EVENT_ID','SELECTION_ID','BSP','WIN_LOSE','INDICATOR','POT_BET','LIABILITY'])df = df.sort_values(["EVENT_ID",'BSP']) df.set_index(['EVENT_ID', 'SELECTION_ID'], inplace=True) df['BET'] = np.where(df.groupby(level = 0)['LIABILITY'].transform('sum') < 0.75*balance[-1], df['POT_BET'], 0)df.loc[(df.INDICATOR == 1) & (df.WIN_LOSE == 1), 'RESULT'] = df['BSP'] * df['BET'] - df['BET']df.loc[(df.INDICATOR == 1) & (df.WIN_LOSE == 0), 'RESULT'] = - df['BET']df.loc[(df.INDICATOR == 0) & (df.WIN_LOSE == 0), 'RESULT'] = df['BET']df.loc[(df.INDICATOR == 0) & (df.WIN_LOSE == 1), 'RESULT'] = -df['BSP'] * df['BET'] + df['BET']results = df.groupby('EVENT_ID')['RESULT'].sum()balance.append(results)This yields the following result for the balance list:[20000, EVENT_ID 112335580 23.872099 112335582 -22.304487 Name: RESULT, dtype: float64]I expect the balance list to be:balance = [20000, 20023.8721, 20001.56761]It is important to note that the balance value should change for each iteration and this new value used in the conditional statement.I am also not sure that a list is the most efficient way to achieve my goals but that is a slightly different question. Cheers,Sandy | Let's change balance to a pd.Series:balance = pd.Series([20000])Your code#change this linedf['BET'] = np.where(df.groupby(level = 0)['LIABILITY'].transform('sum') < 0.75*balance.values.tolist()[-1], df['POT_BET'], 0)Your codebalance = pd.concat([balance, results]).cumsum().tolist()Output:[20000.0, 20023.872099225347, 20001.567612410585] |
shifting specific column to before/after specific column in dataframe In dataframe example : medcine_preg_oth medcine_preg_oth1 medcine_preg_oth2 medcine_preg_oth3 0 Berplex Berplex None None 1 NaN NaN NaN NaN 2 NaN NaN NaN NaN 3 obmin obmin None None 4 NaN NaN NaN NaN The three columns 'medcine_preg_oth1', 'medcine_preg_oth2', 'medcine_preg_oth3' sit somewhere in the dataframe among other columns. I want to shift these three columns so that they come right after 'medcine_preg_oth'. More generally, my idea is to be able to shift specific columns to a place after/before other specific columns in a dataframe. Please suggest a way to do this! Thanks | You can re-arrange your columns like this: re_ordered_columns = ['medcine_preg_oth','medcine_preg_oth1','medcine_preg_oth2','medcine_preg_oth3'] df = df[re_ordered_columns + df.columns.difference(re_ordered_columns).tolist()] This selects the four columns in the desired order first and then appends all of the remaining columns after them.
padding a batch with 0 vectors in dynamic rnn I have a prediction task working with variable sequences of input data. Directly using a dynamic rnn will run into the trouble of splitting the outputs according to this post: Using a variable for num_splits for tf.split() So, I am wondering if it is possible to pad an entire batch of sequences so that all examples have the same number of sequences, and then in the sequence_length parameter of tf.nn.dynamic_rnn give a length of 0 for the padded sequences. Would this work? | These days (2022) two methods you can use to pad sequences in tensorflow are using a tf.data.Dataset pipeline, or preprocessing with tf.keras.utils.pad_sequences. Method 1: Use Tensorflow Pipelines (tf.data.Dataset) The padded_batch() method can be used in place of a normal batch() method to pad the elements of a tf.data.Dataset object when batching for model training: https://www.tensorflow.org/api_docs/python/tf/data/Dataset#padded_batch The 'batching tensors with padding' pipeline is also described here: https://www.tensorflow.org/guide/data#batching_tensors_with_padding The call signature is: padded_batch( batch_size, padded_shapes=None, padding_values=None, drop_remainder=False, name=None) An example for your use case of inputting to an RNN is: import tensorflow as tf import numpy as np # input is a ragged tensor of different sequence lengths inputs = tf.ragged.constant([[1], [2, 3], [4, 5, 6]], dtype = tf.float32) # construct dataset using tf.data.Dataset dataset = tf.data.Dataset.from_tensor_slices(inputs) # convert ragged tensor to dense tensor to avoid TypeError dataset = dataset.map(lambda x: x) # pad sequences using padded_batch dataset = dataset.padded_batch(3) # take the single padded batch out of the dataset batch = next(iter(dataset)) # run the batch through a simple RNN model simple_rnn = tf.keras.Sequential([ tf.keras.layers.SimpleRNN(4)]) output = simple_rnn(batch) Note that this method does not allow you to use pre-padding, the method is always post-padding. However, you can use the padded_shapes argument to specify the sequence length. Method 2: Preprocess sequence as nested list using Keras pad_sequences Keras (a package sitting on top of Tensorflow since version 2.0) provides a utility function to truncate and pad Python lists to a common length: https://www.tensorflow.org/api_docs/python/tf/keras/utils/pad_sequences The call signature is: tf.keras.utils.pad_sequences( sequences, maxlen=None, dtype='int32', padding='pre', truncating='pre', value=0.0) From the documentation: This function transforms a list (of length num_samples) of sequences (lists of integers) into a 2D Numpy array of shape (num_samples, num_timesteps). num_timesteps is either the maxlen argument if provided, or the length of the longest sequence in the list. Sequences that are shorter than num_timesteps are padded with value until they are num_timesteps long. Sequences longer than num_timesteps are truncated so that they fit the desired length. The position where padding or truncation happens is determined by the arguments padding and truncating, respectively.
Pre-padding or removing values from the beginning of the sequence is the default. An example for your use case of inputting to an RNN: import tensorflow as tf import numpy as np # inputs is list of varying length sequences with batch size (list length) 3 inputs = [[1], [2, 3], [4, 5, 6]] # pad the sequences with 0's using pre-padding (default values) inputs = tf.keras.preprocessing.sequence.pad_sequences(inputs, dtype = np.float32) # add an outer batch dimension for RNN input inputs = tf.expand_dims(inputs, axis = 0) # run the batch through a simple RNN layer simple_rnn = tf.keras.layers.SimpleRNN(4) output = simple_rnn(inputs)
Login to a website then open it in browser I am trying to write a Python 3 code that logins in to a website and then opens it in a web browser to be able to take a screenshot of it.Looking online I found that I could do webbrowser.open('example.com')This opens the website, but cannot login.Then I found that it is possible to login to a website using the request library, or urllib. But the problem with both it that they do not seem to provide the option of opening a web page.So how is it possible to login to a web page then display it, so that a screenshot of that page could be takenThanks | Have you considered Selenium? It drives a browser natively as a user would, and its Python client is pretty easy to use. Here is one of my latest works with Selenium. It is a script to scrape multiple pages from a certain website and save their data into a csv file:import osimport timeimport csvfrom selenium import webdrivercols = [ 'ies', 'campus', 'curso', 'grau_turno', 'modalidade', 'classificacao', 'nome', 'inscricao', 'nota']codigos = [ 96518, 96519, 96520, 96521, 96522, 96523, 96524, 96525, 96527, 96528]if not os.path.exists('arquivos_csv'): os.makedirs('arquivos_csv')options = webdriver.ChromeOptions()prefs = { 'profile.default_content_setting_values.automatic_downloads': 1, 'profile.managed_default_content_settings.images': 2}options.add_experimental_option('prefs', prefs)# Here you choose a webdriver ("the browser")browser = webdriver.Chrome('chromedriver', chrome_options=options)for codigo in codigos: time.sleep(0.1) # Here is where I set the URL browser.get(f'http://www.sisu.mec.gov.br/selecionados?co_oferta={codigo}') with open(f'arquivos_csv/sisu_resultados_usp_final.csv', 'a') as file: dw = csv.DictWriter(file, fieldnames=cols, lineterminator='\n') dw.writeheader() ies = browser.find_element_by_xpath('//div[@class ="nome_ies_p"]').text.strip() campus = browser.find_element_by_xpath('//div[@class ="nome_campus_p"]').text.strip() curso = browser.find_element_by_xpath('//div[@class ="nome_curso_p"]').text.strip() grau_turno = browser.find_element_by_xpath('//div[@class = "grau_turno_p"]').text.strip() tabelas = browser.find_elements_by_xpath('//table[@class = "resultado_selecionados"]') for t in tabelas: modalidade = t.find_element_by_xpath('tbody//tr//th[@colspan = "4"]').text.strip() aprovados = t.find_elements_by_xpath('tbody//tr') for a in aprovados[2:]: linha = a.find_elements_by_class_name('no_candidato') classificacao = linha[0].text.strip() nome = linha[1].text.strip() inscricao = linha[2].text.strip() nota = linha[3].text.strip().replace(',', '.') dw.writerow({ 'ies': ies, 'campus': campus, 'curso': curso, 'grau_turno': grau_turno, 'modalidade': modalidade, 'classificacao': classificacao, 'nome': nome, 'inscricao': inscricao, 'nota': nota })browser.quit()In short, you set preferences, choose a webdriver (I recommend Chrome), point to the URL and that's it. The browser is automatically opened and start executing your instructions.I have tested using it to log in and it works fine, but never tried to take screenshot. It theoretically should do. |
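If all you need is a screenshot after logging in, Selenium can take one directly with save_screenshot(). A minimal sketch, where the URL and the form field names are placeholders you would have to adapt to the real login page:

from selenium import webdriver

driver = webdriver.Firefox()
driver.get("https://example.com/login")                       # placeholder URL
driver.find_element_by_name("username").send_keys("my_user")  # placeholder field names
driver.find_element_by_name("password").send_keys("my_pass")
driver.find_element_by_name("submit").click()
driver.save_screenshot("after_login.png")                     # writes a PNG of the rendered page
driver.quit()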
scrapy callback doesnt work in function When executing the first yield it will not go into the function parse_url and when executing the second yield it will not go back the function parse and it just end. During the whole process, there are no exceptions. I don't know how to deal with this problem, I need help.import scrapyimport refrom crawlurl.items import CrawlurlItemclass HouseurlSpider(scrapy.Spider): name = 'houseurl' allowed_domains = ['qhd.58.com/ershoufang/'] start_urls = ['http://qhd.58.com/ershoufang//'] header = { 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.221 Safari/537.36 SE 2.X MetaSr 1.0' } def parse(self, response): urls = response.xpath('//div[@class="list-info"]/h2[@class="title"]/a/@href').extract() next_url = response.xpath('//a[@class="next"]/@href').extract() for url in urls: yield scrapy.Request(url,headers=self.header,callback=self.parse_url) if next_url: next_url = next_url[0] yield scrapy.Request(next_url,headers=self.header,callback=self.parse) def parse_url(self,response): item = CrawlurlItem() url_obj = re.search('(http://qhd.58.com/ershoufang/\d+x.shtml).*',response.url) url = url_obj.group(1) item['url'] = url yield item | If you carefully looked at the logs then you might have noticed that scrapy filtered offsite domain requests. This means when scrapy tried to ping short.58.com and jxjump.58.com, it did not follow through. You can add those domains to the allowed_domains filter in your Spider class and you will see the requests being sent.Replace:allowed_domains = ['qhd.58.com/ershoufang/']With:allowed_domains = ['qhd.58.com', 'short.58.com', 'jxjump.58.com']And it should work! |
Splitting HTML text by while using beautifulsoup HTML code:<td> <label class="identifier">Speed (avg./max):</label> </td> <td class="value"> <span class="block">4.5 kn<br>7.1 kn</span> </td>I need to get values 4.5 kn and 7.1 as separate list items so I could append them separately. I do not want to split it I wanted to split the text string using re.sub, but it does not work. I tried too use replace to replace br, but it did not work. Can anybody provide any insight?Python code: def NameSearch(shipLink, mmsi, shipName): from bs4 import BeautifulSoup import urllib2 import csv import re values = [] values.append(mmsi) values.append(shipName) regex = re.compile(r'[\n\r\t]') i = 0 with open('Ship_indexname.csv', 'wb')as f: writer = csv.writer(f) while True: try: shipPage = urllib2.urlopen(shipLink, timeout=5) except urllib2.URLError: continue except: continue break soup = BeautifulSoup(shipPage, "html.parser") # Read the web page HTML #soup.find('br').replaceWith(' ') #for br in soup('br'): #br.extract() table = soup.find_all("table", {"id": "vessel-related"}) # Finds table with class table1 for mytable in table: #Loops tables with class table1 table_body = mytable.find_all('tbody') #Finds tbody section in table for body in table_body: rows = body.find_all('tr') #Finds all rows for tr in rows: #Loops rows cols = tr.find_all('td') #Finds the columns for td in cols: #Loops the columns checker = td.text.encode('ascii', 'ignore') check = regex.sub('', checker) if check == ' Speed (avg./max): ': i = 1 elif i == 1: print td.text pat=re.compile('<br\s*/>') print pat.sub(" ",td.text) values.append(td.text.strip("\n").encode('utf-8')) #Takes the second columns value and assigns it to a list called Values i = 0 #print values return valuesNameSearch('https://www.fleetmon.com/vessels/kind-of-magic_0_3478642/','230034570','KIND OF MAGIC') | Locate the "Speed (avg./max)" label first and then go to the value via .find_next():from bs4 import BeautifulSoup data = '<td> <label class="identifier">Speed (avg./max):</label> </td> <td class="value"> <span class="block">4.5 kn<br>7.1 kn</span> </td>'soup = BeautifulSoup(data, "html.parser")label = soup.find("label", class_="identifier", text="Speed (avg./max):")value = label.find_next("td", class_="value").get_text(strip=True)print(value) # prints 4.5 kn7.1 knNow, you can extract the actual numbers from the string:import respeed_values = re.findall(r"([0-9.]+) kn", value)print(speed_values)Prints ['4.5', '7.1'].You can then further convert the values to floats and unpack into separate variables:avg_speed, max_speed = map(float, speed_values) |
Repeating if statement I am having a problem with my code mapping a random walk in 3D space. The purpose of this code is to simulate N steps of a random walk in 3 dimensions. At each step, a random direction is chosen (north, south, east, west, up, down) and a step of size 1 is taken in that direction. Here is my code:import random # this helps us generate random numbersN = 30 # number of stepsn = random.random() # generate a random numberx = 0y = 0z = 0count = 0 while count <= N: if n < 1/6: x = x + 1 # move east n = random.random() # generate a new random number if n >= 1/6 and n < 2/6: y = y + 1 # move north n = random.random() # generate a new random number if n >= 2/6 and n < 3/6: z = z + 1 # move up n = random.random() # generate a new random number if n >= 3/6 and n < 4/6: x = x - 1 # move west n = random.random() # generate a new random number if n >= 4/6 and n < 5/6: y = y - 1 # move south n = random.random() # generate a new random number if n >= 5/6: z = z - 1 # move down n = random.random() # generate a new random number print("(%d,%d,%d)" % (x,y,z)) count = count + 1print("squared distance = %d" % (x*x + y*y + z*z)) The problem is I am getting more than a single step between each iteration. I've added comments showing the difference in steps between iterations.Here are the first 10 lines of the output:(0,-1,0) #1 step (0,-2,0) #1 step (1,-3,1) #4 steps (1,-4,1) #1 step (1,-3,1) #1 step (1,-2,1) #1 step (2,-2,0) #2 steps (2,-2,0) #0 steps (2,-2,0) #0 steps (2,-1,0) #1 step | If you remove the multiple n = random.random() from within the if statements and replace by a single n = random.random() at start of the while loop then there will be only one step per loop. |
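As a sketch, drawing the random number once at the top of the loop (and using elif so only one branch can fire per iteration) looks like this, assuming Python 3 so that 1/6 is float division:

import random

N = 30
x = y = z = 0
for step in range(N):
    n = random.random()   # one draw per step
    if n < 1/6:
        x += 1            # east
    elif n < 2/6:
        y += 1            # north
    elif n < 3/6:
        z += 1            # up
    elif n < 4/6:
        x -= 1            # west
    elif n < 5/6:
        y -= 1            # south
    else:
        z -= 1            # down
    print("(%d,%d,%d)" % (x, y, z))
print("squared distance = %d" % (x*x + y*y + z*z))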
Trying to create and use a class; name 'is_empty' is not defined I'm trying to create a class called Stack (it's probably not very useful for writing actual programmes, I'm just doing it to learn about creating classes in general) and this is my code, identical to the example in the guide I'm following save for one function name:class Stack: def __init__(self): self.items = [] def is_empty(self): return self.items == [] def push(self,item): self.items.append(item) def pop(self): return self.items.pop() def peek(self): return self.items[len(self.items)-1] def size(self): return len(self.items)I saved it in a file called stack.py and tested it with this:from stack import Stackmy_stack = Stack()print(is_empty(my_stack))but I got this error message:Mac:python mac$ python3 stacktest.pyTraceback (most recent call last): File "stacktest.py", line 5, in <module> print(is_empty(my_stack))NameError: name 'is_empty' is not definedThe guide in question has something called activecode, which is basically Python installed on the browser so you can run example programmes on it, and is_empty(my_stack) returns True like it should. What am I doing wrong?EDIT: Yeah, it's actually my_stack.is_empty(). I mixed classes up with functions AND misread the guide. | The method is_empty() is part of the class. To call it you need to my_stack.is_empty() |
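A quick usage sketch of the class as written, calling the methods on the instance rather than as free functions:

from stack import Stack

my_stack = Stack()
print(my_stack.is_empty())  # True
my_stack.push(42)
print(my_stack.is_empty())  # False
print(my_stack.peek())      # 42
print(my_stack.pop())       # 42
print(my_stack.size())      # 0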
How to check if a component exists in a Tk object/window? I would like to know the most efficient way to check if a certain component (label, button or entry) already exists on the Tk object/window. I have searched on the web for a while and the only thing I found is: if component.winfo_exists(): # But this doesn't work for me (I am using Python 3.4) I have also tried something (stupid, of course, since label is not a boolean) like this: if not self.label: self.label = Label(self, text="Label") I have also tried to invent something cleverer like this: if not self.label.exists(): self.label = Label(self, text="Label") Since I am a noob using tkinter, I am probably missing something. EDIT This is the whole class: class Form(Tk): def __init__(self): Tk.__init__(self) self.label_question = Label().pack(side=LEFT) self.text = StringVar() self.entry = Entry().pack(side=LEFT) self.button = Button(text="Show", command=self.showName).pack(side=LEFT) self.label = None # Initializing to None def showName(self): self.name = self.text.get() if not self.label: self.label = Label().pack(side=LEFT) | I think your second approach is good enough. self.label = None # Initialize `self.label` as None somewhere ... if not self.label: self.label = Label(self, text="Label") This will work because, before the label is created, self.label evaluates as false when used as a predicate (bool(None) is False), and it will evaluate as truthy once the label is set. UPDATE The following line is not what you want, because pack does not return anything. self.label = Label().pack(side=LEFT) # pack returns nothing -> None self.label becomes None after that statement. You should separate the label creation and the packing: self.label = Label() self.label.pack(side=LEFT)
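For completeness, winfo_exists() does work in Python 3, but only on a widget object that has actually been created: it tells you whether the underlying Tk widget still exists (e.g. after destroy()), not whether an attribute has been assigned yet. A minimal sketch:

import tkinter as tk

root = tk.Tk()
label = tk.Label(root, text="Label")
print(label.winfo_exists())  # 1 while the widget exists
label.destroy()
print(label.winfo_exists())  # 0 after it has been destroyed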
How to determine the version of PyJWT? I have two different software environments (Environment A and Environment B) and I'm trying to run PyJWT on both environments. It is working perfectly fine on one environment Environment A but fail on Environment B. The error I'm getting on Environment B when I call jwt.encode() with algorithm == ES is: Algorithm not supported.I'm trying to figure out why it works on Environment A but not Environment B. It seems like the two environments have different versions of PyJWT installed. But determining which version of PyJWT is installed on Environment B is proving difficult for me. How can I do it??I ran the following instrumented code on both Environment A and Environment B:import jwt, cryptography, sys, pkg_resourcesmy_private_key = """XXXXX"""my_public_key = """YYYYYY"""original = {"Hello": "World"}print "sys.version = {}".format(str(sys.version))try: print "dir(jwt) = {}".format(str(dir(jwt)))except Exception as e: print "Failed to get dir of jwt module: {}".format(e)try: print "dir(cryptography) = {}".format(str(dir(cryptography)))except Exception as e: print "Failed to get dir of cryptography module: {}".format(e)try: print "jwt = {}".format(str(jwt.__version__))except Exception as e: print "Failed to get version of jwt module using .__version: {}".format(e)try: print "cryptography = {}".format(str(cryptography.__version__))except Exception as e: print "Failed to get version of cryptography module using .__version: {}".format(e)try: print "pkg_resources.require('jwt')[0].version = {}".format(str(pkg_resources.require("jwt")[0].version))except Exception as e: print "Failed to get version of jwt module via pkg_resources: {}".format(e)try: print "pkg_resources.require('cryptography')[0].version = {}".format(str(pkg_resources.require("cryptography")[0].version))except Exception as e: print "Failed to get version of cryptography module via pkg_resources: {}".format(e)try: print "original = {}".format(str(original)) encoded = jwt.encode(original, my_private_key, algorithm='ES256')except Exception as e: print "encoding exception = {}".format(str(e))else: try: print "encoded = {}".format(str(encoded)) unencoded = jwt.decode(encoded, my_public_key, algorithms=['ES256']) except Exception as e: print "decoding exception = {}".format(str(e)) else: print "unencoded = {}".format(str(unencoded))On Environment A, the encoding succeeds:sys.version = 2.7.12 (default, Sep 1 2016, 22:14:00)[GCC 4.8.3 20140911 (Red Hat 4.8.3-9)]dir(jwt) = ['DecodeError', 'ExpiredSignature', 'ExpiredSignatureError', 'ImmatureSignatureError', 'InvalidAudience', 'InvalidAudienceError', 'InvalidIssuedAtError', 'InvalidIssuer', 'InvalidIssuerError', 'InvalidTokenError', 'MissingRequiredClaimError', 'PyJWS', 'PyJWT', '__author__', '__builtins__', '__copyright__', '__doc__', '__file__', '__license__', '__name__', '__package__', '__path__', '__title__', '__version__', 'algorithms', 'api_jws', 'api_jwt', 'compat', 'decode', 'encode', 'exceptions', 'get_unverified_header', 'register_algorithm', 'unregister_algorithm', 'utils']dir(cryptography) = ['__about__', '__all__', '__author__', '__builtins__', '__copyright__', '__doc__', '__email__', '__file__', '__license__', '__name__', '__package__', '__path__', '__summary__', '__title__', '__uri__', '__version__', 'absolute_import', 'division', 'exceptions', 'hazmat', 'print_function', 'sys', 'utils', 'warnings']jwt = 1.4.2cryptography = 1.5.2Failed to get version of jwt module via pkg_resources: jwtpkg_resources.require('cryptography')[0].version = 
1.5.2original = {'Hello': 'World'}encoded = eyJhbGciOiJFUzI1NiIsInR5cCI6IkpXVCJ9.eyJIZWxsbyI6IldvcmxkIn0.ciaXCcO2gTqsQ4JUEKj5q4YX6vfHu33XY32g2MNIVEDXHNllpuqDCj-cCrlGPf6hGNifAJbNI9kBaAyuCIwyJQunencoded = {u'Hello': u'World'}On Environment B the the encoding fails. You can see that I cannot tell what version of PyJWT is running. However this version of PyJWT doesn't have the algorithm ES256 that I'm trying to use: sys.version = 2.7.12 (default, Sep 1 2016, 22:14:00) [GCC 4.8.3 20140911 (Red Hat 4.8.3-9)]"dir(jwt) = ['DecodeError', 'ExpiredSignature', 'Mapping', 'PKCS1_v1_5', 'SHA256', 'SHA384', 'SHA512', '__all__', '__builtins__', '__doc__', '__file__', '__name__', '__package__', '__path__', 'base64', 'base64url_decode', 'base64url_encode', 'binascii', 'constant_time_compare', 'datetime', 'decode', 'encode', 'hashlib', 'header', 'hmac', 'json', 'load', 'signing_methods', 'sys', 'timegm', 'unicode_literals', 'verify_methods', 'verify_signature']dir(cryptography) = ['__about__', '__all__', '__author__', '__builtins__', '__copyright__', '__doc__', '__email__', '__file__', '__license__', '__name__', '__package__', '__path__', '__summary__', '__title__', '__uri__', '__version__', 'absolute_import', 'division', 'print_function', 'sys', 'warnings']Failed to get version of jwt module using .__version: 'module' object has no attribute '__version__'cryptography = 1.5.2Failed to get version of jwt module via pkg_resources: jwtpkg_resources.require('cryptography')[0].version = 1.5.2original = {'Hello': 'World'}encoding exception = Algorithm not supported | The PyJWT .__version__ attribute appeared in 0.2.2 in this commit.Generally, to find the version of the package, that was installed via setuptools, you need to run following code:import pkg_resourcesprint pkg_resources.require("jwt")[0].versionIf pip was used to install the package, you could try from linux shell:pip show jwt | grep VersionSame thing from inside the python:import pipprint next(pip.commands.show.search_packages_info(['jwt']))['version'] |
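One detail worth noting: the library is distributed under the name PyJWT, not jwt, which is most likely why pkg_resources.require("jwt") fails in both environments above. Querying by the distribution name should work, assuming the library was installed with pip/setuptools:

import pkg_resources

try:
    print(pkg_resources.get_distribution("PyJWT").version)
except pkg_resources.DistributionNotFound:
    print("No distribution named 'PyJWT' is installed")

From the shell, pip show PyJWT gives the same information; if it reports nothing on Environment B, that environment is probably running a different, older package that also installs a top-level jwt module.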
Setting up proxy with selenium / python I am using selenium with python.I need to configure a proxy.It is working for HTTP but not for HTTPS.The code I am using is:# configure firefoxprofile = webdriver.FirefoxProfile()profile.set_preference("network.proxy.type", 1)profile.set_preference("network.proxy.http", '11.111.11.11')profile.set_preference("network.proxy.http_port", int('80'))profile.update_preferences()# launchdriver = webdriver.Firefox(firefox_profile=profile)driver.get('https://www.iplocation.net/find-ip-address')Also. Is there a way for me to completely block any outgoing traffic from my IP and restrict it ONLY to the proxy IP so that I don't accidently mess up the test/stats by accidently switching from proxy to direct connection?Any tips would help!Thanks :) | Check out browsermob proxy for setting up a proxies for use with seleniumfrom browsermobproxy import Serverserver = Server("path/to/browsermob-proxy")server.start()proxy = server.create_proxy()from selenium import webdriverprofile = webdriver.FirefoxProfile()profile.set_proxy(proxy.selenium_proxy())driver = webdriver.Firefox(firefox_profile=profile)proxy.new_har("google")driver.get("http://www.google.co.uk")proxy.har # returns a HAR JSON blobserver.stop()driver.quit()You can use a remote proxy server with the RemoteServer class. Is there a way for me to completely block any outgoing traffic from my IP and restrict it ONLY to the proxy IPYes, just look up how to setup proxies for whatever operating system you're using. Just use caution because some operating systems will ignore proxy rules based on certain conditions, for example, if using a VPN connection. |
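As a side note on why HTTPS traffic bypasses the proxy in the question: the profile only sets the HTTP proxy preferences. If you want to stay with the FirefoxProfile approach, you would also set the SSL preferences; a minimal sketch reusing the same placeholder address:

from selenium import webdriver

profile = webdriver.FirefoxProfile()
profile.set_preference("network.proxy.type", 1)
profile.set_preference("network.proxy.http", "11.111.11.11")
profile.set_preference("network.proxy.http_port", 80)
profile.set_preference("network.proxy.ssl", "11.111.11.11")   # proxy for HTTPS traffic
profile.set_preference("network.proxy.ssl_port", 80)
profile.update_preferences()
driver = webdriver.Firefox(firefox_profile=profile)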
link in html do not function python 2.7 DJANGO 1.11.14 win7when I click the link in FWinstance_list_applied_user.html it was supposed to jump to FW_detail.html but nothing happenedurl.pyurlpatterns += [ url(r'^myFWs/', views.LoanedFWsByUserListView.as_view(), name='my-applied'), url(r'^myFWs/(?P<pk>[0-9]+)$', views.FWDetailView.as_view(), name='FW-detail'),views.py:class FWDetailView(LoginRequiredMixin,generic.ListView): model = FW template_name = 'FW_detail.html'models.py class FW(models.Model): ODM_name = models.CharField(max_length=20) project_name = models.CharField(max_length=20)FW_detail.html{% block content %}<h1>FW request information: {{ FW.ODM_name}};{{ FW.project_name}}</h1><p><strong>please download using this link:</strong> {{ FW.download }}</p>{% endblock %}FWinstance_list_applied_user.html{% block content %} <h1>Applied FWs</h1> {% if FW_list %} <ul> {% for FWinst in FW_list %} {% if FWinst.is_approved %} <li class="{% if FWinst.is_approved %}text-danger{% endif %}">--> <a href="{% url 'FW-detail' FWinst.pk %}">{{FWinst.ODM_name}}</a> ({{ FWinst.project_name }}) </li> {% endif %} {% endfor %} </ul> {% else %} <p>Nothing.</p> {% endif %} {% endblock %}the image of FWinstance_list_applied_user.html, when I click the link CSR, nothing happened | You haven't terminated your "my-applied" URL pattern, so it matches everything beginning with "myFWs/" - including things that that would match the detail URL. Make sure you always use a terminating $ with regex URLs.url(r'^myFWs/$', views.LoanedFWsByUserListView.as_view(), name='my-applied'), |
Allow_Other with fusepy? I have a 16.04 Ubuntu server with b2_fuse mounting my B2 cloud storage bucket, which uses fusepy. The problem is, I have no idea how I can pass the allow_other argument like with FUSE! This is an issue because other services running under different users cannot see the mounted drive. Does anybody here have some experience with this who could point me in the right direction? | Inside the file b2fuse.py, if you change the line FUSE(filesystem, mountpoint, nothreads=True, foreground=True) to FUSE(filesystem, mountpoint, nothreads=True, foreground=True, **{'allow_other': True}) the volume will be mounted with allow_other.
Python3 + vagrant ubuntu 16.04 + ssl request = [Errno 104] Connection reset by peer I'm using Vagrant on my Mac with the "bento/ubuntu-16.04" box. I'm trying to use the Google AdWords API via the Python library but got the error [Errno 104] Connection reset by peer. I made a sample script to check whether I can send requests at all: import urllib.request url ="https://adwords.google.com/api/adwords/mcm/v201609/ManagedCustomerService?wsdl" f = urllib.request.urlopen(url) print(f.read()) If I run this request via python3, I get [Errno 104] Connection reset by peer. But if I send the request via curl (curl https://adwords.google.com/api/adwords/mcm/v201609/ManagedCustomerService?wsdl) I get some response (even if it is a 500 code) with a body. If I run this sample Python script from my host Mac machine, I also receive a text response. I also tried this script from a VDS server with Ubuntu 16.04 and it worked there too. So I assume the problem lies somewhere between Vagrant and the Mac. Maybe you can help me? Thanks. | I found a solution. It looks like a bug in VirtualBox 5.1.8. You can read about it here. So, you can fix it by downgrading VirtualBox to < 5.1.6
While loop causing issues with CSV read Everything was going fine until I tried to combine a while loop with a CSV read and I am just unsure where to go with this.The code that I am struggling with:airport = input('Please input the airport ICAO code: ')with open('airport-codes.csv', encoding='Latin-1') as f: reader = csv.reader(f, delimiter=',') for row in reader: if airport.lower() == row[0].lower(): airportCode = row[2] + "/" + row[0] print(airportCode) else: print('Sorry, I don\'t recognise that airport.') print('Please try again.')Executing this code causes the 'else' to print continuously until the code is stopped, regardless of whether or not the input matches that in the CSV file. The moment I remove this statement the code runs fine (albeit doesn't print anything if the input doesn't match).What I am aiming to try and do is have the question loop until true. So my attempt was as follows:with open('airport-codes.csv', encoding='Latin-1') as f: reader = csv.reader(f, delimiter=',') for row in reader: while True: airport = input('Please input the airport ICAO code: ') if airport.lower() == row[0].lower(): airportCode = row[2] + "/" + row[0] print(airportCode) break else: print('Sorry, I don\'t recognise that airport.') print('Please try again.') FalseI'm pretty sure my limited experience is causing me to oversee an obvious issue but I couldn't find anything similar with my search queries so my next stop was here.As requested, a few lines of the CSV file:EDQO small_airport Ottengrüner Heide Airport 50.22583389, 11.73166656 EDQP small_airport Rosenthal-Field Plössen Airport 49.86333466, EDQR small_airport Ebern-Sendelbach Airport 50.03944397, 10.82277775 EDQS small_airport Suhl-Goldlauter Airport 50.63194275, 10.72749996 EDQT small_airport Haßfurt-Schweinfurt Airport 50.01805496, EDQW small_airport Weiden in der Oberpfalz Airport 49.67890167, | I had a different suggestion using functions:import csvdef findAirportCode(airport): with open('airport-codes.csv', encoding='Latin-1') as f: reader = csv.reader(f, delimiter=',') for row in reader: if airport.lower() == row[0].lower(): airportCode = row[2] + "/" + row[0] return airportCode return Noneairport = input('Please input the airport ICAO code: ')code = findAirportCode(airport)if(code != None ): print (code)else: print('Sorry, I don\'t recognise that airport.') print('Please try again.') |
Getting next Timestamp Value What is the proper solution in pandas to get the next timestamp value?I have the following timestamp:Timestamp('2017-11-01 00:00:00', freq='MS')I want to get this as the result for the next timestamp value:Timestamp('2017-12-01 00:00:00', freq='MS')Edit:I am working with multiple frequencies (1min, 5min, 15min, 60min, D, W-SUN, MS).Is there a generic command to get next value? Is the best approach to build a function that behaves accordingly to each one of the frequencies? | General solution is convert strings to offset and add to timestamp:L = ['1min', '5min', '15min', '60min', 'D', 'W-SUN', 'MS']t = pd.Timestamp('2017-11-01 00:00:00', freq='MS')t1 = [t + pd.tseries.frequencies.to_offset(x) for x in L]print (t1)[Timestamp('2017-11-01 00:01:00', freq='MS'), Timestamp('2017-11-01 00:05:00', freq='MS'), Timestamp('2017-11-01 00:15:00', freq='MS'), Timestamp('2017-11-01 01:00:00', freq='MS'), Timestamp('2017-11-02 00:00:00', freq='MS'), Timestamp('2017-11-05 00:00:00'), Timestamp('2017-12-01 00:00:00')] |
Django 1.10 Count on Models ForeignKey I guess this must be simple, but I've been trying for hours and can't find anything to help.I have 2 models. One for a Template Categories and another for a TemplateI'm listing the Template Categories on the Homepage and for each Category I want to show how many templates have that category as a Foreign Key.My code is as follows:Models.pyclass TemplateType(models.Model): type_title = models.CharField(max_length=60) type_description = models.TextField() file_count = models.ForeignKey('TemplateFile') def __str__(self): return self.type_title def get_absolute_url(self): return "/templates/%s/" %(self.id)class TemplateFile(models.Model): template_type = models.ForeignKey(TemplateType, on_delete=models.DO_NOTHING) template_file_title = models.CharField(max_length=120) template_file_description = models.TextField() def __str__(self): return self.template_file_titleViews.pyfrom django.shortcuts import HttpResponsefrom django.shortcuts import render, get_object_or_404from django.db.models import Countfrom .models import TemplateTypefrom .models import TemplateFiledef home(request): queryset = TemplateType.objects.all().order_by('type_title').annotate(Count('file_count')) context = { "object_list": queryset, "title": "Home", } return render(request, "index.html", context)index.html<div class="row"> {% for obj in object_list %} <div class="template_type col-md-6"> <a href="{{ obj.get_absolute_url }}"> <h4>{{ obj.type_title }}</h4> </a> <p>{{ obj.type_short_description }}</p> <button class="btn btn-primary" type="button">Templates <span class="badge">{{ obj.file_count__count }}</span></button> </div> {% endfor %} </div>Can somebody help please? | Views.pyfrom django.shortcuts import HttpResponsefrom django.shortcuts import render, get_object_or_404from django.db.models import Countfrom .models import TemplateTypefrom .models import TemplateFiledef home(request): queryset = TemplateType.objects.order_by('type_title').annotate(num_file=Count('file_count')) context = { "object_list": queryset, "title": "Home", } return render(request, "index.html", context)Now object_list contains TemplateType objects. And you can acces num_file like : object_list[0].num_file. Use it in your template.index.html<div class="row"> {% for obj in object_list %} <div class="template_type col-md-6"> <a href="{{ obj.get_absolute_url }}"> <h4>{{ obj.type_title }}</h4> </a> <p>{{ obj.type_short_description }}</p> <button class="btn btn-primary" type="button">Templates <span class="badge">{{ obj.num_file }}</span></button> </div> {% endfor %} </div> |
Internal Error 500 when using Flask and Apache I am working on a small college project using Raspberry Pi. Basically, the project is to provide an html interface to control a sensor attached to the Pi. I wrote a very simple Python code attached with a very basic html code also. Everything is done in this path /var/www/NewTest. However everytime I try to access it throws a 500 internal error. I tried simple "Hello World" examples that worked with me and tried to do this example the same way but didn't work.led.pyfrom gpiozero import LEDfrom time import sleepfrom flask import Flask, render_templateapp = Flask(__name__)ledr = LED(17)ledg = LED(27)ledb = LED(22)@app.route('/')def index(): return render_template('index.html')@app.route('/red/')def red(): ledr.off() ledg.off() ledb.off() ledr.on() return ' '@app.route('/green/')def green(): ledr.off() ledg.off() ledb.off() ledg.on() return ' '@app.route('/blue/')def blue(): ledr.off() ledg.off() ledb.off() ledb.on() return ' 'if __name__ == '__main__': app.run(debug=True)led.conf<virtualhost *:80> ServerName 10.0.0.146 WSGIDaemonProcess led user=www-data group=www-data threads=5 home=/var/www/NewTest/ WSGIScriptAlias / /var/www/NewTest/led.wsgi <directory /var/www/NewTest> WSGIProcessGroup led WSGIApplicationGroup %{GLOBAL} WSGIScriptReloading On Order deny,allow Allow from all </directory></virtualhost>index.html<!doctype html><title>Test</title><meta charset=utf-8><a href="/red/">RED</a> <br/><a href="/green/">GREEN</a><br/><a href="/blue/">BLUE</a>any ideas?Thanks! | The problem was in led.conf. The user needs to be pi.<virtualhost *:80> ServerName 10.0.0.146 WSGIDaemonProcess led user=pi group=www-data threads=5 home=/var/www/NewTest/ WSGIScriptAlias / /var/www/NewTest/led.wsgi <directory /var/www/NewTest> WSGIProcessGroup led WSGIApplicationGroup %{GLOBAL} WSGIScriptReloading On Order deny,allow Allow from all </directory></virtualhost> |
lmdb no locks available error I have a data.mdb and lock.mdb file in a test/ directory. I was trying to use the Python lmdb package to read/write data from the lmdb database. I tried import lmdb env = lmdb.open('test', map_size=(1024**3), readonly=True) but got the following error: lmdb.Error: test: No locks available Then I tried mdb_stat test with a separately installed lmdb library compiled from source and got the following error: mdb_env_open failed, error 37 No locks available However, in Python I also tried env = lmdb.open('test', map_size=(1024**3), lock=False) This works and I can read data from the database normally. I searched Google for "lmdb no locks available error" quite hard but found nothing. Does anyone have any idea where this error comes from? Thanks! | Use the -r option in mdb_stat to check the number of readers in the reader lock table. You may be hitting the maximum number of readers. You can try setting this limit to a higher number.
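If the reader table really is full, the Python binding lets you raise the limit when the environment is (re)created; a sketch under that assumption:

import lmdb

env = lmdb.open('test', map_size=(1024**3), readonly=True, max_readers=512)
print(env.info())   # reports num_readers / max_readers for the environment

(If the database lives on a network filesystem such as NFS, error 37 can also simply mean the filesystem does not support the locks LMDB needs, which would explain why lock=False works.)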
Neural networks pytorch I am very new in pytorch and implementing my own network of image classifier. However I see for each epoch training accuracy is very good but validation accuracy is 0.i noted till 5th epoch. I am using Adam optimizer and have learning rate .001. also resampling the whole data set after each epoch into training n validation set. Please help where I am going wrong.Here is my code:### where is data?data_dir_train = '/home/sup/PycharmProjects/deep_learning/CNN_Data/training_set'data_dir_test = '/home/sup/PycharmProjects/deep_learning/CNN_Data/test_set'# Define your batch_sizebatch_size = 64allData = datasets.ImageFolder(root=data_dir_train,transform=transformArr)# We need to further split our training dataset into training and validation sets.def split_train_validation(): # Define the indices num_train = len(allData) indices = list(range(num_train)) # start with all the indices in training set split = int(np.floor(0.2 * num_train)) # define the split size #train_idx, valid_idx = indices[split:], indices[:split] # Random, non-contiguous split validation_idx = np.random.choice(indices, size=split, replace=False) train_idx = list(set(indices) - set(validation_idx)) # define our samplers -- we use a SubsetRandomSampler because it will return # a random subset of the split defined by the given indices without replacement train_sampler = SubsetRandomSampler(train_idx) validation_sampler = SubsetRandomSampler(validation_idx) #train_loader = DataLoader(allData,batch_size=batch_size,sampler=train_sampler,shuffle=False,num_workers=4) #validation_loader = DataLoader(dataset=allData,batch_size=1, sampler=validation_sampler) return (train_sampler,validation_sampler)Trainingfrom torch.optim import Adamimport torchimport createNNimport torch.nn as nnimport loadData as ldfrom torch.autograd import Variablefrom torch.utils.data import DataLoader# check if cuda - GPU support availablecuda = torch.cuda.is_available()#create model, optimizer and loss functionmodel = createNN.ConvNet(class_num=2)optimizer = Adam(model.parameters(),lr=.001,weight_decay=.0001)loss_func = nn.CrossEntropyLoss()if cuda: model.cuda()# function to save modeldef save_model(epoch): torch.save(model.load_state_dict(),'imageClassifier_{}.model'.format(epoch)) print('saved model at epoch',epoch)def exp_lr_scheduler ( epoch , init_lr = args.lr, weight_decay = args.weight_decay, lr_decay_epoch = cf.lr_decay_epoch): lr = init_lr * ( 0.5 ** (epoch // lr_decay_epoch))def train(num_epochs): best_acc = 0.0 for epoch in range(num_epochs): print('\n\nEpoch {}'.format(epoch)) train_sampler, validation_sampler = ld.split_train_validation() train_loader = DataLoader(ld.allData, batch_size=30, sampler=train_sampler, shuffle=False) validation_loader = DataLoader(dataset=ld.allData, batch_size=1, sampler=validation_sampler) model.train() acc = 0.0 loss = 0.0 total = 0 # train model with training data for i,(images,labels) in enumerate(train_loader): # if cuda then move to GPU if cuda: images = images.cuda() labels = labels.cuda() # Variable class wraps a tensor and we can calculate grad images = Variable(images) labels = Variable(labels) # reset accumulated gradients for each batch optimizer.zero_grad() # pass images to model which returns preiction output = model(images) #calculate the loss based on prediction and actual loss = loss_func(output,labels) # backpropagate the loss and compute gradient loss.backward() # update weights as per the computed gradients optimizer.step() # prediction class predVal , predClass = 
torch.max(output.data, 1) acc += torch.sum(predClass == labels.data) loss += loss.cpu().data[0] total += labels.size(0) # print the statistics train_acc = acc/total train_loss = loss / total print('Mean train acc = {} over epoch = {}'.format(epoch,acc)) print('Mean train loss = {} over epoch = {}'.format(epoch, loss)) # Valid model with validataion data model.eval() acc = 0.0 loss = 0.0 total = 0 for i,(images,labels) in enumerate(validation_loader): # if cuda then move to GPU if cuda: images = images.cuda() labels = labels.cuda() # Variable class wraps a tensor and we can calculate grad images = Variable(images) labels = Variable(labels) # reset accumulated gradients for each batch optimizer.zero_grad() # pass images to model which returns preiction output = model(images) #calculate the loss based on prediction and actual loss = loss_func(output,labels) # backpropagate the loss and compute gradient loss.backward() # update weights as per the computed gradients optimizer.step() # prediction class predVal, predClass = torch.max(output.data, 1) acc += torch.sum(predClass == labels.data) loss += loss.cpu().data[0] total += labels.size(0) # print the statistics valid_acc = acc / total valid_loss = loss / total print('Mean train acc = {} over epoch = {}'.format(epoch, valid_acc)) print('Mean train loss = {} over epoch = {}'.format(epoch, valid_loss)) if(best_acc<valid_acc): best_acc = valid_acc save_model(epoch) # at 30th epoch we save the model if (epoch == 30): save_model(epoch)train(20) | I think you did not take into account that acc += torch.sum(predClass == labels.data) returns a tensor instead of a float value. Depending on the version of pytorch you are using I think you should change it to:acc += torch.sum(predClass == labels.data).cpu().data[0] #pytorch 0.3acc += torch.sum(predClass == labels.data).item() #pytorch 0.4Although your code seems to be working for old pytorch version, I would recommend you to upgrade to the 0.4 version.Also, I mentioned other problems/typos in your code. You are loading the dataset for every epoch. for epoch in range(num_epochs): print('\n\nEpoch {}'.format(epoch)) train_sampler, validation_sampler = ld.split_train_validation() train_loader = DataLoader(ld.allData, batch_size=30, sampler=train_sampler, shuffle=False) validation_loader = DataLoader(dataset=ld.allData, batch_size=1, sampler=validation_sampler) ...That should not happen, it should be enough loading it oncetrain_sampler, validation_sampler = ld.split_train_validation()train_loader = DataLoader(ld.allData, batch_size=30, sampler=train_sampler, shuffle=False)validation_loader = DataLoader(dataset=ld.allData, batch_size=1, sampler=validation_sampler)for epoch in range(num_epochs): print('\n\nEpoch {}'.format(epoch)) ...In the training part you have (this does not happen in the validation):train_acc = acc/totaltrain_loss = loss / totalprint('Mean train acc = {} over epoch = {}'.format(epoch,acc))print('Mean train loss = {} over epoch = {}'.format(epoch, loss))Where you are printing acc instead of train_accAlso, in the validation part I mentioned that you are printing print('Mean train acc = {} over epoch = {}'.format(epoch, valid_acc)) when it should be something like 'Mean val acc'.Changing this lines of code, using a standard model I created and CIFAR dataset the training seems to converge, accuracy increases at every epoch while mean loss value decreases. I Hope I could help you! |
How to feed weights into igraph community detection [Python/C/R] When using community_leading_eigenvector of igraph, assuming a graph g has already been created, how do I pass the list of weights of graph g to community_leading_eigenvector? community_leading_eigenvector(clusters=None, weights=None, arpack_options=None) | You can either pass the name of the attribute containing the weights to the weights parameter, or retrieve all the weights into a list using g.es["weight"] and then pass that to the weights parameter. So, either of these would suffice, assuming that your weights are in the weight edge attribute: g.community_leading_eigenvector(weights="weight") g.community_leading_eigenvector(weights=g.es["weight"])
Aggregation fails when using lambdas I'm trying to port parts of my application from pandas to dask and I hit a roadblock when using a lamdba function in a groupby on a dask DataFrame.import dask.dataframe as dddask_df = dd.from_pandas(pandasDataFrame, npartitions=2)dask_df = dask_df.groupby( ['one', 'two', 'three', 'four'], sort=False ).agg({'AGE' : lambda x: x * x })This code fails with the following error: ValueError: unknown aggregate lambda My lambda function is more complex in my application than here, but the content of the lambda doesn't matter, the error is always the same. There is a very similar example in the documentation, so this should actually work, I'm not sure what I'm missing. The same groupby works in pandas, but I need to improve it's performance.I'm using dask 0.12.0 with python 3.5. | From the Dask docs:"Dask supports Pandas’ aggregate syntax to run multiple reductions on the same groups. Common reductions such as max, sum, list and mean are directly supported.Dask also supports user defined reductions. To ensure proper performance, the reduction has to be formulated in terms of three independent steps. The chunk step is applied to each partition independently and reduces the data within a partition. The aggregate combines the within partition results. The optional finalize step combines the results returned from the aggregate step and should return a single final column. For Dask to recognize the reduction, it has to be passed as an instance of dask.dataframe.Aggregation.For example, sum could be implemented as:custom_sum = dd.Aggregation('custom_sum', lambda s: s.sum(), lambda s0: s0.sum())df.groupby('g').agg(custom_sum)" |
how to convert pandas series to tuple of index and value I'm looking for an efficient way to convert a series to a tuple of its index with its values.s = pd.Series([1, 2, 3], ['a', 'b', 'c'])I want an array, list, series, some iterable:[(1, 'a'), (2, 'b'), (3, 'c')] | Well it seems simply zip(s,s.index) works too!For Python-3.x, we need to wrap it with list -list(zip(s,s.index))To get a tuple of tuples, use tuple() : tuple(zip(s,s.index)).Sample run -In [8]: sOut[8]: a 1b 2c 3dtype: int64In [9]: list(zip(s,s.index))Out[9]: [(1, 'a'), (2, 'b'), (3, 'c')]In [10]: tuple(zip(s,s.index))Out[10]: ((1, 'a'), (2, 'b'), (3, 'c')) |
Python Kivy: Add Background loop I want to add a background loop to my Python-Kivy script. The problem is that I've only got an App().run() at the bottom of my script. So, if I put a loop somewhere in the App class, the whole app stops updating and checking for events. Is there a function name like build(self) that's recognized by Kivy and represents a main/background loop? If you don't know what I'm talking about, feel free to ask. | In case you need to schedule a repeated activity in a loop, you can use Clock.schedule_interval() to call a function on a regular schedule: def my_repeated_function(data): print ("My function called.") Clock.schedule_interval(my_repeated_function, 1.0 / 30) # no brackets on function reference # call it 30 times per second There is a lot more information on how to schedule events on a regular, conditional or one-time basis with Kivy's event loop here.
How to learn multi-class multi-output CNN with TensorFlow I want to train a convolutional neural network with TensorFlow to do multi-output multi-class classification. For example: take the MNIST sample set, always combine two random images into a single one, and then classify the resulting image. The result of the classification should be the two digits shown in the image. So the output of the network could have the shape [-1, 2, 10] where the first dimension is the batch, the second represents the output (is it the first or the second digit) and the third is the "usual" classification of the shown digit. I have tried googling for this for a while now, but wasn't able to find anything useful. Also, I don't know if multi-output multi-class classification is the correct name for this task. If not, what is the correct name? Do you have any links/tutorials/documentation/papers explaining what I'd need to do to build the loss function/training operations? What I tried was to split up the output of the network into the single outputs with tf.split and then use softmax_cross_entropy_with_logits on every single output. I averaged the result over all outputs, but it doesn't seem to work. Is this even a reasonable way? | For the nomenclature of classification problems, you can have a look at this link: http://scikit-learn.org/stable/modules/multiclass.html So your problem is called "Multilabel Classification". In normal TensorFlow multiclass classification (classic MNIST) you will have 10 output units and you will use softmax at the end for computing losses, i.e. "tf.nn.softmax_cross_entropy_with_logits". Ex: If your image has "2", then the groundtruth will be [0,0,1,0,0,0,0,0,0,0] But here, your network output will have 20 units and you will use sigmoid, i.e. "tf.nn.sigmoid_cross_entropy_with_logits". Ex: If your image has "2" & "4", then the groundtruth will be [0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0], i.e. the first ten bits represent the first digit class and the second ten the second digit class.
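A minimal sketch of that loss for the two-digit MNIST example, using the TF1-style API from the question (shapes only, the model itself is left out):

import tensorflow as tf

logits = tf.placeholder(tf.float32, [None, 20])   # raw network outputs, 2 x 10 classes flattened
labels = tf.placeholder(tf.float32, [None, 20])   # multi-hot groundtruth, e.g. digits 2 and 4 set
loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits))
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)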
Using Tweepy to determine the age on an account I'm looking to use Tweepy for a small project. I'd like to be able to write a bit of code that returns the age of a given Twitter account. The best way I can think of to do this is to return all Tweets from the very first page, find the earliest Tweet and check the date/timestamp on it. It's a bit hacky but I was wondering if anyone could think of an easier or cleaner way to accomplish this? | The get_user method returns a user object that includes a created_at field.Check https://dev.twitter.com/overview/api/users |
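A short sketch with Tweepy 3.x (the credentials are placeholders, and newer Tweepy versions may require screen_name= as a keyword argument):

import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth)

user = api.get_user("twitter")   # look the account up by screen name
print(user.created_at)           # datetime the account was created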
how to remove \n and comma while extracting using response.css I am trying to crawl Amazon to get the product name, price and savings information. I am using response.css to extract the savings information as below. Python code to extract the savings information: savingsinfo = amzscrape.css(".a-color-secondary .a-row , .a-row.a-size-small.a-color-secondary span").css('::text').extract() The above code returns the following output: 'savingsinfo_item': ['Save ', '$20.00', ' when you buy ', '$100.00', ' of select items'] Expected output: Save $20.00 when you buy $100 of select items | output = ''.join(savingsinfo['savingsinfo_item'])
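For illustration, joining the extracted text fragments and collapsing leftover whitespace/newlines gives the expected sentence (the list literal here just mirrors the scraped output):

fragments = ['Save ', '$20.00', ' when you buy ', '$100.00', ' of select items']
savings_text = ' '.join(''.join(fragments).split())   # join, then normalise \n and repeated spaces
print(savings_text)   # Save $20.00 when you buy $100.00 of select items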
BeautifulSoup not defined when called in function My web scraper is throwing NameError: name 'BeautifulSoup' is not defined when I call BeautifulSoup() inside my function, but it works normally when I call it outside the function and pass the soup as an argument. Here is the working code: from teams.models import * from bs4 import BeautifulSoup from django.conf import settings import requests, os, string soup = BeautifulSoup(open(os.path.join(settings.BASE_DIR, 'revolver.html')), 'html.parser') def scrapeTeamPage(soup): teamInfo = soup.find('div', 'profile_info') ... print(scrapeTeamPage(soup)) But when I move the BeautifulSoup call inside my function, I get the error. from teams.models import * from bs4 import BeautifulSoup from django.conf import settings import requests, os, string def scrapeTeamPage(url): soup = BeautifulSoup(open(os.path.join(settings.BASE_DIR, url)), 'html.parser') teamInfo = soup.find('div', 'profile_info') | I guess you are misspelling BeautifulSoup somewhere; it's case sensitive. If not, use requests in your code like this: from teams.models import * from bs4 import BeautifulSoup from django.conf import settings import requests, os, string def scrapeTeamPage(url): res = requests.get(url) soup = BeautifulSoup(res.content, 'html.parser') teamInfo = soup.find('div', 'profile_info')
Missing parameters when creating new table in Google BigQuery through Python API V2 I'm trying to create new table using BigQuery's Python API:bigquery.tables().insert( projectId="xxxxxxxxxxxxxx", datasetId="xxxxxxxxxxxxxx", body='{ "tableReference": { "projectId":"xxxxxxxxxxxxxx", "tableId":"xxxxxxxxxxxxxx", "datasetId":"accesslog"}, "schema": { "fields": [ {"type":"STRING", "name":"ip"}, {"type":"TIMESTAMP", "name":"ts"}, {"type":"STRING", "name":"event"}, {"type":"STRING", "name":"id"}, {"type":"STRING","name":"sh"}, {"type":"STRING", "name":"pub"}, {"type":"STRING", "name":"context"}, {"type":"STRING", "name":"brand"}, {"type":"STRING", "name":"product"} ] } }').execute()The error I'm getting is:(<class 'apiclient.errors.HttpError'>, <HttpError 400 when requesting https://www.googleapis.com/bigquery/v2/projects/xxxxxxxxxxxxxx/datasets/xxxxxxxxxxxxxx/tables?alt=json returned "Required parameter is missing">, <traceback object at 0x17e1c20>)I think all required parameters are included as far as this is documented at https://developers.google.com/resources/api-libraries/documentation/bigquery/v2/python/latest/bigquery_v2.tables.html#insertWhat's missing? | The only required parameter for a tables.insert is the tableReference, which must have tableId, datasetId, and projectId fields. I think the actual issue may be that you're passing the JSON string when you could just pass a dict with the values. For instance, the following code works to create a table (note the dataset_ref is a Python trick to copy the contents to named arguments):project_id = <my project>dataset_id = <my dataset>table_id = 'table_001'dataset_ref = {'datasetId': dataset_id, 'projectId': project_id}table_ref = {'tableId': table_id, 'datasetId': dataset_id, 'projectId': project_id}table = {'tableReference': table_ref}table = bigquery.tables().insert( body=table, **dataset_ref).execute(http) |
Loop does not iterate over all data I have code that produces the following df as output: year month day category keywords0 '2021' '09' '06' 'us' ['afghan, refugees, volunteers']1 '2021' '09' '05' 'us' ['politics' 'military, drone, strike, kabul']2 '2021' '09' '06' 'business' ['rto, return, to, office']3 '2021' '09' '06' 'nyregion' ['nyc, jewish, high, holy, days']4 '2021' '09' '06' 'world' ['americas' 'mexico, migrants, asylum, border']5 '2021' '09' '06' 'us' ['TAHOE, CALDORFIRE, WORKERS']6 '2021' '09' '06' 'nyregion' ['queens, flooding, cleanup']7 '2021' '09' '05' 'us' ['new, orleans, power, failure, traps, older, residents, in, homes']8 '2021' '09' '05' 'nyregion' ['biden, flood, new, york, new, jersey']9 '2021' '09' '06' 'technology' ['freedom, phone, smartphone, conservatives']10 '2021' '09' '06' 'sports' ['football' 'nfl, preview, nfc, predictions']11 '2021' '09' '06' 'sports' ['football' 'nfl, preview, afc, predictions']12 '2021' '09' '06' 'opinion' ['texas, abortion, september, 11']13 '2021' '09' '06' 'opinion' ['coronavirus, masks, school, board, meetings']14 '2021' '09' '06' 'opinion' ['south, republicans, vaccines, climate, change']15 '2021' '09' '06' 'opinion' ['labor, workers, rights']16 '2021' '09' '05' 'opinion' ['ku, kluxism, trumpism']17 '2021' '09' '05' 'opinion' ['culture' 'sexually, harassed, pentagon']18 '2021' '09' '05' 'opinion' ['parenting, college, empty, nest, pandemic']19 '2021' '09' '04' 'opinion' ['letters' 'coughlin, caregiving']20 '2021' '08' '24' 'opinion' ['kara, swisher, maggie, haberman, event']21 '2021' '09' '05' 'opinion' ['labor, day, us, history']22 '2021' '09' '04' 'opinion' ['drowning, our, future, in, the, past']23 '2021' '09' '04' 'opinion' ['biden, job, approval, rating']24 '2021' '09' '05' 'opinion' ['dorothy, day, christian, labor']25 '2021' '09' '03' 'business' ['goodbye, office, mom']26 '2021' '09' '06' 'business' ['media' 'burn, out, companies, pandemic']27 '2021' '08' '30' 'arts' ['music' 'popcast, lorde, solar, power']28 '2021' '09' '02' 'opinion' ['sway, kara, swisher, julie, cordua, ashton, kutcher']29 '2021' '08' '12' 'science' ['fauci, kids, and, covid, event']30 '2021' '09' '05' 'us' ['shooting, lakeland, florida']31 '2021' '09' '05' 'business' ['media' 'leah, finnegan, gawker']32 '2021' '09' '06' 'nyregion' ['piping, plovers, bird, rescue']33 '2021' '09' '05' 'us' ['anti, abortion, movement, texas, law']34 '2021' '09' '05' 'us' ['politics' 'bernie, sanders, budget, bill']35 '2021' '09' '05' 'world' ['africa' 'guinea, coup']36 '2021' '09' '05' 'sports' ['soccer' 'brazil, argentina, suspended']37 '2021' '09' '06' 'world' ['africa' 'south, africa, jacob, zuma, medical, parole']38 '2021' '09' '05' 'sports' ['nfl, social, justice']39 '2021' '09' '02' 'well' ['go, bag, essentials']40 '2021' '09' '01' 'parenting' ['raising, resilient, kids']41 '2021' '09' '03' 'books' ['911, anniversary, fiction, literature']42 '2021' '09' '01' 'arts' ['design' 'german, hygiene, museum']43 '2021' '09' '03' 'arts' ['music' 'opera, livestreams']44 '2021' '09' '04' 'style' ['the, return, of, the, dream, honeymoon']<class 'str'>I built a for loop to iterate over all the elements in the 'keyword' column and put them separately into a new df called df1.The loop look like this:df1 = pd.DataFrame(columns=['word'])i = 0for p in df.loc[i, 'keywords']: teststr = df.loc[i, 'keywords'] splitstr = teststr.split() u = 0 for p1 in splitstr: dict_1 = {'word': splitstr[u]} df1.loc[len(df1)] = dict_1 u = u + 1 i = i + 1print(df1)The output it produces is: word0 ['afghan,1 
refugees,2 volunteers']3 ['politics'4 'military,5 drone,6 strike,7 kabul']8 ['rto,9 return,10 to,11 office']12 ['nyc,13 jewish,14 high,15 holy,16 days']17 ['americas'18 'mexico,19 migrants,20 asylum,21 border']22 ['TAHOE,23 CALDORFIRE,24 WORKERS']25 ['queens,26 flooding,27 cleanup']28 ['new,29 orleans,30 power,31 failure,32 traps,33 older,34 residents,35 in,36 homes']37 ['biden,38 flood,39 new,40 york,41 new,42 jersey']43 ['freedom,44 phone,45 smartphone,46 conservatives']47 ['football'48 'nfl,49 preview,50 nfc,51 predictions']52 ['football'53 'nfl,54 preview,55 afc,56 predictions']57 ['texas,58 abortion,59 september,60 11']61 ['coronavirus,62 masks,63 school,64 board,65 meetings']66 ['south,67 republicans,68 vaccines,69 climate,70 change']71 ['labor,72 workers,73 rights']74 ['ku,75 kluxism,76 trumpism']77 ['culture'78 'sexually,79 harassed,80 pentagon']81 ['parenting,82 college,83 empty,84 nest,85 pandemic']86 ['letters'87 'coughlin,88 caregiving']89 ['kara,90 swisher,91 maggie,92 haberman,93 event']94 ['labor,95 day,96 us,97 history']98 ['drowning,99 our,100 future,101 in,102 the,103 past']104 ['biden,105 job,106 approval,107 rating']108 ['dorothy,109 day,110 christian,111 labor']112 ['goodbye,113 office,114 mom']115 ['media'116 'burn,117 out,118 companies,119 pandemic']120 ['music'121 'popcast,122 lorde,123 solar,124 power']125 ['sway,126 kara,127 swisher,128 julie,129 cordua,130 ashton,131 kutcher']132 ['fauci,133 kids,134 and,135 covid,136 event']137 ['shooting,138 lakeland,139 florida']140 ['media'141 'leah,142 finnegan,143 gawker']Although the for loop works fine, it does not iterate over all the rows from df and stops more or less in the middle (it doesn't stop always at the same spot).Do you have an idea why?Thanks in advance | I think the problem is that with:for p in df.loc[i, 'keywords']:you are iterating over the letters in the first entry. So you will stop at that count.This should work for you:for teststr in df['keywords']: splitstr = teststr.split() for p1 in splitstr: dict_1 = {'word': p1} df1.loc[len(df1)] = dict_1print(df1) |
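The same result can also be obtained without an explicit loop, assuming each entry in 'keywords' is a single whitespace-separated string as in the question (a vectorized sketch, not part of the original answer):

df1 = (df['keywords'].astype(str)   # make sure every entry is a plain string
                     .str.split()   # same whitespace split the loop used
                     .explode()     # one word per row
                     .rename('word')
                     .reset_index(drop=True)
                     .to_frame())
print(df1)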
How to use asyncio with PyQt6? qasync doesn't support PyQt6 yet and I'm trying to run discord.py in the same loop as PyQt, but so far I'm not doing well. I've tried multiprocessing, multithreading, and even running synchronous code from asynchronous code, but I either end up with blocking code that makes the PyQt program unresponsive or it just outright doesn't work. Can somebody please point me in the right direction? | qasync did not support PyQt6 at the time this was asked, but I created a PR that implements it. You could install my version of qasync using the following command: pip install git+https://github.com/eyllanesc/qasync.git@PyQt6 Update: my PR has been accepted, so you can now install the latest release of qasync, which has support for PyQt6.
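A minimal sketch of wiring an asyncio loop into a PyQt6 application with qasync (the label widget and the coroutine are placeholders, not part of the original answer; a discord.py client.start() coroutine could be scheduled the same way):

import sys
import asyncio
from PyQt6.QtWidgets import QApplication, QLabel
from qasync import QEventLoop

async def background_task(label):
    # placeholder coroutine running alongside the GUI
    while True:
        await asyncio.sleep(1)
        label.setText("still running")

app = QApplication(sys.argv)
loop = QEventLoop(app)        # qasync loop driving both Qt and asyncio
asyncio.set_event_loop(loop)

label = QLabel("hello")
label.show()

with loop:
    loop.create_task(background_task(label))
    loop.run_forever()        # replaces app.exec(); Qt and asyncio share this loop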
Used IDs are not available anymore in Selenium Python I am using Python and Selenium to scrape some data out of an website. This website has the following structure:First group item has the following base ID: frmGroupList_Label_GroupName and then you add _2 or _3 at the end of this base ID to get the 2nd/3rd group's ID.Same thing goes for the user item, it has the following base ID: frmGroupContacts_TextLabel3 and then you add _2 or _3 at the end of this base ID to get the 2nd/3rd users's ID.What I am trying to do is to get all the users out of each group. And this is how I did it: find the first group, select it and grab all of it users, then, go back to the 2nd group, grab its users, and so on.def grab_contact(number_of_members): groupContact = 'frmGroupContacts_TextLabel3' contact = browser.find_element_by_id(groupContact).text print(contact) i = 2 time.sleep(1) # write_to_excel(contact, group) while i <= number_of_members: group_contact_string = groupContact + '_' + str(i) print(group_contact_string) try: contact = browser.find_element_by_id(group_contact_string).text print(contact) i = i + 1 time.sleep(1) # write_to_excel(contact, group) except NoSuchElementException: break time.sleep(3)Same code applies for scraping the groups. And it works, up to a point!! Although the IDs of the groups are different, the IDs of the users are the same from one group to another. Example:group_id_1 = user_id_1, user_id_2group_id_2 = user_id_1, user_id_2, user_id_3, user_id_4, user_id_5group_id_3 = user_id_1, user_id_2, user_id_3The code runs, it goes to group_id_1, grabs user_id_1 and user_id_2 correctly, but when it gets to group_id_2, the user_id_1 and user_id_2 (which are different in matter of content) are EMPTY, and only user_id_3, user_id_4, user_id_5 are correct. Then, when it gets to group_id_3, all of the users are empty.This has to do with the users having same IDs. As soon as it gets to a certain user ID in a group, I cannot retrieve all the users before that ID in another group. I tried quitting the browser, and reopening a new browser (it doesn't work, the new browser doesn't open), tried refreshing the page (doesn't work), tried opening a new tab (doesn't work).I think the content of the IDs get stuck in memory when they are accessed, and are not freed when accessing a new group. Any ideas on how to get past this?Thanks! | As the saying goes... it ain't stupid, if it works.def refresh(): # accessing the groups page url = "https://google.com" browser.get(url) time.sleep(5) url = "https://my_url.com" browser.get(url) time.sleep(5)While trying to debug this, and finding a solution, I thought: "what if you go to another website, then come back to yours, between group scraping"... and it works! Until I find other solution, I'll stick with this one. |
Finding an unfilled circle in an image of finite size using Python Trying to find a circle in an image that has finite radius. Started off using 'HoughCircles' method from OpenCV as the parameters for it seemed very much related to my situation. But it is failing to find it. Looks like the image may need more pre-processing for it to find reliably. So, started off playing with different thresholds in opencv to no success. Here is an example of an image (note that the overall intensity of the image will vary, but the radius of the circle always remain the same ~45pixels)Here is what I have tried so farimage = cv2.imread('image1.bmp', 0)img_in = 255-imagemean_val = int(np.mean(img_in))ret, img_thresh = cv2.threshold(img_in, thresh=mean_val-30, maxval=255, type=cv2.THRESH_TOZERO)# detect circlecircles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1.0, 100, minRadius=40, maxRadius=50)If you look at the image, the circle is obvious, its a thin light gray circle in the center of the blob.Any suggestions?Edited to show expected resultThe expected result should be like this, as you can see, the circle is very obvious for naked eye on the original image and is always of the same radius but not at the same location on the image. But there will be only one circle of this kind on any given image.As of 8/20/2020, here is the code I am using to get the center and radiifrom numpy import zeros as np_zeros,\ full as np_fullfrom cv2 import calcHist as cv2_calcHist,\ HoughCircles as cv2_HoughCircles,\ HOUGH_GRADIENT as cv2_HOUGH_GRADIENTdef getCenter(img_in, saturated, minradius, maxradius): img_local = img_in[100:380,100:540,0] res = np_full(3, -1) # do some contrast enhancement img_local = stretchHistogram(img_local, saturated) circles = cv2_HoughCircles(img_local, cv2_HOUGH_GRADIENT, 1, 40, param1=70, param2=20, minRadius=minradius, maxRadius=maxradius) if circles is not None: # found some circles circles = sorted(circles[0], key=lambda x: x[2]) res[0] = circles[0][0]+100 res[1] = circles[0][1]+100 res[2] = circles[0][2] return res #x,y,radiidef stretchHistogram(img_in, saturated=0.35, histMin=0.0, binSize=1.0): img_local = img_in.copy() img_out = img_in.copy() min, max = getMinAndMax(img_local, saturated) if max > min: min = histMin+min * binSize max = histMin+max * binSize w, h = img_local.shape[::-1] #create a new lut lut = np_zeros(256) max2 = 255 for i in range(0, 256): if i <= min: lut[i] = 0 elif i >= max: lut[i] = max2 else: lut[i] = (round)(((float)(i - min) / (max - min)) * max2) #update image with new lut values for i in range(0, h): for j in range(0, w): img_out[i, j] = lut[img_local[i, j]] return img_outdef getMinAndMax(img_in, saturated): img_local = img_in.copy() hist = cv2_calcHist([img_local], [0], None, [256], [0, 256]) w, h = img_local.shape[::-1] pixelCount = w * h saturated = 0.5 threshold = (int)(pixelCount * saturated / 200.0) found = False count = 0 i = 0 while not found and i < 255: count += hist[i] found = count > threshold i = i + 1 hmin = i i = 255 count = 0 while not found and i > 0: count += hist[i] found = count > threshold i = i - 1 hmax = i return hmin, hmaxand calling the above function asgetCenter(img, 5.0, 55, 62)But it is still very unreliable. Not sure why it is so hard to get to an algorithm that works reliably for something that is very obvious to a naked eye. Not sure why there is so much variation in the result from frame to frame even though there is no change between them.Any suggestions are greatly appreciated. 
Here are some more samples to play with | simple, draw your circles: cv2.HoughCircles returns a list of circles..take care of maxRadius = 100for i in circles[0,:]: # draw the outer circle cv2.circle(image,(i[0],i[1]),i[2],(255,255,0),2) # draw the center of the circle cv2.circle(image,(i[0],i[1]),2,(255,0,255),3)a full working code (you have to change your tresholds):import cv2import numpy as npimage = cv2.imread('0005.bmp', 0)height, width = image.shapeprint(image.shape)img_in = 255-imagemean_val = int(np.mean(img_in))blur = cv2.blur(img_in , (3,3))ret, img_thresh = cv2.threshold(blur, thresh=100, maxval=255, type=cv2.THRESH_TOZERO)# detect circlecircles = cv2.HoughCircles(img_thresh, cv2.HOUGH_GRADIENT,1,40,param1=70,param2=20,minRadius=60,maxRadius=0)print(circles)for i in circles[0,:]: # check if center is in middle of picture if(i[0] > width/2-30 and i[0] < width/2+30 \ and i[1] > height/2-30 and i[1] < height/2+30 ): # draw the outer circle cv2.circle(image,(i[0],i[1]),i[2],(255,255,0),2) # draw the center of the circle cv2.circle(image,(i[0],i[1]),2,(255,0,255),3)cv2.imshow("image", image )while True: keyboard = cv2.waitKey(2320) if keyboard == 27: breakcv2.destroyAllWindows()result: |
How to Plot Time Stamps HH:MM on Python Matplotlib "Clock" Polar Plot I am trying to plot mammalian feeding data on time points on a polar plot. In the example below, there is only one day, but each day will eventually be plotted on the same graph (via different axes). I currently have all of the aesthetics worked out, but my data is not graphing correctly. How do I get the hours to plot correctly?I assume that the solution will likely have to do with pd.datetime and np.deg2rad, but I have not found the correct combo.I am importing my data from csv, and filtering each day based on the date as follows:#Filtered portion:Day1 = df[df.Day == '5/22']This gives me the following data: Day Time Feeding_Quality Feed_Num0 5/22 16:15 G 21 5/22 19:50 G 22 5/22 20:15 G 23 5/22 21:00 F 14 5/22 23:30 G 2Here is the code:fig = plt.figure(figsize=(7,7))ax = plt.subplot(111, projection = 'polar')ax.bar(Day1['Time'], Day1['Feed_Num'], width = 0.1, alpha=0.3, color='red', label='Day 1')# Make the labels go clockwiseax.set_theta_direction(-1)#Place Zero at Topax.set_theta_offset(np.pi/2)#Set the circumference ticksax.set_xticks(np.linspace(0, 2*np.pi, 24, endpoint=False))# set the label namesticks = ['12 AM', '1 AM', '2 AM', '3 AM', '4 AM', '5 AM', '6 AM', '7 AM','8 AM','9 AM','10 AM','11 AM','12 PM', '1 PM', '2 PM', '3 PM', '4 PM', '5 PM', '6 PM', '7 PM', '8 PM', '9 PM', '10 PM', '11 PM' ]ax.set_xticklabels(ticks)# suppress the radial labelsplt.setp(ax.get_yticklabels(), visible=False)#Bars to the wallplt.ylim(0,2)plt.legend(bbox_to_anchor=(1,0), fancybox=True, shadow=True)plt.show()As you can assume from the data, all bars plotted would be in the afternoon, but as you can see from the graph output, the data is all over the place. | import numpy as npfrom matplotlib import pyplot as pltimport datetimedf = pd.DataFrame({'Day': {0: '5/22', 1: '5/22', 2: '5/22', 3: '5/22', 4: '5/22'}, 'Time': {0: '16:15', 1: '19:50', 2: '20:15', 3: '21:00', 4: '23:30'}, 'Feeding_Quality': {0: 'G', 1: 'G', 2: 'G', 3: 'F', 4: 'G'}, 'Feed_Num': {0: 2, 1: 2, 2: 2, 3: 1, 4: 2}})Create a series of datetime.datetime objects from the 'Time' column; transform that into percentages of 24 hours; transform that into radians.xs = pd.to_datetime(df['Time'],format= '%H:%M' )xs = xs - datetime.datetime.strptime('00:00:00', '%H:%M:%S')xs = xs.dt.seconds / (24 * 3600)xs = xs * 2 * np.piUse that as the x values for the plotfig = plt.figure(figsize=(7,7))ax = plt.subplot(111, projection = 'polar')ax.bar(xs, df['Feed_Num'], width = 0.1, alpha=0.3, color='red', label='Day 1')# Make the labels go clockwiseax.set_theta_direction(-1)#Place Zero at Topax.set_theta_offset(np.pi/2)#Set the circumference ticksax.set_xticks(np.linspace(0, 2*np.pi, 24, endpoint=False))# set the label namesticks = ['12 AM', '1 AM', '2 AM', '3 AM', '4 AM', '5 AM', '6 AM', '7 AM','8 AM','9 AM','10 AM','11 AM','12 PM', '1 PM', '2 PM', '3 PM', '4 PM', '5 PM', '6 PM', '7 PM', '8 PM', '9 PM', '10 PM', '11 PM' ]ax.set_xticklabels(ticks)# suppress the radial labelsplt.setp(ax.get_yticklabels(), visible=False)#Bars to the wallplt.ylim(0,2)plt.legend(bbox_to_anchor=(1,0), fancybox=True, shadow=True)plt.show()The 'Time' column could also be transformed to radians withdef trans(x): h,m = map(int,x) return 2 * np.pi * (h + m/60)/24xs = df['Time'].str.split(':')xs = xs.apply(trans)Which is probably a little better than using timedelta's - that seemed a little convoluted. |
How to fix 'else' outputting more than 1 outcome Very basic problem, trying to output if a number is divisible by 3/5/both/none but else will return 2 statements when they are not true. How do I fix this?I've tried to move where the else is indented, first time it wouldn't output for the numbers that are not multiples of 3 or 5 and second time it would output two answers.while True: z = input("Please enter a number- to end the program enter z as -1 ") if z % 3 == 0 and z % 5 ==0: print("Your number is a multiple of 3 and 5") elif z % 3 == 0 and z % 5 != 0: print("Your number is a multiple of 3") elif z % 3 != 0 and z % 5 ==0: print("Your number is a multiple of 5") if z == -1: break else: print("Your number is not a multiple of 3 or 5")I.e. if 67 is entered Your number is not a multiple of 3 or 5 as expected. But if 15 is entered Your number is a multiple of 3 and 5 and Your number is not a multiple of 3 or 5 is the unexpected output. | If you incorporate all the comment suggestions so far you get something like this:while True: z = input("Please enter a number- to end the program enter z as -1 ") # cast to int z = int(z) # break early if z == -1: break elif z % 3 == 0 and z % 5 == 0: print("Your number is a multiple of 3 and 5") elif z % 3 == 0: print("Your number is a multiple of 3") elif z % 5 == 0: print("Your number is a multiple of 5") else: print("Your number is not a multiple of 3 or 5") |
pytest will not run the test files in subdirectories I am new to pytest and trying to run a simple test to check if pytest works. I'm using windows 10, python 3.8.5 and pytest 6.0.1.Here is my project directory:projects/ tests/ __init__.py test_sample.pyHere is what I put in test_sample.py:def func(x): return x + 1def test_answer(): assert func(3) == 5if I do the following:> pytest the test run fine (1 failed in 0.004s)> pytest tests/test_sample.py the test run fine (1 failed in 0.006s) However, if I do this:> pytest test_sample.pyIt will return a message like this:no test ran in 0.000sERROR: file not found: test_sample.pyI tried deleting __init__.py file but the result was still the same. Also, I have tried this on 2 different computers but nothing changed. In case the problem can't be solved, can I just ignore it and move on with the solutions I'm having? | The "best practices" approach to configuring a project with pytest is using a config file. The simplest solution is a pytest.ini that looks like this:# pytest.ini[pytest]testpaths = testsThis configures the testpaths relative to your rootdir (pytest will tell you what both paths are whenever you run it). This answers the specific problem you raised in your question.C:\YourProject <<-- Run pytest on this path and it will be considered your rootdir.││ pytest.ini│ your_module.py│├───tests <<-- This is the directory you configured as testpaths in pytest.ini│ __init__.py│ test_sample.pyYour example was about running specific tests from the command line. The complete set of rules for finding the rootdir from args is somewhat contrived.You should notice that pytest currently supports two possible layouts for your tests and modules. It's currently strongly suggested by pytest documentation to use a src layout. Answering about the importance of using __init__.py depends on the former to an extent, however choosing a configuration file and layout still takes precedence over how you choose to use __init__.py to define your packages. |
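For completeness, the src layout the pytest docs recommend looks roughly like this (package and module names are placeholders); with this layout the package under src is normally installed into the environment (e.g. pip install -e .) so the tests import it like any other dependency:

C:\YourProject
│   pytest.ini
│
├───src
│   └───your_package
│           __init__.py
│           your_module.py
│
└───tests
        test_sample.py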
Updating R that is used within IPython/ Jupyter I wanted to use R within Jupyter Notebook so I installed via R Essentials (see: https://www.continuum.io/blog/developer/jupyter-and-conda-r). The version that got installed is the following:R.Version()Out[2]:$platform"x86_64-w64-mingw32"$arch"x86_64"$os"mingw32"$system"x86_64, mingw32"$status""$major"3"$minor"1.3"$year"2015"$month"03"$day"09"$svn rev"67962"$language"R"$version.string"R version 3.1.3 (2015-03-09)"$nickname"Smooth Sidewalk"I have attempted to update R and install some packages (like RWeka for example) to no avail. I have looked for various sources but nothing seems to point me in the right direction. Does anyone know what to do?My main motivation is trying to use R libaries but will get warnings like the following:library("RWeka")Warning message:: package 'RWeka' was built under R version 3.2.4Warning message:In unique(paths): bytecode version mismatch; using eval | If you want to stay with conda packages, try conda update --all, but I think there are still no R 3.2.x packages for windows.You can also install R via the binary installer available at r-project.org, install the R kernel manually; e.g. via install_github("irkernel/repr")install_github("irkernel/IRdisplay")install_github("irkernel/IRkernel")and then make this kernel available in the notebook IRkernel::installspec(name = 'ir32', displayname = 'R 3.2') |
How to find orphan process's pid How can I find child process pid after the parent process died.I have program that creates child process that continues running after it (the parent) terminates.i.e.,I run a program from python script (PID = 2).The script calls program P (PID = 3, PPID = 2)P calls fork(), and now I have another instance of P named P` (PID = 4 and PPID = 3).After P terminates P` PID is 4 and PPID is 1.Assuming that I have the PID of P (3), how can I find the PID of the child P`?Thanks. | The information is lost when a process-in-the-middle terminates. So in your situation there is no way to find this out.You can, of course, invent your own infrastructure to store this information at forking time. The middle process (PID 3 in your example) can of course save the information which child PIDs it created (e. g. in a file or by reporting back to the father process (PID 1 in your example) via pipes or similar). |
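A minimal sketch of the "save it at forking time" idea in Python (the file path is just an example):

import os, time

child_pid_file = "/tmp/child.pid"   # example location; pick something appropriate

pid = os.fork()
if pid == 0:
    # child (the future orphan): keeps running after its parent exits
    time.sleep(60)
    os._exit(0)
else:
    # middle process: record the child's PID somewhere before terminating
    with open(child_pid_file, "w") as f:
        f.write(str(pid))
    os._exit(0)

Any process that later needs the orphan can read /tmp/child.pid to recover its PID; reporting the PID back to the grandparent over a pipe before the middle process exits works the same way.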
"Threading" in Python, plotting received data and sending simultaneously I am asking for some high level advice here. I am using Python to plot data that is received constantly through serial. At the same time, I want the user to be able to input data through a prompt (such as the Python shell). That data will then be sent through the same serial port to talk to the device that is also sending the data. My problem is that the plotting app.MainLoop() "Thread" seems to block and it wont show my raw_input portion until the window is closed. I've also tried putting those 4 lines inside my while loop but the same problem occurs- it lets me input all my information once, but once plotting starts it blocks forever until I close the graphing window.if __name__ == '__main__': app = wx.App() window = DataLoggerWindow() window.Show() app.MainLoop() prompt_counter = "main" while(1): if prompt_counter == "main": ans = raw_input("Press f for freq, press a for amplitude: \n") if ans == "f": prompt_counter = "freq" elif ans == "a": prompt_counter = "amp" else: prompt_counter = "main" elif prompt_counter == "freq": freq = raw_input("Enter the frequency you want to sample at in Hz: \n") ser.write("f"+freq+"\n") prompt_counter = "main" elif prompt_counter == "amp": amp = raw_input("Type in selection") ser.write("a"+amp+"\n") prompt_counter = "main"All the plotting portion does is read the serial port, and print the data received. Both portions work separately with the device on the backend. So I'm pretty sure this is a problem with how I wrote the Python code but I'm not sure why....any ideas? | Disclaimer: I don't think that the following is good practice.You can put the execution of the wx stuff inside a separate thread.app = wx.App()window = DataLoggerWindow()import threadingclass WindowThread(threading.Thread): def run(self): window.Show() app.MainLoop()WindowThread().start()That way, the MainLoop is only blocking another thread and the main thread should still be usable.However, I think that this is not the optimal approach and you should rather use something like the App.OnIdle hook. |
building dictionary to be JSON encoded - python I have a list of class objects. Each object needs to be added to a dictionary so that it can be json encoded. I've already determined that I will need to use the json library and dump method. The objects look like this:class Metro: def __init__(self, code, name, country, continent, timezone, coordinates, population, region): self.code = code #string self.name = name #string self.country = country #string self.continent = continent #string self.timezone = timezone #int self.coordinates = coordinates #dictionary as {"N" : 40, "W" : 88} self.population = population #int self.region = region #intSo the json will look like this: { "metros" : [ { "code" : "SCL" , "name" : "Santiago" , "country" : "CL" , "continent" : "South America" , "timezone" : -4 , "coordinates" : {"S" : 33, "W" : 71} , "population" : 6000000 , "region" : 1 } , { "code" : "LIM" , "name" : "Lima" , "country" : "PE" , "continent" : "South America" , "timezone" : -5 , "coordinates" : {"S" : 12, "W" : 77} , "population" : 9050000 , "region" : 1 } , {...Is there a simple solution for this? I've been looking into dict comprehension but it seems it will be very complicated. | dict comprehension will not be very complicated.import jsonlist_of_metros = [Metro(...), Metro(...)]fields = ('code', 'name', 'country', 'continent', 'timezone', 'coordinates', 'population', 'region',)d = { 'metros': [ {f:getattr(metro, f) for f in fields} for metro in list_of_metros ]}json_output = json.dumps(d, indent=4) |
Tensorflow: building graph with batch sizes varying in dimension 1? I'm trying to build a CNN model in Tensorflow where all the inputs within a batch are equal shape, but between batches the inputs vary in dimension 1 (i.e. minibatch sizes are the same but minibatch shapes are not). To make this clearer, I have data (Nx23x1) of various values N that I sort in ascending order first. In each batch (50 samples) I zero-pad every sample so that each N_i equals the max N within its minibatch. Now I have defined Tensorflow placeholder for the batch input:input = tf.placeholder(tf.float32, shape=(batch_size, None, IMAGE_WIDTH, NUM_CHANNELS))I use 'None' in the input placeholder because between batches this value varies, even though within a batch it doesn't. In running my training code, I use a feed_dict to pass in values for input (numpy matrix) as defined in the tutorials.My CNN code takes in this input; however this is where I run into issues. I get a ValueError when trying to flatten the input just before my fully connected layers. It tries to flatten the array but one of the dimensions is still 'None'. So then I tried:length = tf.shape(input)[1]reshaped = tf.reshape(input, [batch_size, length, IMAGE_WIDTH, NUM_CHANNELS])But still the value is 'None' and I am getting issues when trying to build the graph initially. My FC layer (and in flattening) explicitly takes in 'input_to_layer.get_shape()[1]' in building the weight and bias tensors, but it cannot handle the None input. I am quite lost as to how to proceed! Help would be much appreciated, thanks :) ## EDIT ##Danevskyi points out below that this may not be possible. What if instead of the fully connected layer, I wanted to mean pool over the entire caption (i.e. for the 1024 flat filters of size (D,) outputted from the prior conv layer, I want to create a 1024-dim vector by mean pooling over the length D of each filter)? Is this possible with 1D Global Avg Pooling? Again between batches the value of D would vary...## UPDATE ##The global mean pooling method from tflearn (tflearn.layers.conv.global_avg_pool) doesn't need a specified window size, it uses the full input dimension, so it's compatible even with unknown 'None' dimensions in TensorFlow. | There is no way to do this, as you want to use a differently shaped matrix (for fully-connected layer) for every distinct batch. One possible solution is to use global average pooling (along all spatial dimensions) to get a tensor of shape (batch_size, 1, 1, NUM_CHANNELS) regardless of the second dimension. |
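For the global mean pooling mentioned in the update, a plain TensorFlow reduction over the spatial axes also works even when one of them is None (a sketch; conv_out stands in for the output of the last conv layer and the shapes are illustrative):

import tensorflow as tf

# stand-in for the last conv layer's output: (batch, None, width', 1024)
conv_out = tf.placeholder(tf.float32, shape=(50, None, 23, 1024))

# mean over both spatial axes; this works even though axis 1 is None,
# because reduce_mean only needs the axis indices, not their sizes
pooled = tf.reduce_mean(conv_out, axis=[1, 2])   # shape (50, 1024)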
How can I find out if a file-like object performs newline translation? I have a library that does some kind of binary search in a seekable open file that it receives as an argument.The file must have been opened with open(..., newline="\n"), otherwise .seek() and .tell() might not work properly if there's newline translation.The README of the library does make this thing clear, but still it's easy to miss. I missed it myself and I was wondering why things aren't working properly. I'd therefore like to make the library raise an error or at least a warning if it receives a file-like object that performs text translation. Is it possible to make this check? | I see two ways around this. One is Python 3.7's io.TextIOWrapper.reconfigure() (thanks @martineau!).The second one is to make some tests to see whether seek/tell work as expected. A simple but inefficient way to do it is this:from io import SEEK_ENDdef has_newlines_translated(f): f.seek(0) file_size_1 = len(f.read()) file_size_2 = f.seek(0, SEEK_END) - 1 return file_size_1 != file_size_2It may be possible to do it more efficiently by reading character by character (with f.read(1)) until past the first newline and playing with seek()/tell() to see whether results are consistent, but it's tricky and it wouldn't work in all cases (e.g. if the first newline is a lone \n whereas other newlines are \r\n). |
Opencv - Ellipse Contour Not fitting correctly I want to draw contours around the concentric ellipses shown in the image appended below. I am not getting the expected result. I have tried the following steps:Read the Image Convert Image to Grayscale.Apply GaussianBlurGet the Canny edgesDraw the ellipse contourHere is the Source code:import cv2target=cv2.imread('./source image.png')targetgs = cv2.cvtColor(target,cv2.COLOR_BGRA2GRAY)targetGaussianBlurGreyScale=cv2.GaussianBlur(targetgs,(3,3),0)canny=cv2.Canny(targetGaussianBlurGreyScale,30,90)kernel=cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))close=cv2.morphologyEx(canny,cv2.MORPH_CLOSE,kernel)_,contours,_=cv2.findContours(close,cv2.RETR_TREE,cv2.CHAIN_APPROX_NONE)if len(contours) != 0: for c in contours: if len(c) >= 50: hull=cv2.convexHull(c) cv2.ellipse(target,cv2.fitEllipse(hull),(0,255,0),2)cv2.imshow('mask',target)cv2.waitKey(0)cv2.destroyAllWindows()The image below shows the Expected & Actual result:Source Image: | Algorithm can be simple:Convert RGB to HSV, split and working with a V channel.Threshold for delete all color lines.HoughLinesP for delete non color lines.dilate + erosion for close holes in ellipses.findContours + fitEllipse.Result:With new image (added black curve) my approach do not works. It seems that you need to use Hough ellipse detection instead "findContours + fitEllipse".OpenCV don't have implementation but you can find it here or here.If you don't afraid C++ code (for OpenCV library C++ is more expressive) then:cv::Mat rgbImg = cv::imread("sqOOE.jpg", cv::IMREAD_COLOR);cv::Mat hsvImg;cv::cvtColor(rgbImg, hsvImg, cv::COLOR_BGR2HSV);std::vector<cv::Mat> chans;cv::split(hsvImg, chans);cv::threshold(255 - chans[2], chans[2], 200, 255, cv::THRESH_BINARY);std::vector<cv::Vec4i> linesP;cv::HoughLinesP(chans[2], linesP, 1, CV_PI/180, 50, chans[2].rows / 4, 10);for (auto l : linesP){ cv::line(chans[2], cv::Point(l[0], l[1]), cv::Point(l[2], l[3]), cv::Scalar::all(0), 3, cv::LINE_AA);}cv::dilate(chans[2], chans[2], cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3)), cv::Point(-1, -1), 4);cv::erode(chans[2], chans[2], cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3)), cv::Point(-1, -1), 3);std::vector<std::vector<cv::Point> > contours;std::vector<cv::Vec4i> hierarchy;cv::findContours(chans[2], contours, hierarchy, cv::RETR_TREE, cv::CHAIN_APPROX_SIMPLE);for (size_t i = 0; i < contours.size(); i++){ if (contours[i].size() > 4) { cv::ellipse(rgbImg, cv::fitEllipse(contours[i]), cv::Scalar(255, 0, 255), 2); }}cv::imshow("rgbImg", rgbImg);cv::waitKey(0); |
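Since the question is in Python, here is a rough cv2 translation of the C++ above (same steps and thresholds; the findContours unpacking may need adjusting for your OpenCV version):

import cv2
import numpy as np

rgb_img = cv2.imread('sqOOE.jpg', cv2.IMREAD_COLOR)
hsv_img = cv2.cvtColor(rgb_img, cv2.COLOR_BGR2HSV)
h, s, v = cv2.split(hsv_img)

# threshold the inverted V channel to drop the colored lines
_, v = cv2.threshold(255 - v, 200, 255, cv2.THRESH_BINARY)

# remove the remaining straight (non-elliptical) lines
lines = cv2.HoughLinesP(v, 1, np.pi / 180, 50,
                        minLineLength=v.shape[0] // 4, maxLineGap=10)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(v, (x1, y1), (x2, y2), 0, 3, cv2.LINE_AA)

# close the holes the removed lines left in the ellipses
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
v = cv2.dilate(v, kernel, iterations=4)
v = cv2.erode(v, kernel, iterations=3)

contours = cv2.findContours(v, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)[-2]
for c in contours:
    if len(c) > 4:                      # fitEllipse needs at least 5 points
        cv2.ellipse(rgb_img, cv2.fitEllipse(c), (255, 0, 255), 2)

cv2.imshow('rgb_img', rgb_img)
cv2.waitKey(0)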
How can I delete stopwords from a column in a df? I've been trying to delete the stopwords from a column in a df, but I'm having trouble doing it.discografia["SSW"] = [word for word in discografia.CANCIONES if not word in stopwords.words('spanish')]But in the new column I just get the same words as in the column "CANCIONES". What am I doing wrong? Thanks! | We can use explode in conjunction with grouping by the original index to assign back to the original DataFrame.stopwords = ["buzz"]df = pd.DataFrame({"CANCIONES": [["fizz", "buzz", "foo"], ["baz", "buzz"]]})words = r"|".join(stopwords)exploded = df.explode("CANCIONES")print(exploded) CANCIONES0 fizz0 buzz0 foo1 baz1 buzzdf["SSW"] = exploded.loc[~exploded.CANCIONES.str.contains(words)].reset_index().groupby( "index", as_index=False).agg({"CANCIONES": list}).CANCIONESprint(df) CANCIONES SSW0 [fizz, buzz, foo] [fizz, foo]1 [baz, buzz] [baz]
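If each cell of CANCIONES is simply a list of words, a per-row list comprehension is a simpler alternative (a sketch; here the Spanish stopword list from NLTK, which the question already uses, is loaded once up front):

from nltk.corpus import stopwords

spanish_stopwords = set(stopwords.words('spanish'))

discografia["SSW"] = discografia["CANCIONES"].apply(
    lambda words: [w for w in words if w not in spanish_stopwords]
)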
Running interactive python script from emacs I am a fairly proficient vim user, but friends of mine told me so much good stuff about emacs that I decided to give it a try -- especially after finding about the aptly-named evil mode...Anyways, I am currently working on a python script that requires user input (a subclass of cmd.Cmd). In vim, if I wanted to try it, I could simply do :!python % and then could interact with my script, until it quits. In emacs, I tried M-! python script.py, which would indeed run the script in a separate buffer, but then RETURNs seems not to be sent back to the script, but are caught by the emacs buffer instead. I also tried to have a look at python-mode's C-c C-c, but this runs the script in some temporary directory, whereas I just want to run it in (pwd).So, is there any canonical way of doing that? | I don't know about canonical, but if I needed to interact with a script I'd do M-xshellRET and run the script from there.There's also M-xterminal-emulator for more serious terminal emulation, not just shell stuff. |
How can I print the entire converted sentence on a single line? I am trying to expand on Codeacademy's Pig Latin converter to practice basic programming concepts. I believe I have the logic nearly right (I'm sure it's not as concise as it could be!) and now I am trying to output the converted Pig Latin sentence entered by the user on a single line.If I print from inside the for loop it prints on new lines each time. If I print from outside it only prints the first word as it is not iterating through all the words. Could you please advise where I am going wrong?Many, many thanks for your help.pyg = 'ay'print ("Welcome to Matt's Pig Latin Converter!")def convert(original): while True: if len(original) > 0 and (original.isalpha() or " " in original): print "You entered \"%s\"." % original split_list = original.split() for word in split_list: first = word[0] new_sentence = word[1:] + first + pyg final_sentence = "".join(new_sentence) print final_sentence break else: print ("That's not a valid input. Please try again.") return convert(raw_input("Please enter a word: "))convert(raw_input("Please enter a word: ")) | Try:pyg = 'ay'print ("Welcome to Matt's Pig Latin Converter!")def convert(original): while True: if len(original) > 0 and (original.isalpha() or " " in original): final_sentence = "" print "You entered \"%s\"." % original split_list = original.split() for word in split_list: first = word[0] new_sentence = word[1:] + first + pyg final_sentence = final_sentence + new_sentence + " " print final_sentence break else: print ("That's not a valid input. Please try again.") return convert(raw_input("Please enter a word: "))convert(raw_input("Please enter a word: "))It's because you are remaking final_sentence every time in the for loop instead of adding to it; note that strings have no append method, so each converted word is concatenated onto final_sentence with + instead.
How to use if statements on Tags in Beautiful Soup? I'm a beginner using Beautiful Soup and I have a question to do with 'if' statements. I am trying to scrape data from tables on a webpage, but there are preceding and following tables too. All the required tables have divisions with align="center", while the useless tables have various divisions. What I thought of doing was using find_all to search for all table divisions and then looping through the result, appending to a list all of the divisions whose .contents has, as its first item, a tag with the attribute align = 'center'. But I didn't know how to do that with the tag being a Beautiful Soup object and not knowing how to work with it. I have my attempted code below and if anyone could give me some tips it would be greatly appreciated.import requestsfrom bs4 import BeautifulSoupr = requests.get('https://afltables.com/afl/stats/2018.html')soup = BeautifulSoup(r.text, 'html.parser')results = soup.find_all('tr')lists =[]for result in results: if result.contents[0] == 'align = centre': #append to some list | This would get you what you are looking for, I believe.for result in results: if 'align="center"' in str(result.contents[0]): #append to some list
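A slightly more robust variant checks the tag's attribute directly instead of its string form (a sketch building on the loop above):

from bs4 import Tag

lists = []
for result in results:
    first = result.contents[0] if result.contents else None
    # only Tag objects carry attributes; NavigableStrings do not
    if isinstance(first, Tag) and first.get('align') == 'center':
        lists.append(result)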
How to click HTML button in Python + Selenium I am trying to simulate button click in Python using Selenium. <li class="next" role="button" aria-disabled="false"><a href="www.abc.com">Next →</a></li>The Python script is driver.find_element_by_class_name('next').click().This gives an error. Can someone suggest me how to simulate a button class? | You can try the following code:from selenium.webdriver.support import uifrom selenium.webdriver.support import expected_conditions as ECfrom selenium.webdriver.common.by import Byui.WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.CSS_SELECTOR, ".next[role='button']"))).click()Hope it helps you! |
Algorithm for grouping points in given distance I'm currently searching for an efficient algorithm that takes in a set of points from three dimensional spaces and groups them into classes (maybe represented by a list). A point should belong to a class if it is close to one or more other points from the class. Two classes are then the same if they share any point.Because I'm working with large data sets, I don't want to use recursive methods. Also, using something like a distance matrix with O(n^2) performance is what I try to avoid.I tried to check for some algorithms online, but most of them don't appeal to this specific purpose (e.g. k-d tree or other cluster algorithms). I thought about parting space into smaller parts, but that (potentially) results in an inexact result.I tried to write something myself, but it turned out to be flawed. I would sort my points by distance and append the distance as a fourth coordinate and then repeat the following code segment:def grouping_presorted(lst, distance): positions = [0] x = [] while positions: curr_el = lst[ positions[-1] ] nn_i = HasNeighbor(lst, distance, positions[-1]) if nn_i is None: x.append(lst.pop(positions[-1]) ) positions.pop(-1) else: positions.append(nn_i) return xdef HasNeighbor(lst,distance,index): i = index+1 while lst[i][3]- lst[index][3] < distance: dist = (lst[i][0]-lst[index][0])**2 + (lst[i][1]-lst[index][1])**2 + (lst[i][2]-lst[index][2])**2 if dist < distance: return i i+=1 return NoneAside from an (probably easy to fix) overflow error, there's a bigger flaw in the logic of linking the points. If you think of my points describing lines in space, the algorithm only works for lines that strictly point outward from the origin, but not for circles or similar structures.Does anybody know of prewritten code for this or have an idea what I could try?Thanks in advance.Edit: It seems my spelling and maybe confusion of some terms has sparked some misunderstandings. I hope that this (badly-made) sketch helps. In this example, I marked my reference distance as d and circled the two containers I want to end up with in red. | What I ended up doingAfter following all the suggestions in your comments, getting help from cs.stackexchange, and doing some research, I was able to write down two different methods for solving this problem. In case someone is interested, I decided to share them here. Again, the problem is to write a program that takes in a set of coordinate tuples and groups them into clusters. Two points x,y belong to the same cluster if there is a sequence of elements x=x_1,..,y=x_N such that d(x_i, x_(i+1)) < r for all consecutive pairs. DBSCAN: By fixing the euclidean metric, minPts = 2 and grouping distance epsilon = r. scikit-learn provides a nice implementation of this algorithm.
A minimal code snippet for the task would be:from sklearn.cluster import DBSCANfrom sklearn.datasets.samples_generator import make_blobsimport networkx as nximport scipy.spatial as spdef cluster(data, epsilon,N): #DBSCAN, euclidean distance db = DBSCAN(eps=epsilon, min_samples=N).fit(data) labels = db.labels_ #labels of the found clusters n_clusters = len(set(labels)) - (1 if -1 in labels else 0) #number of clusters clusters = [data[labels == i] for i in range(n_clusters)] #list of clusters return clusters, n_clusterscenters = [[1, 1,1], [-1, -1,1], [1, -1,1]]X,_ = make_blobs(n_samples=N, centers=centers, cluster_std=0.4, random_state=0)cluster(X,epsilon,N)On my machine, N=20000 for this clustering variation with an epsilon of epsilon = 0.1 takes just 290ms, so this seems really quick to me.Graph components: One can think of this problem as follows: The coordinates define nodes of a graph, and two nodes are adjacent if their distance is smaller than epsilon/r. A cluster is then given as a connected component of this graph. At first I had problems implementing this graph, but there are many ways to write a linear time algorithm to do this. The easiest and fastest way however, for me, was to use scipy.spatial's cKDTree data structure and the corresponding query_pairs() method, that returns a list of indice tuples of points that are in given distance. One could for example write it like this:class IGraph: def __init__(self, nodelst=[], radius = 1): self.igraph = nx.Graph() self.radii = radius self.nodelst = nodelst #nodelst is array of coordinate tuples, graph contains indices as nodes self.__make_edges__() def __make_edges__(self): self.igraph.add_edges_from( sp.cKDTree(self.nodelst).query_pairs(r=self.radii) ) def get_conn_comp(self): ind = [list(x) for x in nx.connected_components(self.igraph) if len(x)>1] return [self.nodelst[indlist] for indlist in ind]def graph_cluster(data, epsilon): graph = IGraph(nodelst = data, radius = epsilon) clusters = graph.get_conn_comp() return clusters, len(clusters)For the same dataset mentioned above, this method takes 420ms to find the connected components. However, for smaller clusters, e.g. N=700, this snippet runs faster. It also seems to have an advantage for finding smaller clusters (that is being given smaller epsilon values) and a vast disadvantage in the other direction (all on this specific dataset of course). I think, depending on the given situation, both methods are worth considering.Hope this is of use for somebody.Edit: Theoretically, DBSCAN has computational complexity O(n log n) when properly implemented (according to wikipedia...), while constructing the graph as well as finding its connected components runs linear in time. I'm not sure how well these statements hold for the given implementations though. |
Please help. I get this error: "SyntaxError: Unexpected EOF while parsing" try: f1=int(input("enter first digit")) f2=int(input("enter second digit")) answ=(f1/f2) print (answ)except ZeroDivisionError: | You can't have an except line with nothing after it. You have to have some code there, even if it doesn't do anything.try: f1=int(input("enter first digit")) f2=int(input("enter second digit")) answ=(f1/f2) print (answ)except ZeroDivisionError: pass |
What is the role of magic methods in Python? Based on my understanding, magic methods such as __str__, __next__, and __setattr__ are built-in features in Python. They are called automatically, for example when an instance object is created, and they also play a role in overriding. What other important features of magic methods am I omitting or ignoring? | "magic" methods in Python do specific things in specific contexts.For example, to "override" the addition operator (+), you'd define an __add__ method. Subtraction is __sub__, etc.Other methods are called during object creation (__new__, __init__). Other methods are used with specific language constructs (__enter__, __exit__, and you might argue __init__ and __next__).Really, there's nothing special about magic methods other than that they are guaranteed to be called by the language at specific times. As the programmer, you're given the power to hook into that structure and change the way an object behaves in those circumstances.For a near-complete summary, have a look at the Python data model.
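A tiny illustration of the "specific things in specific contexts" point (the Money class is just an example):

class Money:
    def __init__(self, amount):        # called when the object is created
        self.amount = amount

    def __add__(self, other):          # called for the + operator
        return Money(self.amount + other.amount)

    def __str__(self):                 # called by str() and print()
        return "$%.2f" % self.amount

a = Money(2.50)
b = Money(1.25)
print(a + b)   # Python runs a.__add__(b), then __str__ on the result -> $3.75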
Persisting test data across apps My Django site has two apps — Authors and Books. My Books app has a model which has a foreign key to a model in Authors. I have some tests for the Authors app which tests all my models and managers and this works fine. However, my app Books require some data from the Authors app in order to function.Can I specify the order in which my tests are run and make the generated test data from app Authors persist so that I can test my Books app whithout having to copy over the test which generate data from the Authors app.I might be doing this all wrong. Am I?Thanks. | Create a fixture containing the test data you need. You can then load the same data for both your Authors and Books tests.For details, see docs on Testcase.fixures and Introduction to Python/Django tests: Fixtures. |
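A minimal sketch of what that looks like in the Books app's tests (the fixture file name, model import, and field are assumptions based on the question's app names, not part of the original answer):

# books/tests.py
from django.test import TestCase
from books.models import Book

class BookTests(TestCase):
    # loaded fresh into the test database before every test method
    fixtures = ['authors_and_books.json']

    def test_book_has_author(self):
        book = Book.objects.get(pk=1)
        self.assertIsNotNone(book.author)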
loop through numpy arrays, plot all arrays to single figure (matplotlib) the functions below each plot a single numpy array plot1D, plot2D, and plot3D take arrays with 1, 2, and 3 columns, respectivelyimport numpy as np import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3Ddef plot1D(data): x=np.arange(len(data)) plot2D(np.hstack((np.transpose(x), data)))def plot2D(data): # type: (object) -> object #if 2d, make a scatter plt.plot(data[:,0], data[:,1], *args, **kwargs)def plot3D(data): #if 3d, make a 3d scatter fig = plt.figure() ax = fig.add_subplot(111, projection='3d') ax.plot(data[:,0], data[:,1], data[:,2], *args, **kwargs)I would like the ability to input a list of 1, 2, or 3d arrays and plot all arrays from the list onto one figureI have added the looping elements, but am unsure how hold a figure and add additional plots...def plot1D_list(data): for i in range(0, len(data)): x=np.arange(len(data[i])) plot2D(np.hstack((np.transpose(x), data[i])))def plot2D_list(data): # type: (object) -> object #if 2d, make a scatter for i in range(0, len(data)): plt.plot(data[i][:,0], data[i][:,1], *args, **kwargs)def plot3D_list(data): #if 3d, make a 3d scatter for i in range(0, len(data)): fig = plt.figure() ax = fig.add_subplot(111, projection='3d') ax.plot(data[i][:,0], data[i][:,1], data[i][:,2], *args, **kwargs) | To plot multiple data sets on the same axes, you can do something like this:def plot2D_list(data,*args,**kwargs): # type: (object) -> object #if 2d, make a scatter n = len(data) fig,ax = plt.subplots() #create figure and axes for i in range(n): #now plot data set i ax.plot(data[i][:,0], data[i][:,1], *args, **kwargs)Your other functions can be generalised in the same way. Here's an example of using the above function with a 5 sets of randomly generated x-y coordinates, each with length 100 (each of the 5 data sets appears as a different color):import numpy as npX = np.random.randn(5,100,2)plot2D_list(X,'o')plt.show() |
remove arguments passed to chrome by selenium / chromedriver I'm using selenium with python and chromium / chromedriver. I want to REMOVE switches passed to chrome (e.g. --full-memory-crash-report), but so far I could only find out how to add further switches.My current setup:from selenium import webdriverdriver = webdriver.Chrome(executable_path="/path/to/chromedriver")driver.get(someurl)As far as I understand this can be used to add arguments:from selenium.webdriver.chrome.options import Optionschrome_options = Options()chrome_options.add_argument("--some-switch")driver = webdriver.Chrome(chrome_options=chrome_options)So, how do I get rid of default arguments or wipe all default arguments clean and pass only a custom list? | It helped me:options = webdriver.ChromeOptions()options.add_experimental_option("excludeSwitches", ["test-type"])options.add_argument("--incognito")driver = webdriver.Chrome(options=options)Found solution here https://help.applitools.com/hc/en-us/articles/360007189411--Chrome-is-being-controlled-by-automated-test-software-notification |
importing from a text file to a dictionary filename:dictionary.txtYAHOO:YHOOGOOGLE INC:GOOGHarley-Davidson:HOGYamana Gold:AUYSotheby’s:BIDinBev:BUDcode:infile = open('dictionary.txt', 'r')content= infile.readlines()infile.close()counters ={}for line in content: counters.append(content) print(counters)I am trying to import the contents of the .txt file into a dictionary. I have searched through Stack Overflow, but please give an answer in a simple way (not with open...) | First off, instead of opening and closing the file explicitly, you can use the with statement for opening the file, which closes it automatically at the end of the block.Secondly, since file objects are iterator-like (one-shot iterables), you can loop over the lines and split each one on the : character. You can do all of this with a generator expression inside the dict function: with open('dictionary.txt') as infile: my_dict = dict(line.strip().split(':') for line in infile)
Python Daemon: checking to have one daemon run at all times myalert.pyfrom daemon import Daemonimport os, time, sysclass alertDaemon(Daemon): def run(self): while True: time.sleep(1)if __name__ == "__main__": alert_pid = '/tmp/ex.pid' # if pid doesnt exists run if os.path.isfile(alert_pid): # is this check enough? sys.exit(0) daemon = alertDaemon(alert_pid) daemon.start()Given that no other programs or users will create the pid file:1) Is there a case where pid does not exists yet the daemon process still running?2) Is there a case where pid does exists yet the daemon isnt running?Because if answer is yes to at least one of the questions above, then simply checking for the existence of pid file isnt enough if my goal is have one daemon running at all times. Q: If i have to check for the process then, I am hoping of avoid something like system call ps -ef and grep for the name of the script. Is there a standard way of doing this?Note: the script, myalert.py, will be a cronjob | The python-daemon library, which is the reference implementation for PEP 3143: "Standard daemon process library", handles this by using a file lock (via the lockfile library) on the pid file you pass to the DaemonContext object. The underlying OS guarantees that the file lock will be released when the daemon process exits, even if its uncleanly exited. Here's a simple usage example:import daemonfrom daemon.pidfile import PIDLockFilecontext = daemon.DaemonContext( pidfile= PIDLockFile('/var/run/spam.pid'), )with context: main()So, if a new instance starts up, it doesn't have to determine if the process that created the existing pid file is still running via the pid itself; if it can acquire the file lock, then no other instances are running (since they'd have acquired the lock). If it can't acquire the lock, then another daemon instance must be running.The only way you'd run into trouble is if someone came along and manually deleted the pid file while the daemon was running. But I don't think you need to worry about someone deliberately breaking things in that way.Ideally, python-daemon would be part of the standard library, as was the original goal of PEP 3143. Unfortunately, the PEP got deferred, essentially because there was no one willing to actually do the remaining work needed to get in added to the standard library: Further exploration of the concepts covered in this PEP has been deferred for lack of a current champion interested in promoting the goals of the PEP and collecting and incorporating feedback, and with sufficient available time to do so effectively. |
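For example, the shared data can be produced once with dumpdata and then listed in each app's TestCase (file and app names are illustrative, not from the original answer):

# run once against a database that already holds the Authors test data:
#   python manage.py dumpdata authors --indent 2 > authors/fixtures/authors_test_data.json

# then in both the Authors and Books test modules:
from django.test import TestCase

class BookTests(TestCase):
    fixtures = ['authors_test_data.json']  # reloaded for every test, in any app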
Where (at which point in the code) does pyAMF client accept SSL certificate? I've set up a server listening on an SSL port. I am able to connect to it and with proper credentials I am able to access the services (echo service in the example below)The code below works fine, but I don't understand at which point the client accepts the certificateServer:import os.pathimport loggingimport cherrypyfrom pyamf.remoting.gateway.wsgi import WSGIGatewaylogging.basicConfig( level=logging.DEBUG, format='%(asctime)s %(levelname)-5.5s [%(name)s] %(message)s')def auth(username, password): users = {"user": "pwd"} if (users.has_key(username) and users[username] == password): return True return Falsedef echo(data): return dataclass Root(object): @cherrypy.expose def index(self): return "This is your main website"gateway = WSGIGateway({'myservice.echo': echo,}, logger=logging, debug=True, authenticator=auth)localDir = os.path.abspath(os.path.dirname(__file__))CA = os.path.join(localDir, 'new.cert.cert')KEY = os.path.join(localDir, 'new.cert.key')global_conf = {'global': {'server.socket_port': 8443, 'environment': 'production', 'log.screen': True, 'server.ssl_certificate': CA, 'server.ssl_private_key': KEY}}cherrypy.tree.graft(gateway, '/gateway/')cherrypy.quickstart(Root(), config=global_conf)Client:import loggingfrom pyamf.remoting.client import RemotingServicelogging.basicConfig( level=logging.DEBUG, format='%(asctime)s %(levelname)-5.5s [%(name)s] %(message)s')client = RemotingService('https://localhost:8443/gateway', logger=logging)client.setCredentials('user', 'pwd')service = client.getService('myservice')print service.echo('Echo this')Now, when I run this, it runs OK, the client log is below:2010-01-18 00:50:56,323 INFO [root] Connecting to https://localhost:8443/gateway2010-01-18 00:50:56,323 DEBUG [root] Referer: None2010-01-18 00:50:56,323 DEBUG [root] User-Agent: PyAMF/0.5.12010-01-18 00:50:56,323 DEBUG [root] Adding request myservice.echo('Echo this',)2010-01-18 00:50:56,324 DEBUG [root] Executing single request: /12010-01-18 00:50:56,324 DEBUG [root] AMF version: 02010-01-18 00:50:56,324 DEBUG [root] Client type: 02010-01-18 00:50:56,326 DEBUG [root] Sending POST request to /gateway2010-01-18 00:50:56,412 DEBUG [root] Waiting for response...2010-01-18 00:50:56,467 DEBUG [root] Got response status: 2002010-01-18 00:50:56,467 DEBUG [root] Content-Type: application/x-amf2010-01-18 00:50:56,467 DEBUG [root] Content-Length: 412010-01-18 00:50:56,467 DEBUG [root] Server: PyAMF/0.5.1 Python/2.5.22010-01-18 00:50:56,467 DEBUG [root] Read 41 bytes for the response2010-01-18 00:50:56,468 DEBUG [root] Response: <Envelope amfVersion=0 clientType=0> (u'/1', <Response status=/onResult>u'Echo this'</Response>)</Envelope>2010-01-18 00:50:56,468 DEBUG [root] Removing request: /1Echo thisThe line 2010-01-18 00:50:56,467 DEBUG [root] Read 41 bytes for the response looks suspicious, since the response is too short (the certificate is ~1K) and I'd expect the cert transfer to be in the debug log.Question: At which point does the client accept the certificate? Where would it be stored by default? Which config parameter sets the default location? | PyAMF uses httplib under the hood to power the remoting requests. 
When connecting via https://, httplib.HTTPSConnection is used as the connection attribute to the RemotingService.It states in the docs that (in reference to HTTPSConnection): Note: This does not do any certificate verificationSo, in answer to your question certificates are basically ignored, even if you supply key_file/cert_file arguments to connection.The actual ignoring is done when the connect method is called - when the request is actually made to the gateway .. [root] Sending POST request to /gatewayThe Read 41 bytes for the response is the unencrypted http response length.This answer may not contain all the info you require but should go some way to explaining the behaviour you're seeing. |
NameError: name 'self' is not defined Why such structureclass A: def __init__(self, a): self.a = a def p(self, b=self.a): print bgives an error NameError: name 'self' is not defined? | Default argument values are evaluated at function define-time, but self is an argument only available at function call time. Thus arguments in the argument list cannot refer each other.It's a common pattern to default an argument to None and add a test for that in code:def p(self, b=None): if b is None: b = self.a print bUpdate 2022: Python developers are now considering late-bound argument defaults for future Python versions. |
How to send post requests using multi threading in python? I'm trying to use multi threading to send post requests with tokens from a txt file.I only managed to send GET requests,if i try to send post requests it results in a error.I tried modifying the GET to POST but it gets an error.I want to send post requests with tokens in them and verify for each token if they are true or false. (json response)Here is the code:import threadingimport timefrom queue import Queueimport requestsfile_lines = open("tokens.txt", "r").readlines() # Gets the tokens from the txt file.for line in file_lines: param={ "Token":line.replace('/n','') }def make_request(url): """Makes a web request, prints the thread name, URL, and response text. """ resp = requests.get(url) with print_lock: print("Thread name: {}".format(threading.current_thread().name)) print("Url: {}".format(url)) print("Response code: {}\n".format(resp.text))def manage_queue(): """Manages the url_queue and calls the make request function""" while True: # Stores the URL and removes it from the queue so no # other threads will use it. current_url = url_queue.get() # Calls the make_request function make_request(current_url) # Tells the queue that the processing on the task is complete. url_queue.task_done()if __name__ == '__main__': # Set the number of threads. number_of_threads = 5 # Needed to safely print in mult-threaded programs. print_lock = threading.Lock() # Initializes the queue that all threads will pull from. url_queue = Queue() # The list of URLs that will go into the queue. urls = ["https://www.google.com"] * 30 # Start the threads. for i in range(number_of_threads): # Send the threads to the function that manages the queue. t = threading.Thread(target=manage_queue) # Makes the thread a daemon so it exits when the program finishes. t.daemon = True t.start() start = time.time() # Puts the URLs in the queue for current_url in urls: url_queue.put(current_url) # Wait until all threads have finished before continuing the program. url_queue.join() print("Execution time = {0:.5f}".format(time.time() - start))I want to send a post request for each token in the txt file.Error i get when using replacing get with post:Traceback (most recent call last):File "C:\Users\Creative\Desktop\multithreading.py", line 40, in url_queue = Queue()NameError: name 'Queue' is not definedcurrent_url = url_queue.post()AttributeError: 'Queue' object has no attribute 'post'File "C:\Users\Creative\Desktop\multithreading.py", line 22, in manage_queueAlso tried a solution using tornado and async but none of them with success. | I finally managed to do post requests using multi threading.If anyone sees an error or if you can do an improvement for my code feel free to do it :)import requestsfrom concurrent.futures import ThreadPoolExecutor, as_completedfrom time import timeurl_list = [ "https://www.google.com/api/"]tokens = {'Token': '326729'}def download_file(url): html = requests.post(url,stream=True, data=tokens) return html.contentstart = time()processes = []with ThreadPoolExecutor(max_workers=200) as executor: for url in url_list: processes.append(executor.submit(download_file, url))for task in as_completed(processes): print(task.result())print(f'Time taken: {time() - start}') |
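To post one request per token from tokens.txt (the original goal), the same executor pattern can be reused; a sketch along those lines, keeping the endpoint and the Token parameter from above:

import requests
from concurrent.futures import ThreadPoolExecutor, as_completed

url = "https://www.google.com/api/"   # same endpoint as above

with open("tokens.txt") as f:
    tokens = [line.strip() for line in f if line.strip()]

def check_token(token):
    resp = requests.post(url, data={"Token": token})
    return token, resp.text           # inspect the JSON here to decide true/false

with ThreadPoolExecutor(max_workers=50) as executor:
    futures = [executor.submit(check_token, t) for t in tokens]
    for future in as_completed(futures):
        token, body = future.result()
        print(token, body)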
Scrolled Panel not working in wxPython

class Frame(wx.Frame):
    def __init__(self):
        wx.Frame.__init__(self, None, -1, "SCSM Observatory Log", size=(700, 700))
        panel = wxScrolledPanel.ScrolledPanel(self, -1, size=(800, 10000))
        panel.SetupScrolling()

Could someone please explain why this code is not working? I am not getting any errors, but it's as if the scrolling commands are not being initialized. Edit: The scrolling works, but I have to resize the window and make it smaller to enable the scrolling capabilities. Also, it will not scroll all the way to the bottom. Edit 2: Apparently the scroll bar only scrolls as far as the vertical size of the frame. So if I set the frame y-size to 1000, it will scroll to 1000. The only problem is that a window that large would be too big for the monitor this is used on. Is there a way to force the scrollbar to cover a distance that is larger than the size of the frame? For example, I would like the window to open with a size of (700, 700), but I need the scrollbar to go to 1000. | Not sure why it is not working for you; here is a sample that works for me. I like using sized_controls as they handle sizers nicely (in my view).

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import wx
print(wx.VERSION_STRING)
import wx.lib.sized_controls as SC

class MyCtrl(SC.SizedPanel):
    def __init__(self, parent):
        super(MyCtrl, self).__init__(parent)
        tx1 = wx.TextCtrl(self)
        tx1.SetSizerProps(expand=True, proportion=1)
        tx2 = wx.TextCtrl(self)
        tx2.SetSizerProps(expand=True, proportion=1)

class MyFrame(SC.SizedFrame):
    def __init__(self, parent):
        super(MyFrame, self).__init__(parent, style=wx.RESIZE_BORDER|wx.DEFAULT_DIALOG_STYLE)
        pane = self.GetContentsPane()
        st = wx.StaticText(pane, label='Text')
        sp = SC.SizedScrolledPanel(pane)
        sp.SetSizerProps(expand=True, proportion=1)
        mc1 = MyCtrl(sp)
        mc2 = MyCtrl(sp)

if __name__ == '__main__':
    import wx.lib.mixins.inspection as WIT
    app = WIT.InspectableApp()
    frame = MyFrame(None)
    frame.Show()
    app.MainLoop()
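A minimal sketch addressing the size question directly, assuming the usual wx.lib.scrolledpanel import: let a sizer define the panel's virtual height instead of forcing size=(800, 10000), and call SetupScrolling() after the content has been added. The frame can then stay at (700, 700) while the scrollbar covers all of the content:

import wx
import wx.lib.scrolledpanel as scrolled

class Frame(wx.Frame):
    def __init__(self):
        wx.Frame.__init__(self, None, -1, "SCSM Observatory Log", size=(700, 700))
        panel = scrolled.ScrolledPanel(self, -1)
        sizer = wx.BoxSizer(wx.VERTICAL)
        for i in range(50):   # dummy content taller than the frame
            sizer.Add(wx.StaticText(panel, label="Row %d" % i), 0, wx.ALL, 5)
        panel.SetSizer(sizer)
        panel.SetupScrolling(scroll_x=False, scroll_y=True)

if __name__ == '__main__':
    app = wx.App(False)
    Frame().Show()
    app.MainLoop()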
Not able to add a column from a pandas data frame to mysql in python I have connected to mysql from python and I can add a whole data frame to sql by using df.to_sql command. When I am adding/updating a single column from pd.DataFrame, not able udate/add.Here is the information about dataset, result,In [221]: result.shapeOut[221]: (226, 5)In [223]: result.columnsOut[223]: Index([u'id', u'name', u'height', u'weight', u'categories'], dtype='object')I have the table already in the database with all the columns except categories, so I just need to add the column to the table. From these,Python MYSQL update statementProgrammingError: (1064, 'You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntaxcursor.execute("ALTER TABLE content_detail ADD category VARCHAR(255)" % result["categories"])This can be successfully add the column but with all NULL values,and when I was trying thiscursor.execute("ALTER TABLE content_detail ADD category=%s VARCHAR(255)" % result["categories"])ends with following errorProgrammingError Traceback (most recent call last) <ipython-input-227-ab21171eee50> in <module>() ----> 1 cur.execute("ALTER TABLE content_detail ADD category=%s VARCHAR(255)" % result["categories"])/usr/lib/python2.7/dist-packages/mysql/connector/cursor.pyc in execute(self, operation, params, multi) 505 self._executed = stmt 506 try:--> 507 self._handle_result(self._connection.cmd_query(stmt)) 508 except errors.InterfaceError: 509 if self._connection._have_next_result: # pylint: disable=W0212/usr/lib/python2.7/dist-packages/mysql/connector/connection.pyc in cmd_query(self, query) 720 if not isinstance(query, bytes): 721 query = query.encode('utf-8')--> 722 result = self._handle_result(self._send_cmd(ServerCmd.QUERY, query)) 723 724 if self._have_next_result:/usr/lib/python2.7/dist-packages/mysql/connector/connection.pyc in _handle_result(self, packet) 638 return self._handle_eof(packet) 639 elif packet[4] == 255:--> 640 raise errors.get_exception(packet) 641 642 # We have a text result setProgrammingError: 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '=0 corporate1 corporateI think there is something I am missing with datatype, please help me to sort this out, thanks. | You cannot add a column to your table with data in it all in one step. You must use at least two separate statements to perform the DDL first (ALTER TABLE) and the DML second (UPDATE or INSERT ... ON DUPLICATE KEY UPDATE).This means that to add a column with a NOT NULL constraint requires three steps:Add nullable columnPopulate column with values in every rowAdd the NOT NULL constraint to the columnAlternatively, by using a "dummy" default value, you can do it in two steps (just be careful not to leave any "dummy" values floating around, or use values that are meaningful/well-documented):Add column as NOT NULL DEFAULT '' (or use e.g. 0 for numeric types)Populate column with values in every rowYou can optionally alter the table again to remove the DEFAULT value. Personally, I prefer the first method because it doesn't introduce meaningless values into your table and it's more likely to throw an error if the second step has a problem. 
I might go with the second method when a column lends itself to a certain natural DEFAULT value and I plan to keep that in the final table definition. Additionally, you are not parameterizing your query correctly; you should pass the parameter values to the method rather than formatting the string argument inside the method call. In other words:

cursor.execute("Query with %s, %s, ...", iterable_with_values)  # Do this!
cursor.execute("Query with %s, %s, ..." % iterable_with_values)  # NOT this!
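A hedged sketch of the two-step approach with mysql.connector, assuming connection and cursor are already open and that content_detail has an id column matching result["id"] (both are assumptions, since the schema is only partially shown):

# Step 1: DDL - add the nullable column (no row data can be supplied here)
cursor.execute("ALTER TABLE content_detail ADD COLUMN category VARCHAR(255)")

# Step 2: DML - populate it row by row with parameterized values
for _, row in result.iterrows():
    cursor.execute(
        "UPDATE content_detail SET category = %s WHERE id = %s",
        (row["categories"], int(row["id"])),
    )
connection.commit()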
count how often each field point is inside a contour I'm working with 2D geographical data. I have a long list of contour paths. Now I want to determine for every point in my domain inside how many contours it resides (i.e. I want to compute the spatial frequency distribution of the features represented by the contours).To illustrate what I want to do, here's a first very naive implementation:import numpy as npfrom shapely.geometry import Polygon, Pointdef comp_frequency(paths,lonlat): """ - paths: list of contour paths, made up of (lon,lat) tuples - lonlat: array containing the lon/lat coordinates; shape (nx,ny,2) """ frequency = np.zeros(lonlat.shape[:2]) contours = [Polygon(path) for path in paths] # Very naive and accordingly slow implementation for (i,j),v in np.ndenumerate(frequency): pt = Point(lonlat[i,j,:]) for contour in contours: if contour.contains(pt): frequency[i,j] += 1 return frequencylon = np.array([ [-1.10e+1,-7.82+0,-4.52+0,-1.18+0, 2.19e+0,5.59e+0,9.01+0,1.24+1,1.58+1,1.92+1,2.26+1], [-1.20e+1,-8.65+0,-5.21+0,-1.71+0, 1.81e+0,5.38e+0,8.97+0,1.25+1,1.61+1,1.96+1,2.32+1], [-1.30e+1,-9.53+0,-5.94+0,-2.29+0, 1.41e+0,5.15e+0,8.91+0,1.26+1,1.64+1,2.01+1,2.38+1], [-1.41e+1,-1.04+1,-6.74+0,-2.91+0, 9.76e-1,4.90e+0,8.86+0,1.28+1,1.67+1,2.06+1,2.45+1], [-1.53e+1,-1.15+1,-7.60+0,-3.58+0, 4.98e-1,4.63e+0,8.80+0,1.29+1,1.71+1,2.12+1,2.53+1], [-1.66e+1,-1.26+1,-8.55+0,-4.33+0,-3.00e-2,4.33e+0,8.73+0,1.31+1,1.75+1,2.18+1,2.61+1], [-1.81e+1,-1.39+1,-9.60+0,-5.16+0,-6.20e-1,3.99e+0,8.66+0,1.33+1,1.79+1,2.25+1,2.70+1], [-1.97e+1,-1.53+1,-1.07+1,-6.10+0,-1.28e+0,3.61e+0,8.57+0,1.35+1,1.84+1,2.33+1,2.81+1], [-2.14e+1,-1.69+1,-1.21+1,-7.16+0,-2.05e+0,3.17e+0,8.47+0,1.37+1,1.90+1,2.42+1,2.93+1], [-2.35e+1,-1.87+1,-1.36+1,-8.40+0,-2.94e+0,2.66e+0,8.36+0,1.40+1,1.97+1,2.52+1,3.06+1], [-2.58e+1,-2.08+1,-1.54+1,-9.86+0,-3.99e+0,2.05e+0,8.22+0,1.44+1,2.05+1,2.65+1,3.22+1]])lat = np.array([ [ 29.6, 30.3, 30.9, 31.4, 31.7, 32.0, 32.1, 32.1, 31.9, 31.6, 31.2], [ 32.4, 33.2, 33.8, 34.4, 34.7, 35.0, 35.1, 35.1, 34.9, 34.6, 34.2], [ 35.3, 36.1, 36.8, 37.3, 37.7, 38.0, 38.1, 38.1, 37.9, 37.6, 37.1], [ 38.2, 39.0, 39.7, 40.3, 40.7, 41.0, 41.1, 41.1, 40.9, 40.5, 40.1], [ 41.0, 41.9, 42.6, 43.2, 43.7, 44.0, 44.1, 44.0, 43.9, 43.5, 43.0], [ 43.9, 44.8, 45.6, 46.2, 46.7, 47.0, 47.1, 47.0, 46.8, 46.5, 45.9], [ 46.7, 47.7, 48.5, 49.1, 49.6, 49.9, 50.1, 50.0, 49.8, 49.4, 48.9], [ 49.5, 50.5, 51.4, 52.1, 52.6, 52.9, 53.1, 53.0, 52.8, 52.4, 51.8], [ 52.3, 53.4, 54.3, 55.0, 55.6, 55.9, 56.1, 56.0, 55.8, 55.3, 54.7], [ 55.0, 56.2, 57.1, 57.9, 58.5, 58.9, 59.1, 59.0, 58.8, 58.3, 57.6], [ 57.7, 59.0, 60.0, 60.8, 61.5, 61.9, 62.1, 62.0, 61.7, 61.2, 60.5]])lonlat = np.dstack((lon,lat))paths = [ [(-1.71,34.4),(1.81,34.7),(5.15,38.0),(4.9,41.0),(4.63,44.0),(-0.03,46.7),(-4.33,46.2),(-9.6,48.5),(-8.55,45.6),(-3.58,43.2),(-2.91,40.3),(-2.29,37.3),(-1.71,34.4)], [(0.976,40.7),(-4.33,46.2),(-0.62,49.6),(3.99,49.9),(4.33,47.0),(4.63,44.0),(0.976,40.7)], [(2.9,55.8),(2.37,56.0),(8.47,56.1),(3.17,55.9),(-2.05,55.6),(-1.28,52.6),(-0.62,49.6),(4.33,47.0),(8.8,44.1),(2.29,44.0),(2.71,43.9),(3.18,46.5),(3.25,49.4),(3.33,52.4),(2.9,55.8)], [(2.25,35.1),(2.26,38.1),(8.86,41.1),(5.15,38.0),(5.38,35.0),(9.01,32.1),(2.25,35.1)]]frequency = comp_frequency(paths,lonlat)Of course this is about as inefficiently written as possible, with all the explicit loops, and accordingly takes forever.How can I do this efficiently?Edit: Added some sample data on request. 
Note that my real domain is 150**2 larger (in terms of resolution), as I've created the sample coordinates by slicing the original arrays: lon[::150]. | If your input polygons are actually contours, then you're better off working directly with your input grids than calculating contours and testing whether a point is inside them. Contours follow a constant value of gridded data. Each contour is a polygon enclosing areas of the input grid greater than that value. If you need to know how many contours a given point is inside, it's faster to sample the input grid at the point's location and operate on the returned "z" value. The number of contours that it's inside can be extracted directly from it if you know what values you created contours at. For example:

import numpy as np
from scipy.interpolate import RegularGridInterpolator
import matplotlib.pyplot as plt

# One of your input gridded datasets
y, x = np.mgrid[-5:5:100j, -5:5:100j]
z = np.sin(np.hypot(x, y)) + np.hypot(x, y) / 10
contour_values = [-1, -0.5, 0, 0.5, 1, 1.5, 2]

# A point location...
x0, y0 = np.random.normal(0, 2, 2)

# Visualize what's happening...
fig, ax = plt.subplots()
cont = ax.contourf(x, y, z, contour_values, cmap='gist_earth')
ax.plot([x0], [y0], marker='o', ls='none', color='salmon', ms=12)
fig.colorbar(cont)

# Instead of working with whether or not the point intersects the
# contour polygons we generated, we'll turn the problem on its head:
# Sample the grid at the point location
interp = RegularGridInterpolator((x[0,:], y[:,0]), z)
z0 = interp([x0, y0])

# How many contours would the point be inside?
num_inside = sum(z0 > c for c in contour_values)[0]
ax.set(title='Point is inside {} contours'.format(num_inside))
plt.show()
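If only the contour paths themselves are available (as in the question) and the underlying gridded field is not, the original point-in-polygon counting can still be sped up considerably by testing all grid points against each path at once. A sketch using matplotlib's Path; relying on matplotlib here is an assumption, not part of either the question or the answer:

import numpy as np
from matplotlib.path import Path

def comp_frequency_vectorized(paths, lonlat):
    """Count, for every grid point, how many contour paths contain it."""
    nx, ny = lonlat.shape[:2]
    points = lonlat.reshape(-1, 2)          # (nx*ny, 2) lon/lat pairs
    frequency = np.zeros(points.shape[0])
    for path in paths:
        # One vectorized containment test per contour instead of per point
        frequency += Path(path).contains_points(points)
    return frequency.reshape(nx, ny)

frequency = comp_frequency_vectorized(paths, lonlat)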
Sending raw bytes over ZeroMQ in Python I'm porting some Python code that uses raw TCP sockets to ZeroMQ for better stability and a cleaner interface.Right off the bat I can see that a single packet of raw bytes is not sent as I'm expecting.In raw sockets:import socketsock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)sock.connect((HOST, PORT))sock.send('\x00\x01\x02 and some more raw bytes')Which is the current working code. This is the same code using ZeroMQ:import zmqcontext = zmq.Context()sock = context.socket(zmq.REQ) # this connection utilizes REQ/REPsock.connect('tcp://{0}:{1}'.format(HOST, PORT))sock.send('\x00\x01\x02 and some more raw bytes')But when I inspect the packets going over the net, they're definitely not what I'm expecting. What am I missing here?Also, when testing this code on the loopback interface (127.0.0.1) with a dummy server it seems to work just fine.Using Python 2.7 if it matters (unicode or whatnot). | Oh. Wow. I overlooked a major flaw in my test, the remote server I was testing on was expecting a raw TCP connection, not a ZMQ connection.Of course ZMQ wasn't able to transfer the message, it didn't even negotiate the connection successfully. When I tested locally I was testing with a dummy ZMQ server, so it worked fine.If I'd have posted the server code it would have immediately made sense that that was the problem.In any case, sorry for the false alarm. |
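For completeness, a minimal sketch of the ZeroMQ-speaking counterpart the client above expects; the port is an assumed value, and a REP socket must answer every request before it can receive the next one:

import zmq

PORT = 5555   # assumed; must match the client's PORT

context = zmq.Context()
sock = context.socket(zmq.REP)
sock.bind('tcp://*:{0}'.format(PORT))

while True:
    raw = sock.recv()    # the b'\x00\x01\x02...' payload arrives unmodified
    sock.send(b'ok')     # reply so the REQ client can continue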
How to filter for specific objects in a HDF5 file Learning the ILNumerics HDF5 API. I really like the option to setup a complex HDF5 file in one expression using C# object initializers. I created the following file: using (var f = new H5File("myFile.h5")) { f.Add(new H5Group("myTopNode") { new H5Dataset("dsNo1", ILMath.vec<float>(1,200)), // no attributes new H5Group("myGroup") { new H5Dataset("dsYes", ILMath.rand(100,200)) { // matching dataset Attributes = { { "att1", 1 }, { "att2", 2 } } }, new H5Dataset("dsNo2") { // attributes but wrong name Attributes = { { "wrong1", -100 }, { "wrong2", -200 } } } } });}Now I am searching for a clever way to iterate over the file and filter for datasets with specific properties. I want to find all datasets having at least one attribute with "att" in its name, collect and return their content. This is what I made so far: IList<ILArray<double>> list = new List<ILArray<double>>();using (var f = new H5File("myFile.h5")) { var groups = f.Groups; foreach (var g in groups) { foreach (var obj in g) { if (obj.H5Type == H5ObjectTypes.Dataset && obj.Name.Contains("ds")) { var ds = obj as H5Dataset; // look for attributes foreach (var att in ds.Attributes) { //ds.Attributes["att"]. if (att.Name.Contains("att")) { list.Add(ds.Get<double>()); } } } } }}return list; But it does not work recursively. I could adopt it but ILNumerics claims to be convenient so there must be some better way? Something similar to h5py in python? | H5Group provides the Find<T> method which does just what you are looking for. It iterates over the whole subtree, taking arbitrary predicates into account: var matches = f.Find<H5Dataset>( predicate: ds => ds.Attributes.Any(a => a.Name.Contains("att")));Why not make your function return 'ILCell' instead of a 'List'? This more nicely integrates into the ILNumerics memory management (there will be no storage laying around and waiting for the garbage collector to come by): using (var f = new H5File("myFile.h5")) { // create container for the dataset contents ILCell c = cell(size(1, 1)); // one element init // retrieve datasets filtered var matches = f.Find<H5Dataset>(predicate: ds => { if (ds.Attributes.Any(a => a.Name.Contains("att"))) { c[end + 1] = ds.Get<double>(); return true; } return false; }); return c; }Some links: http://ilnumerics.net/hdf5-interface.htmlhttp://ilnumerics.net/Cells.html http://ilnumerics.net/GeneralRules.html |
Total/Average/Changing Salary 1,2,3,4 Menu Change your program so there is a main menu for the manager to select from with four options: Print the total weekly salaries bill.Print the average salary.Change a player’s salary.QuitWhen I run the program, I enter the number 1 and the program stops. How do I link it to the 4 below programs?Program:Chelsea_Salaries_2014 = {'Jose Mourinho':[53, 163500, 'Unknown']}Chelsea_Salaries_2014['Eden Hazard']=[22, 185000, 'June 2017']Chelsea_Salaries_2014['Fernando Torres']=[29, 175000, 'June 2016']Chelsea_Salaries_2014['John Terry']=[32, 175000, 'June 2015']Chelsea_Salaries_2014['Frank Lampard']=[35, 125000, 'June 2014']Chelsea_Salaries_2014['Ashley Cole']=[32, 120000, 'June 2014']Chelsea_Salaries_2014['Petr Cech']=[31, 100000, 'June 2016']Chelsea_Salaries_2014['Gary Cahill']=[27, 80000, 'June 2017']Chelsea_Salaries_2014['David Luiz']=[26, 75000, 'June 2017']Chelsea_Salaries_2014['John Obi Mikel']=[26, 75000, 'June 2017']Chelsea_Salaries_2014['Nemanja Matic']=[25, 75000, 'June 2019']Chelsea_Salaries_2014['Marco Van Ginkel']=[20, 30000, 'June 2018']Chelsea_Salaries_2014['Ramires']=[26, 60000, 'June 2017']Chelsea_Salaries_2014['Oscar']=[21, 67500, 'June 2017']Chelsea_Salaries_2014['Lucas Piazon']=[19, 15000, 'June 2017']Chelsea_Salaries_2014['Ryan Bertrand']=[23, 35000, 'June 2017']Chelsea_Salaries_2014['Marko Marin']=[27, 35000, 'June 2017']Chelsea_Salaries_2014['Cesar Azpilicueta']=[23, 55000, 'June 2017']Chelsea_Salaries_2014['Branislav Ivanovic']=[29, 67500, 'June 2016']Chelsea_Salaries_2014['Ross Turnbull']=[22, 17000, 'June 2017']Chelsea_Salaries_2014['Demba Ba']=[28, 65000, 'June 2016']Chelsea_Salaries_2014['Oriol Romeu']=[22, 15000, 'June 2015']user_input = (int('Welcome! What would you like to do? 1: Print the total salaries bill. 2: Print the average salary. 3: Change a players salary. 4: Quit. '))if user_input == 1: print(sum(i[1] for i in Chelsea_Salaries_2014.values()))else: if user_input == 2: print(sum(i[1] for i in Chelsea_Salaries_2014.values()))/len(Chelsea_Salaries_2014) else: if user_input == 3: def change_salary(Chelsea_Salaries_2014): search_input = input('What player would you like to search for? ') print('His Current Salary is £{0:,}'.format(Chelsea_Salaries_2014[search_input][1])) new_salary = int(input('What would you like to change his salary to? ')) if new_salary <= 200000: Chelsea_Salaries_2014[search_input][1] = new_salary print('Salary has been changed to £{0:,}'.format(new_salary)) else: print('This salary is ridiculous!') while True: change_salary(Chelsea_Salaries_2014) choice = input("Go again? y/n ") if choice.lower() in ('n', 'no'): break else: if user_input == 4: print('Goodbye!') | Put the raw input in a while. while True: user_input = raw_input("Welcome!...") if user_input == 1: ... elif user_unput == 2: ... else: print "this salary is ridic..."After completing a 1,2,3... input ask the user if they would like to do something else y/n, if n: break, this will end the loop. If y, the loop begins again and asks for another user input. |
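A hedged sketch of that loop adapted to the question's Python 3 code (note that input() returns a string, so compare against '1'...'4' or convert with int() first, otherwise a comparison with a number never matches); it reuses Chelsea_Salaries_2014 and change_salary exactly as defined in the question:

def total_salaries(salaries):
    return sum(v[1] for v in salaries.values())

def average_salary(salaries):
    return total_salaries(salaries) / len(salaries)

while True:
    choice = input('Welcome! 1: total salaries bill  2: average salary  3: change a salary  4: quit ')
    if choice == '1':
        print('£{0:,}'.format(total_salaries(Chelsea_Salaries_2014)))
    elif choice == '2':
        print('£{0:,.2f}'.format(average_salary(Chelsea_Salaries_2014)))
    elif choice == '3':
        change_salary(Chelsea_Salaries_2014)   # defined in the question
    elif choice == '4':
        print('Goodbye!')
        break
    else:
        print('Please enter 1, 2, 3 or 4.')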
How to get python's json module to cope with right quotation marks? I am trying to load a utf-8 encoded json file using python's json module. The file contains several right quotation marks, encoded as E2 80 9D. When I calljson.load(f, encoding='utf-8')I receive the message:UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 212068: character maps to How can I convince the json module to decode this properly?EDIT: Here's a minimal example:[ { "aQuote": "“A quote”" }] | There is no encoding in the signature of json.load. The solution should be simply:with open(filename, encoding='utf-8') as f: x = json.load(f) |
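A short round-trip check, assuming the sample above has been saved as quotes.json:

import json

with open('quotes.json', encoding='utf-8') as f:
    data = json.load(f)

print(data[0]['aQuote'])   # “A quote”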
How do I get pending windows updates in python? I am trying to get pending windows updates on python but no module returns me the pending windows updates, only windows update history, I don't need especifiation about the update I just need to know if there are pending updates or not, I'm trying to use this code:from windows_tools.updates import get_windows_updatesimport osfor update in get_windows_updates(filter_duplicates=True, include_all_states=False): print(update)It returns: {'kb': None, 'date': '2022-01-14 20:18:21', 'title': '9PLFNLNT3G5G-AppUp.IntelGraphicsExperience', 'description': '9PLFNLNT3G5G-1152921505694231446', 'supporturl': '', 'operation': 'installation', 'result': 'succeeded'}{'kb': None, 'date': '2022-01-14 20:18:21', 'title': '9NBLGGH3FRZM-Microsoft.VCLibs.140.00', 'description': '9NBLGGH3FRZM-1152921505694106457', 'supporturl': '', 'operation': 'installation', 'result': 'succeeded'}{'kb': None, 'date': '2022-01-14 20:18:21', 'title': '9MW2LKJ0TPJF-Microsoft.NET.Native.Framework.2.2', 'description': '9MW2LKJ0TPJF-1152921505692414645', 'supporturl': '', 'operation': 'installation', 'result': 'succeeded'}{'kb': None, 'date': '2022-01-14 20:18:21', 'title': '9PLL735RFDSM-Microsoft.NET.Native.Runtime.2.2', 'description': '9PLL735RFDSM-1152921505689378154', 'supporturl': '', 'operation': 'installation', 'result': 'succeeded'}{'kb': None, 'date': '2022-01-14 20:18:15', 'title': 'HP Inc. - HIDClass - 2.1.16.30156', 'description': 'HP Inc. HIDClass driver update released in November 2021', 'supporturl': 'http://support.microsoft.com/select/?target=hub', 'operation': 'installation', 'result': 'succeeded'}{'kb': None, 'date': '2022-01-14 20:18:03', 'title': 'Intel Corporation - Bluetooth - 20.100.7.1', 'description': 'Intel Corporation Bluetooth driver update released in July 2020', 'supporturl': 'http://support.microsoft.com/select/?target=hub', 'operation': 'installation', 'result': 'succeeded'}{'kb': None, 'date': '2022-01-14 20:18:01', 'title': 'Intel Corporation - Extension - 12/16/2018 12:00:00 AM - 20.110.1.1', 'description': 'Intel Corporation Extension driver update released in December 2018', 'supporturl': 'http://support.microsoft.com/select/?target=hub', 'operation': 'installation', 'result': 'succeeded'}{'kb': None, 'date': '2022-01-14 20:17:50', 'title': 'Intel Corporation - Display - 27.20.100.8681', 'description': 'Intel Corporation Display driver update released in September 2020', 'supporturl': 'http://support.microsoft.com/select/?target=hub', 'operation': 'installation', 'result': 'succeeded'}{'kb': None, 'date': '2022-01-14 20:15:12', 'title': 'Realtek Semiconductor Corp. - MEDIA - 6.0.8940.1', 'description': 'Realtek Semiconductor Corp. MEDIA driver update released in April 2020', 'supporturl': 'http://support.microsoft.com/select/?target=hub', 'operation': 'installation', 'result': 'succeeded'}{'kb': 'KB4591272', 'date': '2022-01-14 20:13:19', 'title': '2021-11 Atualização do Windows 10 Version 21H2 para sistemas baseados em x64 (KB4591272)', 'description': 'Instale esta atualização para resolver problemas no Windows. Para obter a lista completa dos problemas incluídos nesta atualização, consulte o artigo da Base de Dados de Conhecimento Microsoft associado. 
Talvez seja necessário reiniciar o computador após instalar este item.', 'supporturl': 'http://support.microsoft.com', 'operation': 'installation', 'result': 'succeeded'}{'kb': 'KB5003791', 'date': '2021-10-06 00:00:00', 'title': None, 'description': 'Update', 'supporturl': 'https://support.microsoft.com/help/5003791', 'operation': None, 'result': None}{'kb': 'KB5009636', 'date': '2022-01-20 00:00:00', 'title': None, 'description': 'Update', 'supporturl': None, 'operation': None, 'result': None}{'kb': 'KB5005699', 'date': '2021-10-06 00:00:00', 'title': None, 'description': 'Security Update', 'supporturl': None, 'operation': None, 'result': None}I get all my installed updates and not the pending ones, how can I find the pending ones programmatically.I'm using python 3.10 | There was no solution in python, so I did a vbs script and called from inside my function.the vbs script isSet updateSession = CreateObject("Microsoft.Update.Session")Set updateSearcher = updateSession.CreateupdateSearcher() Set searchResult = updateSearcher.Search("IsInstalled=0 and Type='Software'")If searchResult.Updates.Count <> 0 Then For i = 0 To searchResult.Updates.Count - 1 Set update = searchResult.Updates.Item(i) NextEnd IfMainSub Main() Dim result, fso, fs result = 1 / Cos(25) Set fso = CreateObject("Scripting.FileSystemObject") Set fs = fso.CreateTextFile("output.txt", True) fs.Write searchResult.Updates.Count fs.CloseEnd SubIt gets the number of pending updates, then I called inside my function like thisimport subprocess, time, osclass update_monitor(): def __init__(self): self.output='output.txt' def updates_restando(self): os.system(r'script.vbs') time.sleep(10) with open(self.output,'r') as file: for i in file: if i == '0': print('Não há atualizações disponiveis') return 'Não há atualizações disponiveis' else: print('Existem atualizações pendentes') return 'Existem atualizações pendentes'a = update_monitor()a.updates_restando()this solution worked perfectly fine. |
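Since the VBS only wraps the Windows Update Agent COM interface, the same count can be obtained directly from Python with pywin32, avoiding the subprocess, the sleep and the temporary file; a sketch assuming pywin32 (win32com) is installed:

import win32com.client

session = win32com.client.Dispatch("Microsoft.Update.Session")
searcher = session.CreateUpdateSearcher()
# Same filter as the VBS: updates that are not yet installed
result = searcher.Search("IsInstalled=0 and Type='Software'")

pending = result.Updates.Count
if pending == 0:
    print('Não há atualizações disponiveis')
else:
    print('Existem atualizações pendentes ({0})'.format(pending))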
Python: get values from list of dictionaries I am using python-sudoers to parse a massive load of sudoers files, but this library returns data I don't quite recognise; it looks like a list of dictionaries:

[{'run_as': ['ALL'], 'tags': ['NOPASSWD'], 'command': 'TSM_SSI'}, {'run_as': ['ALL'], 'tags': ['NOPASSWD'], 'command': 'SU_TSMWIN'}, {'run_as': ['ALL'], 'tags': ['NOPASSWD'], 'command': 'SU_TSMUNIX'}, {'run_as': ['ALL'], 'tags': ['NOPASSWD'], 'command': 'SU_TSMLIBMGR'}]

This works, but I need the single values in variables, like extracted_runas = "ALL", and so on...

>>> lst = [{'run_as': ['ALL'], 'tags': ['NOPASSWD'], 'command': 'TSM_SSI'}, {'run_as': ['ALL'], 'tags': ['NOPASSWD'], 'command': 'SU_TSMWIN'}, {'run_as': ['ALL'], 'tags': ['NOPASSWD'], 'command': 'SU_TSMUNIX'}, {'run_as': ['ALL'], 'tags': ['NOPASSWD'], 'command': 'SU_TSMLIBMGR'}]
>>> print(*[val for dic in lst for val in dic.values()], sep='\n')
['ALL']
['NOPASSWD']
TSM_SSI
['ALL']
['NOPASSWD']
SU_TSMWIN
['ALL']
['NOPASSWD']
SU_TSMUNIX
['ALL']
['NOPASSWD']
SU_TSMLIBMGR

| Because each dict repeats the same keys, one way to get them into individual variables is to number them: extracted_run_as_0 = 'ALL', extracted_run_as_1 = 'ALL', etc.

for i, dictionary in enumerate(lst):
    for k, v in dictionary.items():
        v = v[0] if isinstance(v, list) else v
        exec(f"extracted_{k}_{i} = {v!r}")

print(extracted_run_as_0, extracted_tags_0, extracted_run_as_1)  # etc.
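If the goal is simply to work with each entry's values rather than to create dynamically named variables, plain unpacking avoids exec entirely; a sketch under that assumption:

for entry in lst:
    run_as = entry['run_as'][0]    # 'ALL'
    tags = entry['tags'][0]        # 'NOPASSWD'
    command = entry['command']     # e.g. 'TSM_SSI'
    print(run_as, tags, command)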