Difference between subprocess.Popen preexec_fn and start_new_session in python

What is the difference between these two options to start a new process with subprocess.Popen for Python 3.2+ under Linux?

```python
proc = subprocess.Popen(args, ..., preexec_fn=os.setsid)     # 1
proc = subprocess.Popen(args, ..., start_new_session=True)   # 2
```

I need this because I need to set the process group ID, so that I can kill this process and all of its children at once. This is then used in case the process run time exceeds a certain threshold:

```python
try:
    out, err = proc.communicate(timeout=time_max)
except subprocess.TimeoutExpired:
    os.killpg(os.getpgid(proc.pid), signal.SIGTERM)
```

I tested my code with both options (#1 & #2) and they both seem to work for me. But I wonder what the best option is here - the one with preexec_fn or the one with start_new_session?
According to the official Python docs:

> The preexec_fn parameter is not safe to use in the presence of threads in your application. The child process could deadlock before exec is called. If you must use it, keep it trivial! Minimize the number of libraries you call into.
>
> If you need to modify the environment for the child use the env parameter rather than doing it in a preexec_fn. The start_new_session parameter can take the place of a previously common use of preexec_fn to call os.setsid() in the child.

So I guess the answer to your question is that start_new_session was introduced to replace the common operation of using preexec_fn to set the session id through os.setsid(), which is not thread safe.
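A minimal sketch of the thread-safe variant, reusing the names from the question:

```python
import os
import signal
import subprocess

# start_new_session=True is the thread-safe replacement for preexec_fn=os.setsid
proc = subprocess.Popen(args, start_new_session=True)
try:
    out, err = proc.communicate(timeout=time_max)
except subprocess.TimeoutExpired:
    # kill the whole process group: the process and all of its children
    os.killpg(os.getpgid(proc.pid), signal.SIGTERM)
```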
How can I make my dictionary be able to be indexed by a function in python 3.x

I am trying to make a program that finds out how many integers in a list are not the integer that is represented the most in that list. To do that I have a command which creates a dictionary with every value in the list and the number of times it is represented in it. Next I try to create a new list with all items from the old list except the most represented value, so I can count the length of that list. The problem is that I cannot access the most represented value in the dictionary, as I get an error:

```python
import operator
import collections

a = [7, 155, 12, 155]
dictionary = collections.Counter(a).items()
b = []
for i in a:
    if a != dictionary[max(iter(dictionary), key=operator.itemgetter(1))[0]]:
        b.append(a)
```

I get this error: TypeError: 'dict_items' object does not support indexing
The variable you called dictionary is not a dict but a dict_items:

```python
>>> type(dictionary)
<class 'dict_items'>
>>> help(dict.items)
items(...)
    D.items() -> a set-like object providing a view on D's items
```

and sets are iterable, not indexable:

```python
for di in dictionary:
    print(di)    # is ok

dictionary[0]    # triggers the error you saw
```

Note that Counter is very rich; maybe using Counter.most_common would do the trick.
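For example, a sketch of the counting task from the question using Counter.most_common, which returns (value, count) pairs sorted by count:

```python
import collections

a = [7, 155, 12, 155]
most_common_value, _count = collections.Counter(a).most_common(1)[0]  # (155, 2)
b = [item for item in a if item != most_common_value]
print(len(b))  # 2 -> how many integers are not the most frequent one
```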
Difference between id and equality in python

How is the id of an object computed? https://docs.python.org/3/library/functions.html#id

It seems there is a place in a class to do equality, with __eq__, but where is the is operation done, and how is the id arrived at?
You can think of id(obj) as some sort of address of the object. The way it is computed, and what the value represents, is implementation-dependent, and you should not make any assumptions about the value.

What you need to know:

- An object's id will not change as long as the object exists.
- Two co-existing objects have different ids.
- An object may have the same id as another object which has already been deallocated (since it is gone, its address may be reused).
- a is b is equivalent to id(a) == id(b).
- You cannot override the way id is computed, nor the way is behaves, like you'd override operators such as __eq__.
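A quick demonstration of the difference:

```python
a = [1, 2, 3]
b = [1, 2, 3]

print(a == b)          # True  -- equality, delegated to __eq__
print(a is b)          # False -- identity: two distinct objects
print(id(a) == id(b))  # False -- the same thing as `a is b`
```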
Command line python and jupyter notebooks use two different versions of torch

On my conda environment, importing torch from command-line Python and from a Jupyter notebook yields two different results.

Command line Python:

```
$ source activate GNN
(GNN) $ python
>>> import torch
>>> print(torch.__file__)
/home/riccardo/.local/lib/python3.7/site-packages/torch/__init__.py
>>> print(torch.__version__)
0.4.1
```

Jupyter:

```
(GNN) $ jupyter notebook --no-browser --port=8890

import torch
print(torch.__file__)
/home/riccardo/.local/lib/python3.6/site-packages/torch/__init__.py
print(torch.__version__)
1.2.0+cu92
```

I tried the steps suggested in Conda environments not showing up in Jupyter Notebook:

```
$ conda install ipykernel
$ source activate GNN
(GNN) $ python -m ipykernel install --user --name GNN --display-name "Python (GNN)"
Installed kernelspec GNN in /home/riccardo/.local/share/jupyter/kernels/gnn
```

but that did not solve the problem.
You need to make the Anaconda environment recognized in Jupyter:

```
conda activate myenv
conda install -n myenv ipykernel
python -m ipykernel install --user --name myenv --display-name "Python (myenv)"
```

Replace myenv with the name of your environment. Later on, in your Jupyter notebook, in the Select Kernel option, you will see this Python (myenv) option.
Is there a way to use the secrets python module with a seed?

random.seed() is less secure than secrets, but I can't find any documentation on using a seed with secrets. Or is random.seed just as fine?
No, there isn't. secrets uses random's SystemRandom class, which reads from the operating system's random device, such as /dev/urandom on Linux. This OS randomness is based on hardware entropy, which is what gives it its security, and there is no way to seed it.
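If you only need reproducibility (e.g. for tests) rather than security, a seeded random.Random instance is the tool; for anything security-sensitive, stick with secrets:

```python
import random
import secrets

rng = random.Random(42)       # reproducible, but NOT cryptographically secure
print(rng.randint(0, 99))     # same value on every run

print(secrets.token_hex(16))  # secure and unseedable by design
```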
Matplotlib stacked bar chart

Hi, I'm fairly new to matplotlib, but I'm trying to plot a stacked bar chart. Instead of stacking, my bars are overlapping one another.

This is the dictionary where I'm storing data:

```python
eventsDict = {'A' : [30.427007371788505, 3.821656050955414], 'B' : [15.308879925288613, 25.477707006369428],
              'C' : [10.846066723627477, 1.910828025477707], 'D' : [0.32586881793073297, 0.6369426751592357],
              'E' : [3.110656307747332, 11.464968152866243], 'F' : [8.183480040534901, 1.910828025477707],
              'G' : [3.048065650644783, 16.560509554140125], 'H' : [9.950920976811652, 4.45859872611465]}
```

My stacked bar graph has two bars. The first one contains all the data from the first value of the list and the second one contains all the second values from the list (the list being the values in the dictionary).

First, I convert the dictionary to a list of tuples:

```python
allEvents = list(self.eventsDict.items())
```

This turns the dictionary into this list:

```python
allEvents = [('A', [30.427007371788505, 3.821656050955414]), ('B', [15.308879925288613, 25.477707006369428]),
             ('C', [10.846066723627477, 1.910828025477707]), ('D', [0.32586881793073297, 0.6369426751592357]),
             ('E', [3.110656307747332, 11.464968152866243]), ('F', [8.183480040534901, 1.910828025477707]),
             ('G', [3.048065650644783, 16.560509554140125]), ('H', [9.950920976811652, 4.45859872611465])]
```

This is where I plot it:

```python
range_vals = np.linspace(0, 2, 3)
mid_vals = (range_vals[0:-1] + range_vals[1:]) * 0.5
colors = ['#DC7633', '#F4D03F', '#52BE80', '#3498DB', '#9B59B6', '#C0392B', '#2471A3', '#566573', '#95A5A6']
x_label = ['All events. %s total events' % (totalEvents),
           'Corrected p-value threshold p < %s. %s total events' % (self.pVal, totalAdjusted)]

# Turn the dict to a list of tuples. That way it is ordered and is subscriptable.
allEvents = list(self.mod_eventsDict.items())
# print(allEvents)
# Use below to index:
# list[x]      key - value pairing
# list[x][0]   event name (key)
# list[x][1]   list of values [val 1 (all), val 2 (adjusted)]

# Plot the top bar first
plt.bar(mid_vals, allEvents[0][1], color=colors[0], label=allEvents[0][0])

# Plot the rest
x = 1
for x in range(1, 20):
    try:
        plt.bar(mid_vals, allEvents[x-1][1], bottom=allEvents[x-1][1],
                color=colors[x], label=allEvents[x][0])
        x = x + 1
    except IndexError:
        continue

plt.xticks(mid_vals)            # for classic style
plt.xticks(mid_vals, x_label)   # for classic style
plt.xlabel('values')
plt.ylabel('Count/Fraction')
plt.title('Stacked Bar chart')
plt.legend()
plt.axis([0, 2.5, 0, 1])
plt.show()
```

Ideally, the segments should all add up to 1 when stacked; I made them all fractions of one whole so that both bars would have the same height. However, they just overlap each other. Also, note that the stacks have a different label from their names in the dictionary. Please help me debug!
You'll need to set bottom differently - this tells matplotlib where to place the bottom of the bar you're plotting, so it needs to be the sum of the heights of all the bars that came before.

You could, for example, track the running height of each bar position like so:

```python
current_heights = np.zeros(len(mid_vals))  # one running total per bar position
for x in range(len(allEvents)):
    plt.bar(mid_vals, allEvents[x][1], bottom=current_heights,
            color=colors[x % len(colors)], label=allEvents[x][0])
    current_heights += np.array(allEvents[x][1])  # increment bar heights after plotting
```
Muscle Multiple Sequence Alignment with Biopython?

I just learned to use Python (and Biopython), so this question may bespeak my inexperience. In order to carry out MSA of sequences in a file (FileA.fasta), I use the following code:

```python
from Bio.Align.Applications import MuscleCommandline

inp = 'FileA.fasta'
outp = 'FileB.fasta'
cline = MuscleCommandline(input=inp, out=outp)
cline()
```

I get the following error:

```
ApplicationError: ... Non-zero return code 127 from 'muscle -in FileA.fasta -out FileB.fasta',
message '/bin/sh: muscle: command not found'
```

I know that this has something to do with the executable not being in my working PATH. The Biopython tutorial suggests that I update the PATH to include the location of Muscle Tools, and it gives an example of this for Windows, but I don't know how to do this for Mac. Please help. Thank you.
First make sure you know where you installed muscle. If, for example, you installed muscle in:

```
/usr/bin/muscle3.8.31_i86darwin64
```

then edit /etc/paths with:

```
$ sudo vi /etc/paths
```

Each entry is separated by line breaks:

```
/usr/local/bin
/bin
/usr/sbin
/sbin
```

Add the appropriate path (in this example /usr/bin) to the list. Save with wq!

Now, make sure muscle is on your PATH. Try to run:

```
muscle -in FileA.fasta -out FileB.fasta
```

If that works, the Biopython code should work as well.
Two dimensional FFT using python results in slightly shifted frequency

I know there have been several questions about using the Fast Fourier Transform (FFT) method in python, but unfortunately none of them could help me with my problem: I want to use python to calculate the Fast Fourier Transform of a given two-dimensional signal f, i.e. f(x,y). Python's documentation helps a lot, solving a few issues which the FFT brings with it, but I still end up with a slightly shifted frequency compared to the frequency I expect it to show. Here is my python code:

```python
from scipy.fftpack import fft, fftfreq, fftshift
import matplotlib.pyplot as plt
import numpy as np
import math

fq = 3.0   # frequency of signal to be sampled
N = 100.0  # number of sample points within interval, on which signal is considered
x = np.linspace(0, 2.0 * np.pi, N)  # creating equally spaced vector from 0 to 2pi, with spacing 2pi/N
y = x
xx, yy = np.meshgrid(x, y)  # create 2D meshgrid
fnc = np.sin(2 * np.pi * fq * xx)  # create a signal, which is simply a sine function with frequency fq = 3.0, modulating the x(!) direction

ft = np.fft.fft2(fnc)  # calculating the fft coefficients

dx = x[1] - x[0]  # spacing in x (and also y) direction (real space)
sampleFrequency = 2.0 * np.pi / dx
nyquisitFrequency = sampleFrequency / 2.0

freq_x = np.fft.fftfreq(ft.shape[0], d=dx)  # return the DFT sample frequencies
freq_y = np.fft.fftfreq(ft.shape[1], d=dx)

freq_x = np.fft.fftshift(freq_x)  # order sample frequencies, such that 0-th frequency is at center of spectrum
freq_y = np.fft.fftshift(freq_y)

half = len(ft) / 2 + 1  # calculate half of spectrum length, in order to only show positive frequencies

plt.imshow(
    2 * abs(ft[:half, :half]) / half,
    aspect='auto',
    extent=(0, freq_x.max(), 0, freq_y.max()),
    origin='lower',
    interpolation='nearest',
)
plt.grid()
plt.colorbar()
plt.show()
```

In the resulting plot, the frequency in the x direction is not exactly at fq = 3, but slightly shifted to the left. Why is this? I would assume it has to do with the fact that FFT is an algorithm using symmetry arguments, and half = len(ft) / 2 + 1 is used to show the frequencies at the proper place. But I don't quite understand what the exact problem is and how to fix it.

Edit: I have also tried using a higher sampling frequency (N = 10000.0), which did not solve the issue, but instead shifted the frequency slightly too far to the right. So I am pretty sure that the problem is not the sampling frequency.

Note: I'm aware of the fact that the leakage effect leads to unphysical amplitudes here, but in this post I am primarily interested in the correct frequencies.
I found a number of issues:

- You use 2 * np.pi twice; you should choose either the linspace or the argument to sine as radians if you want a nice integer number of cycles.
- Additionally, np.linspace defaults to endpoint=True, giving you an extra point: 101 instead of 100.

```python
fq = 3.0  # frequency of signal to be sampled
N = 100   # number of sample points within interval, on which signal is considered
x = np.linspace(0, 1, N, endpoint=False)  # creating equally spaced vector from 0 to 1
y = x
xx, yy = np.meshgrid(x, y)  # create 2D meshgrid
fnc = np.sin(2 * np.pi * fq * xx)  # a sine function with frequency fq = 3.0, modulating the x(!) direction
```

You can check these issues:

```python
len(x)
Out[228]: 100

plt.plot(fnc[0])
```

Fixing the linspace endpoint now means you have an even number of fft bins, so you drop the + 1 in the half calculation.

matshow() appears to have better defaults; your extent=(0, freq_x.max(), 0, freq_y.max()) in imshow appears to fubar the fft bin numbering.

```python
from scipy.fftpack import fft, fftfreq, fftshift
import matplotlib.pyplot as plt
import numpy as np
import math

fq = 3.0  # frequency of signal to be sampled
N = 100   # number of sample points within interval, on which signal is considered
x = np.linspace(0, 1, N, endpoint=False)
y = x
xx, yy = np.meshgrid(x, y)
fnc = np.sin(2 * np.pi * fq * xx)

plt.plot(fnc[0])

ft = np.fft.fft2(fnc)  # calculating the fft coefficients

#dx = x[1] - x[0]  # spacing in x (and also y) direction (real space)
#sampleFrequency = 2.0 * np.pi / dx
#nyquisitFrequency = sampleFrequency / 2.0
#
#freq_x = np.fft.fftfreq(ft.shape[0], d=dx)  # return the DFT sample frequencies
#freq_y = np.fft.fftfreq(ft.shape[1], d=dx)
#
#freq_x = np.fft.fftshift(freq_x)  # order sample frequencies, such that 0-th frequency is at center of spectrum
#freq_y = np.fft.fftshift(freq_y)

half = len(ft) // 2  # calculate half of spectrum length, in order to only show positive frequencies

plt.matshow(
    2 * abs(ft[:half, :half]) / half,
    aspect='auto',
    origin='lower'
)
plt.grid()
plt.colorbar()
plt.show()
```

Zoomed in, the peak now sits at the expected frequency.
Monkey patching Python Property setters (and getters?)

So, monkey patching is pretty awesome, but what if I want to monkey patch a @property? For example, to monkey patch a method:

```python
def new_method():
    print('do stuff')

SomeClass.some_method = new_method
```

however, properties in python re-write the = sign. Quick example: let's say I want to modify x to be 4. How would I go about doing that?

```python
class MyClass(object):
    def __init__(self):
        self.__x = 3

    @property
    def x(self):
        return self.__x

    @x.setter
    def x(self, value):
        if value != 3:
            print('Nice try')
        else:
            self.__x = value

foo = MyClass()
foo.x = 4
print(foo.x)
foo.__x = 4
print(foo.x)
```

Output:

```
Nice try
3
3
```
Using _ClassName__attribute, you can access the attribute:

```python
>>> class MyClass(object):
...     def __init__(self):
...         self.__x = 3
...     @property
...     def x(self):
...         return self.__x
...     @x.setter
...     def x(self, value):
...         if value != 3:
...             print('Nice try')
...         else:
...             self.__x = value
...
>>> foo = MyClass()
>>> foo._MyClass__x = 4
>>> foo.x
4
```

See Private Variables and Class-local References - Python tutorial, especially the parts that mention name mangling.
Path Finder code, __getitem__ TypeError

I am trying to make a "path finder":

```python
def find_all_paths(start, end, graph, path=[]):
    path = path + [start]
    if start == end:
        return [path]
    paths = []
    for node in graph[start]:
        if node not in path:
            newpaths = find_all_paths(graph, node, end, path)
            for newpath in newpaths:
                paths.append(newpath)
    return paths

graph = {1: ['2'], 2: ['3', '4', '5'], 3: ['4'], 4: ['5', '6'], 5: [], 6: []}
```

If I enter find_all_paths(2, 5, graph) in the shell, I should get back all the paths that go from key 2 in the graph dictionary to the value 5. A proper result would be something like:

```python
path = [[2, 5], [2, 3, 4, 5], [2, 4, 5]]
```

The code keeps giving errors such as:

```
for node in graph[start]:
TypeError: 'int' object has no attribute '__getitem__'
```

Could someone please help me get this thing running?
There are several awkwardnesses and errors:

- Instead of initialising the path parameter with a list, use None, and create the empty list in the function body:

```python
def find_all_paths(start, end, graph, path=None):
    path = path or []
```

- The values passed to the recursive find_all_paths call don't respect the signature. Write this instead:

```python
newpaths = find_all_paths(node, end, graph, path)
```

- Since the values are integers, the graph must contain ints instead of strings:

```python
graph = {1: [2], 2: [3, 4, 5], 3: [4], 4: [5, 6], 5: [], 6: []}
```

Here is the fixed version of your code (note that path is extended with a copy, not mutated in place, so sibling branches don't share the same list):

```python
def find_all_paths(start, end, graph, path=None):
    path = (path or []) + [start]
    if start == end:
        return [path]
    paths = []
    for node in graph[start]:
        if node not in path:
            newpaths = find_all_paths(node, end, graph, path)
            for newpath in newpaths:
                paths.append(newpath)
    return paths
```

If you try this:

```python
graph = {1: [2], 2: [3, 4, 5], 3: [4], 4: [5, 6], 5: [], 6: []}
print(find_all_paths(2, 5, graph))
```

You'll get:

```
[[2, 3, 4, 5], [2, 4, 5], [2, 5]]
```
Loading SQLite3 values into Python variables

I have a project built in Python 2.7, using SQLite3 as the database. I need to know how to load an item from a particular row and column into an existing Python variable. TY!
Here are the basic steps:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
curs = conn.cursor()
results = curs.execute(
    """SELECT mycol
         FROM mytable
        WHERE somecol = ?;""",
    (some_var,)
).fetchall()
curs.close()
conn.close()
```

For further research you can look into using a context manager (with statement), and how to fetch results into a dict. Here's an example:

```python
with sqlite3.connect(':memory:') as conn:
    curs = conn.cursor()
    curs.row_factory = sqlite3.Row
    try:
        results = curs.execute(
            """SELECT mycol
                 FROM mytable
                WHERE somecol = ?;""",
            (some_var,)
        ).fetchall()
    # you would put your exception-handling code here
    finally:
        curs.close()
```

The benefits of the context manager are many, including automatic transaction handling (note that sqlite3's connection context manager commits or rolls back for you, but does not close the connection). The benefit of mapping the results into a dict is that you can access column values by name, as opposed to a less-meaningful integer.
I am getting pip version upgrade message while installing pygmaps

I am getting this error while installing the pygmaps package in PyCharm:

```
Could not find a version that satisfies the requirement pygmaps (from versions: )
No matching distribution found for pygmaps
You are using pip version 10.0.1, however version 19.1.1 is available.
You should consider upgrading via the 'python -m pip install --upgrade pip' command.
```

I have already upgraded pip to version 19.1.1, but it still shows this error.
Can you try this one?

```
pip install git+https://github.com/thearn/pygmaps-extended
```

If it doesn't work, go to https://code.google.com/archive/p/pygmaps/downloads, download it manually, and add it to your site-packages.
Where is spark/pyspark saving my parquet files?

I'm saving a dataframe in pyspark to a particular location, but cannot see the file/files in the directory. Where are they? How do I get to them outside of pyspark? And how do I delete them? And what is it that I am missing about how spark works?

Here's how I save them:

```python
df.write.format('parquet').mode('overwrite').save('path/to/filename')
```

And subsequently the following works:

```python
df_ntf = spark.read.format('parquet').load('path/to/filename')
```

But no files ever appear in path/to/filename. This is on a Cloudera cluster; let me know if any other details are needed to diagnose the problem.

EDIT - This is the command I use to set up my spark contexts:

```python
os.environ['SPARK_HOME'] = "/opt/cloudera/parcels/Anaconda/../SPARK2/lib/spark2/"
os.environ['PYSPARK_PYTHON'] = "/opt/cloudera/parcels/Anaconda/envs/python3/bin/python"

conf = SparkConf()
conf.setAll([('spark.executor.memory', '3g'),
             ('spark.executor.cores', '3'),
             ('spark.num.executors', '29'),
             ('spark.cores.max', '4'),
             ('spark.driver.memory', '2g'),
             ('spark.pyspark.python', '/opt/cloudera/parcels/Anaconda/envs/python3/bin/python'),
             ('spark.dynamicAllocation.enabled', 'false'),
             ('spark.sql.execution.arrow.enabled', 'true'),
             ('spark.sql.crossJoin.enabled', 'true')
             ])

print("Creating Spark Context at {}".format(datetime.now()))
spark_ctx = SparkContext.getOrCreate(conf=conf)
spark = SparkSession(spark_ctx)
hive_ctx = HiveContext(spark_ctx)
sql_ctx = SQLContext(spark_ctx)
```
Ok, a colleague and I have figured it out. It's not complicated, but we are but simple data scientists, so it wasn't obvious to us.

Basically, the files were being saved to HDFS, not to the local filesystem from which we run our queries using Jupyter notebooks. We found them by doing:

```
hdfs dfs -ls -h /user/my.name/path/to
```
How does one fit multiple independent and overlapping Lorentzian peaks in a set of data?

I need to fit several Lorentzian peaks in the same dataset, some of which are overlapping. What I need most from the fit is the peak positions (centers), however I can't seem to fit all the peaks in these data. I first tried using scipy's optimize curve_fit, however I wasn't able to get the bounds to work and it would try to fit the full range of the spectrum. I've been using the python package lmfit with decent results, however I seem to be unable to get the fit to pick out the overlapping peaks well. (The raw spectra with marked peaks, my fitting results, and the data file were linked in the original post.)

```python
import os
import matplotlib.pyplot as plt
import numpy as np
from lmfit.models import LorentzianModel

test = np.loadtxt('filename.txt')

plt.figure()

lz1 = LorentzianModel(prefix='lz1_')
pars = lz1.guess(y, x=x)
pars.update(lz1.make_params())
pars['lz1_center'].set(0.61, min=0.5, max=0.66)
pars['lz1_amplitude'].set(0.028)
pars['lz1_sigma'].set(0.7)

lz2 = LorentzianModel(prefix='lz2_')
pars.update(lz2.make_params())
pars['lz2_center'].set(0.76, min=0.67, max=0.84)
pars['lz2_amplitude'].set(0.083)
pars['lz2_sigma'].set(0.04)

lz3 = LorentzianModel(prefix='lz3_')
pars.update(lz3.make_params())
pars['lz3_center'].set(0.85, min=0.84, max=0.92)
pars['lz3_amplitude'].set(0.048)
pars['lz3_sigma'].set(0.05)

lz4 = LorentzianModel(prefix='lz4_')
pars.update(lz4.make_params())
pars['lz4_center'].set(0.98, min=0.94, max=1.0)
pars['lz4_amplitude'].set(0.028)
pars['lz4_sigma'].set(0.02)

lz5 = LorentzianModel(prefix='lz5_')
pars.update(lz5.make_params())
pars['lz5_center'].set(1.1, min=1.0, max=1.2)
pars['lz5_amplitude'].set(0.037)
pars['lz5_sigma'].set(0.07)

lz6 = LorentzianModel(prefix='lz6_')
pars.update(lz6.make_params())
pars['lz6_center'].set(1.4, min=1.2, max=1.5)
pars['lz6_amplitude'].set(0.048)
pars['lz6_sigma'].set(0.45)

lz7 = LorentzianModel(prefix='lz7_')
pars.update(lz7.make_params())
pars['lz7_center'].set(1.54, min=1.4, max=1.6)
pars['lz7_amplitude'].set(0.037)
pars['lz7_sigma'].set(0.03)

lz8 = LorentzianModel(prefix='lz8_')
pars.update(lz8.make_params())
pars['lz8_center'].set(1.7, min=1.6, max=1.8)
pars['lz8_amplitude'].set(0.04)
pars['lz8_sigma'].set(0.17)

mod = lz1 + lz2 + lz3 + lz4 + lz5 + lz6 + lz7 + lz8

init = mod.eval(pars, x=x)
out = mod.fit(y, pars, x=x)
print(out.fit_report(min_correl=0.5))

plt.scatter(x, y, s=1)
plt.plot(x, init, 'k:')
plt.plot(x, out.best_fit, 'r-')
```
Actually, just adding a quadratic background and lifting the bounds on the centroids should give a decent fit. Using your data, I modified your example a little:

```python
#!/usr/bin/env python
import matplotlib.pyplot as plt
import numpy as np
from lmfit.models import LorentzianModel, QuadraticModel

test = np.loadtxt('spectra.txt')
xdat = test[0, :]
ydat = test[1, :]

def add_peak(prefix, center, amplitude=0.005, sigma=0.05):
    peak = LorentzianModel(prefix=prefix)
    pars = peak.make_params()
    pars[prefix + 'center'].set(center)
    pars[prefix + 'amplitude'].set(amplitude)
    pars[prefix + 'sigma'].set(sigma, min=0)
    return peak, pars

model = QuadraticModel(prefix='bkg_')
params = model.make_params(a=0, b=0, c=0)

rough_peak_positions = (0.61, 0.76, 0.85, 0.99, 1.10, 1.40, 1.54, 1.7)
for i, cen in enumerate(rough_peak_positions):
    peak, pars = add_peak('lz%d_' % (i+1), cen)
    model = model + peak
    params.update(pars)

init = model.eval(params, x=xdat)
result = model.fit(ydat, params, x=xdat)
comps = result.eval_components()

print(result.fit_report(min_correl=0.5))

plt.plot(xdat, ydat, label='data')
plt.plot(xdat, result.best_fit, label='best fit')
for name, comp in comps.items():
    plt.plot(xdat, comp, '--', label=name)
plt.legend(loc='upper right')
plt.show()
```

which prints a report of

```
[[Model]]
    ((((((((Model(parabolic, prefix='bkg_') + Model(lorentzian, prefix='lz1_')) + Model(lorentzian, prefix='lz2_')) + Model(lorentzian, prefix='lz3_')) + Model(lorentzian, prefix='lz4_')) + Model(lorentzian, prefix='lz5_')) + Model(lorentzian, prefix='lz6_')) + Model(lorentzian, prefix='lz7_')) + Model(lorentzian, prefix='lz8_'))
[[Fit Statistics]]
    # fitting method   = leastsq
    # function evals   = 1101
    # data points      = 800
    # variables        = 27
    chi-square         = 7.3824e-04
    reduced chi-square = 9.5504e-07
    Akaike info crit   = -11062.6801
    Bayesian info crit = -10936.1956
[[Variables]]
    bkg_c:          0.03630504 +/- 9.4269e-04 (2.60%) (init = 0)
    bkg_b:         -0.05150031 +/- 0.00272084 (5.28%) (init = 0)
    bkg_a:          0.02285577 +/- 0.00109543 (4.79%) (init = 0)
    lz1_sigma:      0.03853490 +/- 0.00224206 (5.82%) (init = 0.05)
    lz1_center:     0.60596282 +/- 0.00101699 (0.17%) (init = 0.61)
    lz1_amplitude:  0.00121362 +/- 8.0862e-05 (6.66%) (init = 0.005)
    lz1_fwhm:       0.07706979 +/- 0.00448412 (5.82%) == '2.0000000*lz1_sigma'
    lz1_height:     0.01002487 +/- 3.1221e-04 (3.11%) == '0.3183099*lz1_amplitude/max(2.220446049250313e-16, lz1_sigma)'
    lz2_sigma:      0.03534226 +/- 3.5893e-04 (1.02%) (init = 0.05)
    lz2_center:     0.76784323 +/- 1.9002e-04 (0.02%) (init = 0.76)
    lz2_amplitude:  0.00738785 +/- 8.9378e-05 (1.21%) (init = 0.005)
    lz2_fwhm:       0.07068452 +/- 7.1786e-04 (1.02%) == '2.0000000*lz2_sigma'
    lz2_height:     0.06653864 +/- 3.6663e-04 (0.55%) == '0.3183099*lz2_amplitude/max(2.220446049250313e-16, lz2_sigma)'
    lz3_sigma:      0.03948780 +/- 0.00111507 (2.82%) (init = 0.05)
    lz3_center:     0.85427526 +/- 5.4206e-04 (0.06%) (init = 0.85)
    lz3_amplitude:  0.00317016 +/- 1.1244e-04 (3.55%) (init = 0.005)
    lz3_fwhm:       0.07897560 +/- 0.00223015 (2.82%) == '2.0000000*lz3_sigma'
    lz3_height:     0.02555459 +/- 3.9771e-04 (1.56%) == '0.3183099*lz3_amplitude/max(2.220446049250313e-16, lz3_sigma)'
    lz4_sigma:      0.02983045 +/- 0.00283845 (9.52%) (init = 0.05)
    lz4_center:     0.99544342 +/- 0.00142552 (0.14%) (init = 0.99)
    lz4_amplitude:  6.9114e-04 +/- 7.6016e-05 (11.00%) (init = 0.005)
    lz4_fwhm:       0.05966089 +/- 0.00567690 (9.52%) == '2.0000000*lz4_sigma'
    lz4_height:     0.00737492 +/- 3.6918e-04 (5.01%) == '0.3183099*lz4_amplitude/max(2.220446049250313e-16, lz4_sigma)'
    lz5_sigma:      0.06666333 +/- 0.00196152 (2.94%) (init = 0.05)
    lz5_center:     1.10162076 +/- 7.8293e-04 (0.07%) (init = 1.1)
    lz5_amplitude:  0.00522275 +/- 2.2587e-04 (4.32%) (init = 0.005)
    lz5_fwhm:       0.13332666 +/- 0.00392304 (2.94%) == '2.0000000*lz5_sigma'
    lz5_height:     0.02493807 +/- 4.7491e-04 (1.90%) == '0.3183099*lz5_amplitude/max(2.220446049250313e-16, lz5_sigma)'
    lz6_sigma:      0.11712113 +/- 0.00307555 (2.63%) (init = 0.05)
    lz6_center:     1.43220451 +/- 0.00102240 (0.07%) (init = 1.4)
    lz6_amplitude:  0.01215451 +/- 5.1928e-04 (4.27%) (init = 0.005)
    lz6_fwhm:       0.23424227 +/- 0.00615109 (2.63%) == '2.0000000*lz6_sigma'
    lz6_height:     0.03303334 +/- 6.2184e-04 (1.88%) == '0.3183099*lz6_amplitude/max(2.220446049250313e-16, lz6_sigma)'
    lz7_sigma:      0.02603963 +/- 0.00335175 (12.87%) (init = 0.05)
    lz7_center:     1.55545329 +/- 0.00152567 (0.10%) (init = 1.54)
    lz7_amplitude:  4.6978e-04 +/- 7.1036e-05 (15.12%) (init = 0.005)
    lz7_fwhm:       0.05207926 +/- 0.00670351 (12.87%) == '2.0000000*lz7_sigma'
    lz7_height:     0.00574266 +/- 3.8805e-04 (6.76%) == '0.3183099*lz7_amplitude/max(2.220446049250313e-16, lz7_sigma)'
    lz8_sigma:      0.11332337 +/- 0.00336106 (2.97%) (init = 0.05)
    lz8_center:     1.79132485 +/- 0.00117968 (0.07%) (init = 1.7)
    lz8_amplitude:  0.00700579 +/- 3.2606e-04 (4.65%) (init = 0.005)
    lz8_fwhm:       0.22664674 +/- 0.00672212 (2.97%) == '2.0000000*lz8_sigma'
    lz8_height:     0.01967830 +/- 4.2422e-04 (2.16%) == '0.3183099*lz8_amplitude/max(2.220446049250313e-16, lz8_sigma)'
[[Correlations]] (unreported correlations are < 0.500)
    C(bkg_b, bkg_a)             = -0.993
    C(bkg_c, bkg_b)             = -0.981
    C(bkg_c, bkg_a)             =  0.966
    C(lz6_sigma, lz6_amplitude) =  0.963
    C(lz8_sigma, lz8_amplitude) =  0.935
    C(lz5_sigma, lz5_amplitude) =  0.933
    C(bkg_b, lz6_amplitude)     = -0.907
    C(lz3_sigma, lz3_amplitude) =  0.905
    <snip>
```

and shows a plot of the data, the best fit, and the individual components. That may not be perfect, but should give you a pretty good start.
Matplotlib: How to plot true/false or active/deactive data?

I want to plot true/false or active/deactive binary data, similar to the following picture: the horizontal axis is time and the vertical axis is some entities (here, sensors) which are active (white) or deactive (black). How can I plot such graphs using pyplot? I searched to find the name of these graphs but I couldn't find it.
What you are looking for is imshow:

```python
import matplotlib.pyplot as plt
import numpy as np

# get some data with true @ probability 80 %
data = np.random.random((20, 500)) > .2

fig = plt.figure()
ax = fig.add_subplot(111)
ax.imshow(data, aspect='auto', cmap=plt.cm.gray, interpolation='nearest')
```

Then you will just have to get the Y labels from somewhere.

It seems that the image in your question has some interpolation in it. Let us set a few more things:

```python
import matplotlib.pyplot as plt
import numpy as np

# create a bit more realistic-looking data
# - looks complicated, but just has constant switch-off and switch-on
#   probabilities per column
# - the result is a 20 x 500 array of booleans
p_switchon = 0.02
p_switchoff = 0.05
data = np.empty((20, 500), dtype='bool')
data[:, 0] = np.random.random(20) < .2
for c in range(1, 500):
    r = np.random.random(20)
    data[data[:, c-1], c] = (r > p_switchoff)[data[:, c-1]]
    data[~data[:, c-1], c] = (r < p_switchon)[~data[:, c-1]]

# create some labels
labels = ["label_{0:d}".format(i) for i in range(20)]

# this is the real plotting part
fig = plt.figure()
ax = fig.add_subplot(111)
ax.imshow(data, aspect='auto', cmap=plt.cm.gray)
ax.set_yticks(np.arange(len(labels)))
ax.set_yticklabels(labels)
```

However, the interpolation is not necessarily a good thing here. To make the different rows easier to separate, one might use colors:

```python
import matplotlib.pyplot as plt
import matplotlib.colors
import numpy as np

# create a bit more realistic-looking data (same recipe as above)
p_switchon = 0.02
p_switchoff = 0.05
data = np.empty((20, 500), dtype='bool')
data[:, 0] = np.random.random(20) < .2
for c in range(1, 500):
    r = np.random.random(20)
    data[data[:, c-1], c] = (r > p_switchoff)[data[:, c-1]]
    data[~data[:, c-1], c] = (r < p_switchon)[~data[:, c-1]]

# create some labels
labels = ["label_{0:d}".format(i) for i in range(20)]

# create a color map with random colors
colmap = matplotlib.colors.ListedColormap(np.random.random((21, 3)))
colmap.colors[0] = [0, 0, 0]

# create some colorful data:
data_color = (1 + np.arange(data.shape[0]))[:, None] * data

# this is the real plotting part
fig = plt.figure()
ax = fig.add_subplot(111)
ax.imshow(data_color, aspect='auto', cmap=colmap, interpolation='nearest')
ax.set_yticks(np.arange(len(labels)))
ax.set_yticklabels(labels)
```

(Note the use of ~ for boolean inversion; the unary minus on boolean arrays is not allowed in modern numpy.)

Of course, you will want to use something less strange as the coloring scheme, but that is really up to your artistic views. Here the trick is that all True elements on row n have value n+1, and all False elements are 0 in data_color. This makes it possible to create a color map. Naturally, if you want a cyclic color map with two or three colors, just use the modulus of data_color in imshow by, e.g., data_color % 3.
Can not clone cStringIO object properly

I have the following code to get an image from a url:

```python
im = cStringIO.StringIO(image_buffer)
```

Now I have to do different operations on the original image, such as:

```python
Image.open(im).crop(box=(1, 1, 1, 1))
```

but this will edit im itself, so I can't reuse the Image.open command. Therefore I would like to clone the im object. I have tried that by using the following:

```python
copy.deepcopy(im)
copy.copy(im)
im[:]
```

But none of those seem to work; the copy ones even throw the following exception:

```
object.__new__(cStringIO.StringI) is not safe, use cStringIO.StringI.__new__()
```

I have tried to search for this error, but it's not clear to me why it refuses to clone the im object. This is written in python (using the django framework). I am using the PIL library for image manipulations.
You can create a copy of a cStringIO.StringIO file object by simply getting out the string value and creating a new object, using the StringIO.getvalue() method:

```python
new_file = cStringIO.StringIO(original.getvalue())
```

That said, store a reference to the image object instead, and apply operations to that:

```python
image = Image.open(im)
image.crop(box=(1, 1, 1, 1))
```

This then allows you to also save the image to a new file (in-memory or otherwise) after you have applied all the transformations.

You can more easily create additional copies of an image object with the Image.copy() method:

```python
image = Image.open(im)
image_copy = image.copy()
image.crop(box=(1, 1, 1, 1))
```

Here image_copy remains uncropped.
Add wT*x+b after CNN

Python. I have a problem. I have to take the output of the last conv layer of EfficientNet (shape = (7, 7, 1280); I call this x) and then calculate H = wᵀx + b. My w is [49, 49]. After that I have to apply softmax on H and then multiply the result elementwise with x. H and x have the same shape, [49, 1280]. I can't find anything that helps me translate this into python code. Can you help me? Thanks.
I see you use Tensorflow only (I mean, without Keras). If you want to multiply H and X elementwise, and H and X are tensors with the same shape, you can use the elementwise multiplication functionality available in Tensorflow. If they are not tensors, you can convert the variables to tensors first. Check the official documentation for all the details.
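Here is a hedged sketch of the whole pipeline in TensorFlow; the reshape from (7, 7, 1280) to (49, 1280) and the softmax axis are assumptions based on the shapes given in the question:

```python
import tensorflow as tf

feat = tf.random.normal([7, 7, 1280])        # stand-in for the EfficientNet conv output
x = tf.reshape(feat, [49, 1280])             # flatten the spatial dims: 7 * 7 = 49
w = tf.Variable(tf.random.normal([49, 49]))
b = tf.Variable(tf.zeros([49, 1]))

h = tf.matmul(w, x, transpose_a=True) + b    # H = w^T x + b -> shape [49, 1280]
attn = tf.nn.softmax(h, axis=0)              # softmax over the 49 positions (assumption)
out = attn * x                               # elementwise multiply -> shape [49, 1280]
```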
Removing lines from a text file using python and regular expressions

I have some text files, and I want to remove all lines that begin with the asterisk ("*").

Made-up example:

```
words
*remove me
words
words
*remove me
```

My current code fails. It follows below:

```python
import re

program = open(program_path, "r")
program_contents = program.readlines()
program.close()

new_contents = []
pattern = r"[^*.]"

for line in program_contents:
    match = re.findall(pattern, line, re.DOTALL)
    if match.group(0):
        new_contents.append(re.sub(pattern, "", line, re.DOTALL))
    else:
        new_contents.append(line)

print new_contents
```

This produces ['', '', '', '', '', '', '', '', '', '', '*', ''], which is no good. I'm very much a python novice, but I'm eager to learn, and I'll eventually bundle this into a function (right now I'm just trying to figure it out in an ipython notebook). Thanks for the help!
You don't want to use a [^...] negative character class; you are matching all characters except for the * or . characters now.

* is a meta character; you want to escape it to \*. The . 'match any character' syntax needs a multiplier to match more than one. Don't use re.DOTALL here; you are operating on a line-by-line basis but don't want to erase the newline.

There is no need to test first; if there is nothing to replace, the original line is returned.

```python
pattern = r"^\*.*"

for line in program_contents:
    new_contents.append(re.sub(pattern, "", line))
```

Demo:

```python
>>> import re
>>> program_contents = '''\
... words
... *remove me
... words
... words
... *remove me
... '''.splitlines(True)
>>> new_contents = []
>>> pattern = r"^\*.*"
>>> for line in program_contents:
...     new_contents.append(re.sub(pattern, "", line))
...
>>> new_contents
['words\n', '\n', 'words\n', 'words\n', '\n']
```
Matplotlib -- mplot3d: triplot projected on z=0 axis in 3d plot?

I'm trying to plot a function in two variables, piecewise defined on a set of known triangles, more or less like so:

```python
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import random

def f(x, y):
    if x + y < 1:
        return 0
    else:
        return 1

x = [0, 1, 1, 0]
y = [0, 0, 1, 1]
tris = [[0, 1, 3], [1, 2, 3]]

fig = plt.figure()
ax = fig.add_subplot(121)
ax.triplot(x, y, tris)

xs = [random.random() for _ in range(100)]
ys = [random.random() for _ in range(100)]
zs = [f(xs[i], ys[i]) for i in range(100)]

ax2 = fig.add_subplot(122, projection='3d')
ax2.scatter(xs, ys, zs)

plt.show()
```

Ideally, I'd combine both subplots into one by projecting the triangles onto the plane z=0. I know this is possible with other variants of 2d plots, but not with triplot. Is it possible to get what I want?

PS. This is a heavily simplified version of the actual implementation I am using right now, therefore the random scattering might seem a bit weird.
I'm not an expert, but this was an interesting problem. After doing some poking around, I think I got something close. I made the Triangulation object manually and then passed it and a z list of zeros into plot_trisurf, and it put the triangles in the right place on z=0.

```python
import matplotlib.pyplot as plt
import matplotlib.tri as tri
from mpl_toolkits.mplot3d import Axes3D
import random

def f(x, y):
    if x + y < 1:
        return 0
    else:
        return 1

x = [0, 1, 1, 0]
y = [0, 0, 1, 1]
tris = [[0, 1, 3], [1, 2, 3]]
z = [0] * 4

triv = tri.Triangulation(x, y, tris)

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
trip = ax.plot_trisurf(triv, z)
trip.set_facecolor('white')

xs = [random.random() for _ in range(100)]
ys = [random.random() for _ in range(100)]
zs = [f(xs[i], ys[i]) for i in range(100)]
ax.scatter(xs, ys, zs)

plt.show()
```

ETA: Added a call to set_facecolor on the Poly3DCollection to make it white rather than follow a colormap. Can be futzed with for the desired effect...
Calling a method from an existing instance

My understanding of object-oriented programming is a little shaky, so if you have any links that would help explain the concepts, it would be great to see them!

I've shortened the code somewhat. The basic principle is that I have a game that starts with an instance of the main Controller class. When the game is opened, the Popup class is opened. The events happen as follows:

1. The start button on the popup is clicked
2. The method start_click() runs
3. Which calls the method start_game() in the Controller instance
4. Which in turn changes the game state to 'True' in the original Controller instance

My problem is with step 3. The error message I get is:

```
TypeError: unbound method start_game() must be called with Controller instance as first argument (got nothing instead)
```

I guess there needs to be some reference to the Controller class in the StartPopUp class. But I don't quite understand how to create that reference?

```python
import kivy
kivy.require('1.8.0')

from kivy.app import App
from kivy.uix.widget import Widget
from kivy.clock import Clock
from kivy.properties import BooleanProperty, NumericProperty, ObjectProperty
from kivy.uix.popup import Popup
from kivy.lang import Builder

Builder.load_string('''
<StartPopUp>
    size_hint: .2, .2
    auto_dismiss: False
    title: 'Welcome'
    Button:
        text: 'Play'
        on_press: root.start_click()
        on_press: root.dismiss()
''')

class StartPopUp(Popup):
    def __init__(self, **kw):
        super(StartPopUp, self).__init__(**kw)

    def start_click(self):
        Controller.start_game()

class Controller(Widget):
    playing_label = BooleanProperty(False)  # initial phase of game is off

    def __init__(self, **kw):
        super(Controller, self).__init__(**kw)

    def start_popup(self, dt):
        sp = StartPopUp()
        sp.open()

    def start_game(self):
        self.playing_label = True
        print self.playing_label

class MoleHuntApp(App):
    def build(self):
        game = Controller()
        Clock.schedule_once(game.start_popup, 1)
        return game

if __name__ == '__main__':
    MoleHuntApp().run()
```

Thanks in advance!
You can pass the instance like this:

```python
class StartPopUp(Popup):
    def __init__(self, controller, **kw):
        super(StartPopUp, self).__init__(**kw)
        self.controller = controller

    def start_click(self):
        self.controller.start_game()
```

and in Controller:

```python
def start_popup(self, dt):
    sp = StartPopUp(self)
    sp.open()
```
Compare 2 files in Python

I am trying to compare two files, A and C, in Python, and for some reason the double for loop doesn't seem to work properly:

```python
with open(locationA + filenameC, 'r') as fileC, open(locationA + filenameA, 'r') as fileA:
    for lineC in fileC:
        fieldC = lineC.split('#')
        for lineA in fileA:
            fieldA = lineA.split('#')
            print 'UserID Clicks' + fieldC[0]
            print 'UserID Activities' + fieldA[0]
            if (fieldC[0] == fieldA[0]) and (fieldC[2] == fieldA[2]):
                print 'OK'
```

Here, only the first line of C seems to be compared; for the other lines, the "A loop" seems to be ignored. Can anyone help me with this?
Your problem is that once you iterate over fileA once, you need to reset the file pointer to the beginning of the file before iterating again. So what you might do is create two lists from both files and iterate over them as many times as you want. For example:

```python
fileC_list = fileC.readlines()
fileA_list = fileA.readlines()

for lineC in fileC_list:
    # do something
    for lineA in fileA_list:
        # do something
```
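Alternatively, you can keep the original nested loops and just rewind fileA with seek(0) before each inner pass; a sketch using the names from the question:

```python
with open(locationA + filenameC, 'r') as fileC, open(locationA + filenameA, 'r') as fileA:
    for lineC in fileC:
        fieldC = lineC.split('#')
        fileA.seek(0)  # reset the file pointer so the inner loop runs again
        for lineA in fileA:
            fieldA = lineA.split('#')
            if (fieldC[0] == fieldA[0]) and (fieldC[2] == fieldA[2]):
                print 'OK'
```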
Connection error Jupyter and Elasticsearch (Docker)

I am trying to make a connection from Jupyter Notebook to Elasticsearch, both in Docker containers but connected to the same network (bridge). Here is my code:

```python
elastic_client = Elasticsearch(hosts=["http://localhost:9200/"], http_auth=('generator', 'generator'))
elastic_index = 'data_generator_nvdi_geoloc'
df_out = pandas.read_csv(local_destination_path)
```

I get this error:

```
ConnectionError: Connection error caused by: ConnectionError(Connection error caused by: NewConnectionError(<urllib3.connection.HTTPConnection object at 0x7f7ada675710>: Failed to establish a new connection: [Errno 111] Connection refused))
```

I think it may be a problem of the containers, but both are connected to the same network and I don't know how to solve it.
The issue in your case is using localhost:9200 as the connection string, because ES is not "localhost" inside your Jupyter container: each container gets its own localhost reference. You need to adjust the connection string to Docker's DNS record, i.e. the service/container name you set up inside your docker-compose.yml file. Based on your comment, you should use elasticsearch:9200 as your connection string.
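So, assuming the Elasticsearch service is named elasticsearch in docker-compose.yml, the connection from the question becomes:

```python
from elasticsearch import Elasticsearch

# "elasticsearch" is the docker-compose service name, resolved by Docker's internal DNS
elastic_client = Elasticsearch(hosts=["http://elasticsearch:9200/"],
                               http_auth=('generator', 'generator'))
```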
Return entire row and append to value in a dataframe

I am trying to write a function that searches a data frame row by row for values in a column, then appends the entire row to the right side of the value if that value is found in any row.

Dataframe 1:

```
Col1  Col2  Col3  Lookup
400   60    80    90
50    90    68    80
```

What I want is the following dataframe:

Dataframe 2:

```
Lookup  Col1  Col2  Col3
90      50    90    68
80      400   60    80
```

Any help is much appreciated.
You can try this out:

```python
df1 = df.iloc[:, 0:-1]
new = pd.DataFrame()
for val in df['Lookup']:
    s = df1[df1.eq(val).any(1)]
    new = new.append(s, ignore_index=True)
new.insert(0, 'Lookup', df['Lookup'])
print(new)

#    Lookup  Col1  Col2  Col3
# 0      90    50    90    68
# 1      80   400    60    80
```
To sum up values of same items in a list of tuples while they are strings

If I have a list of tuples like this:

```python
my_list = [('books', '$5'), ('books', '$10'), ('ink', '$20'), ('paper', '$15'), ('paper', '$20'), ('paper', '$15')]
```

how can I turn the list into this:

```python
[('books', '$15'), ('ink', '$20'), ('paper', '$50')]
```

i.e. add up the expenses of the same item, while both elements of the tuples are strings? I have a problem with the price items being strings. Any hint would be greatly appreciated. Thanks a lot!

I am getting the first list in this way:

```python
my_list = []
for line in data:
    item, price = line.strip('\n').split(',')
    cost = ["{:s}".format(item.strip()), "${:.2f}".format(float(price))]
    my_list.append(tuple(cost))
```

Now my_list should look like given above.
You can use defaultdict to do this:

```python
>>> from collections import defaultdict
>>> my_list = [('books', '$5'), ('books', '$10'), ('ink', '$20'), ('paper', '$15'), ('paper', '$20'), ('paper', '$15')]
>>> res = defaultdict(list)
>>> for item, price in my_list:
...     res[item].append(int(price.strip('$')))
...
>>> total = [(k, "${}".format(sum(v))) for k, v in res.items()]
>>> total
[('ink', '$20'), ('books', '$15'), ('paper', '$50')]
```
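Since only the sums are needed, defaultdict(int) avoids storing the intermediate lists:

```python
from collections import defaultdict

res = defaultdict(int)
for item, price in my_list:
    res[item] += int(price.strip('$'))  # '$5' -> 5

total = [(k, "${}".format(v)) for k, v in res.items()]
```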
How can I resolve "recursion depth exceeded" (Goose-extractor)?

I have a problem with goose-extractor. This is my code:

```python
for resultado in soup.find_all('a', href=True, text=re.compile(llave)):
    url = resultado['href']
    article = g.extract(url=url)
    print article.title
```

and take a look at my problem:

```
RuntimeError: maximum recursion depth exceeded
```

Any suggestions? Am I a lousy programmer, or are there hidden errors not visible in python?
As mentioned in the comments, you can increase the recursion limit with sys.setrecursionlimit():

```python
import sys
sys.setrecursionlimit(10**5)
```

You can check what the default limit is with sys.getrecursionlimit().

Of course, this won't fix whatever's causing the recursion (there's no way to know what's wrong without more details), and might crash your computer if you don't fix that.
How do I enable Pylint in VSCode?

I can't get pylint errors to show up in VSCode. I installed pylint globally (sudo apt install pylint), I created a venv and installed it there with pip, I selected pylint as linter in VSCode, enabled it, ran it, and it doesn't show any errors in my file. If I check from the command line, it shows many errors in my file.

This was working earlier, but not now on VSCode versions 1.46.1 and 1.45.1 installed using snap. Same results with the Microsoft and the Jedi python language server.

I found the pylint command in the developer console:

```
~/Documents/work/python/.venv/bin/python ~/.vscode/extensions/ms-python.python-2020.6.89148/pythonFiles/pyvsc-run-isolated.py pylint --disable=all --enable=F,unreachable,duplicate-key,unnecessary-semicolon,global-variable-not-assigned,unused-variable,unused-wildcard-import,binary-op-exception,bad-format-string,anomalous-backslash-in-string,bad-open-mode,E0001,E0011,E0012,E0100,E0101,E0102,E0103,E0104,E0105,E0107,E0108,E0110,E0111,E0112,E0113,E0114,E0115,E0116,E0117,E0118,E0202,E0203,E0211,E0213,E0236,E0237,E0238,E0239,E0240,E0241,E0301,E0302,E0303,E0401,E0402,E0601,E0602,E0603,E0604,E0611,E0632,E0633,E0701,E0702,E0703,E0704,E0710,E0711,E0712,E1003,E1101,E1102,E1111,E1120,E1121,E1123,E1124,E1125,E1126,E1127,E1128,E1129,E1130,E1131,E1132,E1133,E1134,E1135,E1136,E1137,E1138,E1139,E1200,E1201,E1205,E1206,E1300,E1301,E1302,E1303,E1304,E1305,E1306,E1310,E1700,E1701 --msg-template='{line},{column},{category},{symbol}:{msg}' --reports=n --output-format=text ~/Documents/work/python/micro.py
```

So pylint is indeed executed! If I run it like this from the command line, the output is:

```
Your code has been rated at 10.00/10 (previous run: 10.00/10, +0.00)
```

But if I execute pylint micro.py I get:

```
Your code has been rated at -2.50/10 (previous run: 10.00/10, -12.50)
```

Why is VSCode using that command line? I am testing now without a .pylintrc, but even when I had it, VSCode showed no errors, only the command line! However, I just tried it again, added a .pylintrc, and now the errors do show up in the editor for some reason! But this is only with the Jedi server; when trying with the Microsoft server, linting cannot be enabled with its command, nothing happens and it stays off.

My .vscode/settings.json:

```json
{
    "python.linting.pylintEnabled": true,
    "python.linting.enabled": true,
    "python.linting.pylintArgs": [
        "--rcfile",
        "${workspaceFolder}/backend/.pylintrc"
    ]
}
```
Simplest way using the UI:

1. Press "Ctrl + Shift + P" to get the Command Palette
2. Type "Lint"
3. Select "Python: Enable/Disable Linting", click on "Enable"
4. Repeat steps 1 & 2, now select "Python: Select Linter", then select pylint from the options

The above steps will add the corresponding lines in "settings.json" under the .vscode dir.
pyparsing nestedExpr and double closing characters

I am trying to parse nested column type definitions such as:

```
1   string
2   struct<col_1:string,col_2:int>
3   row(col_1 string,array(col_2 string),col_3 boolean)
4   array<struct<col_1:string,col_2:int>,col_3:boolean>
5   array<struct<col_1:string,col2:int>>
```

Using nestedExpr works as expected for cases 1-4, but throws a parse error on case 5. Adding a space between double closing brackets like "> >" seems to work, and might be explained by this quote from the author:

> By default, nestedExpr will look for space-delimited words of printables

https://sourceforge.net/p/pyparsing/bugs/107/

I'm mostly looking for alternatives to pre- and post-processing the input string:

```python
type_str = type_str.replace(">", "> ")
# parse string here
type_str = type_str.replace("> ", ">")
```

I've tried using infix_notation but I haven't been able to figure out how to use it in this situation. I'm probably just using this the wrong way...

Code snippet:

```python
array_keyword = pp.Keyword('array')
row_keyword = pp.Keyword('row')
struct_keyword = pp.Keyword('struct')

nest_open = pp.Word('<([')
nest_close = pp.Word('>)]')

col_name = pp.Word(pp.alphanums + '_')
col_type = pp.Forward()
col_type_delimiter = pp.Word(':') | pp.White(' ')
column = col_name('name') + col_type_delimiter + col_type('type')
col_list = pp.delimitedList(pp.Group(column))

struct_type = pp.nestedExpr(
    opener=struct_keyword + nest_open, closer=nest_close,
    content=col_list | col_type, ignoreExpr=None
)

row_type = pp.locatedExpr(pp.nestedExpr(
    opener=row_keyword + nest_open, closer=nest_close,
    content=col_list | col_type, ignoreExpr=None
))

array_type = pp.nestedExpr(
    opener=array_keyword + nest_open, closer=nest_close,
    content=col_type, ignoreExpr=None
)

col_type <<= struct_type('children') | array_type('children') | row_type('children') | scalar_type('type')
```
nestedExpr and infixNotation are not really appropriate for this project. nestedExpr is generally a short-cut expression for stuff you don't really want to go into details parsing; you just want to detect and step over some chunk of text that happens to have some nesting in opening and closing punctuation. infixNotation is intended for parsing expressions with unary and binary operators, usually some kind of arithmetic. You might be able to treat the punctuation in your grammar as operators, but it is a stretch, and definitely doing things the hard way.

For your project, you will really need to define the different elements, and it will be a recursive grammar (since the array and struct types will themselves be defined in terms of other types, which could also be arrays or structs).

I took a stab at a BNF, for a subset of your grammar using scalar types int, float, boolean, and string, and compound types array and struct, with just the '<' and '>' nesting punctuation. An array will take a single type argument, to define the type of the elements in the array. A struct will take one or more struct fields, where each field is an identifier:type pair.

```
scalar_type    ::= 'int' | 'float' | 'string' | 'boolean'
array_type     ::= 'array' '<' type_defn '>'
struct_type    ::= 'struct' '<' struct_element (',' struct_element)... '>'
struct_element ::= identifier ':' type_defn
type_defn      ::= scalar_type | array_type | struct_type
```

(If you later want to add a row definition also, think about what the row is supposed to look like and how its elements would be defined, and then add it to this BNF.)

You look pretty comfortable with the basics of pyparsing, so I'll just start you off with some intro pieces, and then let you fill in the rest.

```python
# define punctuation
LT, GT, COLON = map(pp.Suppress, "<>:")
ARRAY = pp.Keyword('array')
STRUCT = pp.Keyword('struct')

# create a Forward that will be used in other type expressions
type_defn = pp.Forward()

# here is the array type, you can fill in the other types following this model
# and the definitions in the BNF
array_type = pp.Group(ARRAY + LT + type_defn + GT)

...

# then finally define type_defn in terms of the other type expressions
type_defn <<= scalar_type | array_type | struct_type
```

Once you have that finished, try it out with some tests:

```python
type_defn.runTests("""\
    string
    struct<col_1:string,col_2:int>
    array<struct<col_1:string,col2:int>>
    """, fullDump=False)
```

And you should get something like:

```
string
['string']

struct<col_1:string,col_2:int>
['struct', [['col_1', 'string'], ['col_2', 'int']]]

array<struct<col_1:string,col2:int>>
['array', ['struct', [['col_1', 'string'], ['col2', 'int']]]]
```

Once you have that, you can play around with extending it to other types, such as your row type, maybe unions, or arrays that take multiple types (if that was your intention in your posted example). Always start by updating the BNF - then the changes you'll need to make in the code will generally follow.
Python Loop, .remove, and List Exercise

So we have an exercise that we are giving to the children, and we need to use a list (or array), the .remove() method, and a loop. I have the following code, but it isn't working:

```python
usernames = [
    'Steph', 'JHG', 'Greg', 'Matt', 'Rodney', 'David',
    'Chris', 'Sally', 'Gemma', 'Pam', 'Daniel', 'JHG',
    'JHG', 'Ishmael', 'Sam', 'JHG', 'Jacob']

for i in range(0, 3):
    for name in usernames:
        usernames.remove('JHG')

print(usernames)
```

Checker:

```python
print('Success! all the JHG values have been deleted from the list, onto the next!')
```
Or as a while loop:

```python
while 'JHG' in usernames:
    usernames.remove('JHG')
```
How can I delete multiple files using a Python script?

I am playing around with some python scripts and I ran into a problem with the script I'm writing. It's supposed to find all the files in a folder that meet the criteria and then delete them. However, it finds the files, but at the time of deleting a file, it says that the file is not found.

This is my code:

```python
import os

for filename in os.listdir('C:\\New folder\\'):
    if filename.endswith(".rdp"):
        os.unlink(filename)
```

And this is the error I get after running it:

```
FileNotFoundError: [WinError 2] The system cannot find the file specified:
```

Can somebody assist with this?
os.unlink takes the path to the file, not only its filename. Try prepending your filename with the dirname, like this:

```python
import os

dirname = 'C:\\New folder\\'
for filename in os.listdir(dirname):
    if filename.endswith(".rdp"):
        # Add your "dirname" to the file path
        os.unlink(dirname + filename)
```
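A slightly more robust variant builds the path with os.path.join instead of string concatenation, so it works even if dirname lacks a trailing separator:

```python
import os

dirname = 'C:\\New folder'
for filename in os.listdir(dirname):
    if filename.endswith(".rdp"):
        os.unlink(os.path.join(dirname, filename))
```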
How to scrape wikipedia infobox and store it into a csv file

I have already done the scraping of wikipedia's infobox, but I don't know how to store that data in a csv file. Please help me out.

```python
from bs4 import BeautifulSoup as bs
from urllib.request import urlopen

def infobox(query):
    query = query
    url = 'https://en.wikipedia.org/wiki/' + query
    raw = urlopen(url)
    soup = bs(raw)
    table = soup.find('table', {'class': 'infobox vcard'})
    for tr in table.find_all('tr'):
        print(tr.text)

infobox('Infosys')
```
You have to collect the required data and write it to a csv file; you can use the csv module. See the example below:

```python
from bs4 import BeautifulSoup as bs
from urllib import urlopen
import csv

def infobox(query):
    query = query
    content_list = []
    url = 'https://en.wikipedia.org/wiki/' + query
    raw = urlopen(url)
    soup = bs(raw)
    table = soup.find('table', {'class': 'infobox vcard'})
    for tr in table.find_all('tr'):
        if len(tr.contents) > 1:
            content_list.append([tr.contents[0].text.encode('utf-8'), tr.contents[1].text.encode('utf-8')])
        elif tr.text:
            content_list.append([tr.text.encode('utf-8')])
    write_csv_file(content_list)

def write_csv_file(content_list):
    with open(r'd:\Test.csv', mode='wb') as csv_file:
        writer = csv.writer(csv_file, delimiter=',')
        writer.writerows(content_list)

infobox('Infosys')
```
Append input to existing list

I'm running through a beginner's guide to Python and I'm currently working on lists. I've created this sample code, but I can't seem to add the user input dynamically to the list that I've created. If you enter an item from the list, you get a success message, but if the item isn't on the list, I try to append it to the current list and then I return an error message that it's not in the inventory. The last line just prints out the list, in hopes that the new addition is there. I've tried the append method and even tried extending into another list. Can someone spot where I'm going wrong?

```python
topping_list_one = ['pepperoni', 'sausage', 'cheese', 'peppers']
error_message = "Sorry we don't have "
success_message = "Great, we have "

first_topping = raw_input('Please give me a topping: ')

if (first_topping in topping_list_one):
    print '{}!'.format(success_message + first_topping)
elif (first_topping in topping_list_one):
    topping_list_one.append('first_topping')
else:
    print '{}'.format(error_message + first_topping)

print 'Heres a list of the items now in our inventory: {}'.format(topping_list_one)
```
I think you mean to say:

```python
elif (first_topping not in topping_list_one):
    topping_list_one.append(first_topping)
```

i.e. "not in" instead of "in", and remove the quotes from 'first_topping'.
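Since a topping is either in the list or not, the elif and else branches can also be collapsed into one; a sketch keeping the question's Python 2 style (note the error message now prints after the new topping is appended, matching the behavior described in the question):

```python
if first_topping in topping_list_one:
    print '{}!'.format(success_message + first_topping)
else:
    topping_list_one.append(first_topping)
    print '{}'.format(error_message + first_topping)
```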
How to change some characters at the same time in VSCode? I have VSCode and Anaconda. There are 50+ ipynb tutorial files that I studied. I work with cells. These files have some Turkish characters that I want to change. These characters appear both in uppercase and lowercase:

Ç --> C
Ğ --> G
Ö --> O
Ş --> S
Ü --> U
İ --> I
ç --> c
ğ --> g
ö --> o
ş --> s
ü --> u
ı --> i

VSCode has a replace function. How can I change all these characters at the same time, for one ipynb file or for all ipynb files? Thanks very much.
Use the extension Replace Rules. Add the following to your settings:

"replacerules.rules": {
  "Replace Turkih": {
    "find": ["Ç", "Ğ", "Ö", "Ş", "Ü", "İ", "ç", "ğ", "ö", "ş", "ü", "ı"],
    "replace": ["C", "G", "O", "S", "U", "I", "c", "g", "o", "s", "u", "i"]
  }
}

Then open the file, execute the command Replace Rule: Run Rule..., and select the Replace Turkih rule.

With the extension Command on All Files you can apply a command to a selection of files in the workspace. We also need the extension multi-command, because we have to add arguments to the command. Add the following to your settings:

"multiCommand.commands": [
  {
    "command": "multiCommand.replaceTurkih",
    "sequence": [
      {
        "command": "replacerules.runRule",
        "args": { "ruleName": "Replace Turkih" }
      }
    ]
  }
],
"commandOnAllFiles.commands": {
  "Replace Turkih in Notebooks": {
    "command": "multiCommand.replaceTurkih",
    "includeFileExtensions": [".ipynb"]
  }
}
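If you'd rather script it in Python directly (Anaconda is already installed), str.translate can apply the same character mapping to every notebook. A minimal sketch, assuming the notebooks live under the current directory and are UTF-8 encoded:

import pathlib

# one-to-one mapping of the twelve Turkish characters to their ASCII forms
table = str.maketrans('ÇĞÖŞÜİçğöşüı', 'CGOSUIcgosui')

for path in pathlib.Path('.').glob('**/*.ipynb'):
    text = path.read_text(encoding='utf-8')
    path.write_text(text.translate(table), encoding='utf-8')

Note that this rewrites the raw notebook JSON, so it replaces these characters inside code cells and outputs as well.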
Is it possible to get the xpath (source) of an element using selenium in python? If you take a look at this site, you will see the title/text "Example Domain". Is it possible to get its xpath which is /html/body/div/h1 using selenium? Is there any other possibilities? I mean I want to get xpath itself and not its content!I know we can get the page_soruce using driver.page_source but this not what I'm looking for. I simply expect an output as /html/body/div/h1.I tried this:test = driver.page_sourceps = str(test)root = etree.fromstring(ps)tree = etree.ElementTree(root)find_text = etree.XPath("//p[text()='my_target_text']") # in our case Example Domainfor target in find_text(root): print(tree.getpath(target))It returns:lxml.etree.XMLSyntaxError: Opening and ending tag mismatch
What you need (based on how you worded your question: I honestly doubt this is what you really need, and I'm sure that if you would state your end goal, someone would put you on the right path) is this:https://gist.github.com/ergoithz/6cf043e3fdedd1b94fcfI figured this would actually represent a full answer to your question as asked, so posting it as a response.
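For reference, the core idea of that gist is to build the absolute XPath in JavaScript and run it through execute_script. A condensed sketch of that approach (the JavaScript helper below is my own simplification of the gist, not Selenium API):

js_get_xpath = """
function absoluteXPath(element) {
    var comps = [];
    for (; element && element.nodeType === Node.ELEMENT_NODE; element = element.parentNode) {
        var position = 1;
        // count preceding siblings with the same tag name for the [n] index
        for (var sib = element.previousSibling; sib; sib = sib.previousSibling) {
            if (sib.nodeName === element.nodeName) position++;
        }
        comps.unshift(element.nodeName.toLowerCase() +
                      (position > 1 ? '[' + position + ']' : ''));
    }
    return '/' + comps.join('/');
}
return absoluteXPath(arguments[0]);
"""

driver.get('https://example.com')
element = driver.find_element_by_tag_name('h1')
print(driver.execute_script(js_get_xpath, element))  # /html/body/div/h1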
python PIL image.getdata() returns more data than is expected. The following:

image = Image.open(name, 'r')
data = np.array(image.getdata())
data.reshape(image.size)

returns:

Traceback (most recent call last):
  File "/home/usr/colorviewer/main.py", line 207, in <module>
    print(getPalette())
  File "/home/usr/colorviewer/main.py", line 152, in getPalette
    data.reshape(image.size)
ValueError: cannot reshape array of size 921600 into shape (640,480)

Why does it say the array size is 921600 instead of 307200 as the size would suggest, and how might the image data be reshaped into its normal resolution?
Every pixel uses three values R,G,B, so you have 640*480*3, which gives 921600 values. So you need to reshape into (height, width, 3). Note that image.size is (width, height), so swap the two values rather than unpacking them directly:

data.reshape(image.size[1], image.size[0], 3)

If the image can be transparent then it may have 640*480*4 values. It is safer to reshape with -1 for the last axis, so it automatically uses 3 or 4:

data.reshape(image.size[1], image.size[0], -1)

Or you can skip .getdata() entirely and get an already-shaped array:

data = np.array(image)
How to compress a 300GB file using python? I am trying to compress a virtual machine file with size 300GB. Every single time the python script is killed because the actual memory usage of the gzip module exceeds 30GB (virtual memory). Is there any way to achieve large file (300GB to 64TB) compression using python?

def gzipFile(fileName):
    startTime = time.time()
    with open(fileName, 'rb') as fileHandle:
        compressedFileName = "%s-1.gz" % fileName
        with gzip.open(compressedFileName, 'wb') as compressedFH:
            compressedFH.writelines(fileHandle)
    finalTime = time.time() - startTime
    print("gzipFile=%s fileName=%s" % (finalTime, compressFileName))
with gzip.open(compressedFileName, 'wb') as compressedFH:
    compressedFH.writelines(fileHandle)

writes the file fileHandle line by line, i.e. it splits it into chunks separated by the \n character. While it is quite probable that this character occurs from time to time in a binary file as well, this is not guaranteed. It might be better to do

with gzip.open(compressedFileName, 'wb') as compressedFH:
    while True:
        chunk = fileHandle.read(65536)
        if not chunk:
            break  # end of file reached
        compressedFH.write(chunk)

or, as tqzf writes in a comment (note that the destination must be the open file object, not the file name, and that this needs import shutil):

with gzip.open(compressedFileName, 'wb') as compressedFH:
    shutil.copyfileobj(fileHandle, compressedFH)
Python: Find all pairwise distances between points and return points along with distance I have a list containing list of points with a name and coordinates in 3D. Something like this with a much larger length of the list:group=[[gr1, 5, 8, 9], [gr2, 7, 4, 5], [gr3, 3, 8, 1], [gr4, 3, 4, 8]]I want to calculate all possible pairwise distances among the coordinates and return the distance along with the corresponding points. Something like this:distances=[[gr1, gr2, 6.],[gr1, gr3, 8.24621125], [gr1, gr4, 4.58257569], [gr2, gr3, 6.92820323], [gr2, gr4, 5.], [gr3, gr4, 8.06225775]]I tried using scipy.spatial.distance.pdist but that only returns me thisarray([ 6. , 8.24621125, 4.58257569, 6.92820323, 5. , 8.06225775]) How do I extract the information along with the groups that were considered for each distance value?I am a beginner and I am using python 3. Thanks
Maybe you could try two nested for-loops to combine every element in "group" except the last one with every other element to the right:

group = [["gr1", 5, 8, 9], ["gr2", 7, 4, 5], ["gr3", 3, 8, 1], ["gr4", 3, 4, 8]]

distances = []
for i, g in enumerate(group[:-1]):
    for h in group[i+1:]:
        d = ((g[1]-h[1])**2 + (g[2]-h[2])**2 + (g[3]-h[3])**2)**0.5
        distances.append([g[0], h[0], d])

print(*distances, sep="\n")

Result:

['gr1', 'gr2', 6.0]
['gr1', 'gr3', 8.246211251235321]
['gr1', 'gr4', 4.58257569495584]
['gr2', 'gr3', 6.928203230275509]
['gr2', 'gr4', 5.0]
['gr3', 'gr4', 8.06225774829855]
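Since you already had scipy.spatial.distance.pdist working, another option (a sketch, not part of the original answer) is to keep pdist and pair its output with the point names: pdist's condensed output follows exactly the pair order produced by itertools.combinations.

from itertools import combinations
from scipy.spatial.distance import pdist

group = [["gr1", 5, 8, 9], ["gr2", 7, 4, 5], ["gr3", 3, 8, 1], ["gr4", 3, 4, 8]]

names = [g[0] for g in group]
coords = [g[1:] for g in group]

# zip the (name, name) pairs with the distances, in matching order
distances = [[a, b, d] for (a, b), d in zip(combinations(names, 2), pdist(coords))]
print(*distances, sep="\n")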
completely connected subgraphs from a larger graph in networkx I have tried not to repost here, but I think my request is very simple and I am just inexperienced with network graphs. When using the networkx module in python, I would like to recover, from a connected graph, the subgraphs where all nodes are connected to each other (where the number of nodes is greater than 2). Is there a simple way to do this?Here is my example:A simple graph with seven nodes. Nodes 1,2,3 are shared connections, nodes 1,2,4 all share connections, and nodes 5,6,7 all share connections. import networkx as nxG=nx.Graph() #Make the graphG.add_nodes_from([1,2,3,4,5,6,7]) #Add nodes, although redundant because of the line belowG.add_edges_from([(1,2),(1,3),(2,3),(1,4),(2,4),(1,5),(5,6),(5,7),(6,7)]) # Adding the edgesMy desired output would be: ([1,2,3],[1,2,4],[5,6,7])I can think of slightly laborious methods for writing this but was wondering if there was a simple inbuilt function for it.
It sounds like you want to discover the cliques in your graph. For this you could use nx.clique.find_cliques():>>> list(nx.clique.find_cliques(G))[[1, 2, 3], [1, 2, 4], [1, 5], [6, 5, 7]]nx.clique.find_cliques() returns a generator which will yield all cliques in the graph. You can filter out the cliques with fewer than three nodes using list comprehension:>>> [g for g in nx.clique.find_cliques(G) if len(g) > 2][[1, 2, 3], [1, 2, 4], [6, 5, 7]]
create a multidimensional random matrix in spark With the python API of Spark I am able to quickly create an RDD vector with random normal number and perform a calculation with the following code: from pyspark.mllib.random import RandomRDDsRandomRDDs.uniformRDD(sc, 1000000L, 10).sum()where sc is an available SparkContext. The upside of this approach is that it is very performant, the downside is that I am not able to create a random matrix this way. You could create use numpy again, but this isn't performant.%%timesc.parallelize(np.random.rand(1000000,2)).sum()array([ 499967.0714618 , 499676.50123474])CPU times: user 52.7 ms, sys: 31.1 ms, total: 83.9 msWall time: 669 msFor comparison with Spark: %%timeRandomRDDs.uniformRDD(sc, 2000000, 10).sum()999805.091403467CPU times: user 4.54 ms, sys: 1.89 ms, total: 6.43 msWall time: 183 msIs there a performant way to create random matrices/RDD's that contain more than one dimension with the Python Spark API?
Spark evolved a bit since this question was asked, and Spark will probably have better support still in the future. In the meantime you can be a bit creative with the .zip method of RDDs, as well as DataFrames, to get close to what numpy can do. It is a bit more verbose, but it works.

from pyspark.sql import Row

n = 100000

p1 = RandomRDDs.uniformRDD(sc, n).zip(RandomRDDs.uniformRDD(sc, n))
p2 = RandomRDDs.uniformRDD(sc, n).zip(RandomRDDs.uniformRDD(sc, n))

point_rdd = p1.zip(p2)\
    .map(lambda r: Row(x1=r[0][0], y1=r[0][1], x2=r[1][0], y2=r[1][1]))
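Depending on your Spark version, pyspark's mllib also exposes RandomRDDs.uniformVectorRDD, which generates the rows of a random matrix directly. A sketch, assuming the rows come back as NumPy-compatible arrays (check the docs of your Spark version):

from pyspark.mllib.random import RandomRDDs
import numpy as np

# 1,000,000 rows of 2 uniform random values each, generated distributedly
mat = RandomRDDs.uniformVectorRDD(sc, 1000000, 2)

# column sums via a distributed reduce over the row vectors
col_sums = mat.reduce(lambda a, b: np.add(a, b))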
Writing to an Excel File With Python I am doing some webscraping with BeautifulSoup and Selenium and I want to write my data to an excel file # coding: utf-8import requestsimport bs4from datetime import datetimeimport reimport osimport urllibimport urllib2from bs4 import BeautifulSoupfrom selenium import webdriverimport timeinitialpage = 'https://www.boxofficemojo.com/yearly/chart/?yr=2017&p=.htm'res = requests.get(initialpage, timeout=None)soup = bs4.BeautifulSoup(res.text, 'html.parser')pages = []pagelinks=soup.select('a[href^="/yearly/chart/?page"]')for i in range(int(len(pagelinks)/2)): pages.append(str(pagelinks[i])[9:-14]) pages[i]=pages[i].replace("amp;","") pages[i]= "https://www.boxofficemojo.com" + pages[i] pages[i]=pages[i][:-1]pages.insert(0, initialpage)date_dic = {}movie_links = []titles = []Domestic_Gross_Arr=[]Genre_Arr=[]Release_Date_Arr = []Theaters_Arr=[]Budget_Arr = []Views_Arr = []Edits_Arr = []Editors_Arr = []for i in range(int(len(pagelinks)/2 + 1)): movie_count=0; res1 = requests.get(pages[i]) souppage=bs4.BeautifulSoup(res1.text, 'html.parser') for j in souppage.select('tr > td > b > font > a'): link = j.get("href")[7:].split("&") str1 = "".join(link) final = "https://www.boxofficemojo.com/movies" + str1 if "/?id" in final: movie_links.append(final) movie_count += 1 number_of_theaters=souppage.find("tr", bgcolor="#dcdcdc") for k in range(movie_count): #print(number_of_theaters.next_sibling.contents[4].text) Theaters_Arr.append(number_of_theaters.next_sibling.contents[4].text) number_of_theaters=number_of_theaters.next_siblingk=0path = os.getcwd() path = path + '/movie_pictures'os.makedirs(path)os.chdir(path)while(k < 2): j = movie_links[k] try: res1 = requests.get(j) soup1 = bs4.BeautifulSoup(res1.text, 'html.parser') c = soup1.select('td[width="35%"]') d=soup1.select('div[class="mp_box_content"]') genre = soup1.select('td[valign="top"]')[5].select('b') image = soup1.select('img')[6].get('src') budget = soup1.select('tr > td > b') domestic = str(c[0].select('b'))[4:-5] release = soup1.nobr.a title = soup1.select('title')[0].getText()[:-25] print ("-----------------------------------------") print ("Title: " +title) titles.append(title) print ("Domestic Gross: " +domestic) Domestic_Gross_Arr.append(domestic) print ("Genre: "+genre[0].getText()) Genre_Arr.append(genre[0].getText()) print ("Release Date: " +release.contents[0]) Release_Date_Arr.append(release.contents[0]) print ("Production Budget: " +budget[5].getText()) Budget_Arr.append(budget[5].getText()) year1=str(release.contents[0])[-4:] a,b=str(release.contents[0]).split(",") month1, day1=a.split(" ") datez= year1 + month1 + day1 new_date= datetime.strptime(datez , "%Y%B%d") date_dic[title]=new_date with open('pic' + str(k) + '.png', 'wb') as handle: response = requests.get(image, stream=True) if not response.ok: print response for block in response.iter_content(1024): if not block: break handle.write(block) except: print("Error Occured, Page Or Data Not Available") k+=1def subtract_one_month(t): import datetime one_day = datetime.timedelta(days=1) one_month_earlier = t - one_day while one_month_earlier.month == t.month or one_month_earlier.day > t.day: one_month_earlier -= one_day year=str(one_month_earlier)[:4] day=str(one_month_earlier)[8:10] month=str(one_month_earlier)[5:7] newdate= year + "-" + month +"-" + day return newdatenumber_of_errors=0browser = webdriver.Chrome("/Users/Gokce/Downloads/chromedriver")browser.maximize_window() browser.implicitly_wait(20)for i in titles: try: release_date = 
date_dic[i] i = i.replace(' ', '_') i = i.replace("2017", "2017_film") #end = datetime.strptime(release_date, '%B %d, %Y') end_date = release_date.strftime('%Y-%m-%d') start_date = subtract_one_month(release_date) url = "https://tools.wmflabs.org/pageviews/?project=en.wikipedia.org&platform=all-access&agent=user&start="+ start_date +"&end="+ end_date + "&pages=" + i browser.get(url) page_views_count = browser.find_element_by_css_selector(" .summary-column--container .legend-block--pageviews .linear-legend--counts:first-child span.pull-right ") page_edits_count = browser.find_element_by_css_selector(" .summary-column--container .legend-block--revisions .linear-legend--counts:first-child span.pull-right ") page_editors_count = browser.find_element_by_css_selector(" .summary-column--container .legend-block--revisions .legend-block--body .linear-legend--counts:nth-child(2) span.pull-right ") print (i) print ("Number of Page Views: " +page_views_count.text) Views_Arr.append(page_views_count.text) print ("Number of Edits: " +page_edits_count.text) Edits_Arr.append(page_edits_count.text) print ("Number of Editors: " +page_editors_count.text) Editors_Arr.append(page_editors_count.text) except: print("Error Occured for this page: " + str(i)) number_of_errors += 1 Views_Arr.append(-1) Edits_Arr.append(-1) Editors_Arr.append(-1)time.sleep(5)browser.quit()import xlsxwriteros.chdir("/home")workbook = xlsxwriter.Workbook('WebScraping.xlsx')worksheet = workbook.add_worksheet()worksheet.write(0,0, "Hello")worksheet.write(0,1, 'Genre')worksheet.write(0,2, 'Production Budget')worksheet.write(0,3, 'Domestic Gross')worksheet.write(0,4, 'Release Date')worksheet.write(0,5, 'Number of Wikipedia Page Views')worksheet.write(0,6, 'Number of Wikipedia Edits')worksheet.write(0,7, 'Number of Wikipedia Editors')row=1for i in range(len(titles)): worksheet.write(row, 0, titles[i]) worksheet.write(row, 1, Genre_Arr[i]) worksheet.write(row, 2, Budget_Arr[i]) worksheet.write(row, 3, Domestic_Gross_Arr[i]) worksheet.write(row, 4, Release_Date_Arr[i]) worksheet.write(row, 5, Theaters_Arr[i]) worksheet.write(row, 6, Views_Arr[i]) worksheet.write(row, 7, Edits_Arr[i]) worksheet.write(row, 8, Editors_Arr[i]) row += 1workbook.close()The code works until import xlsxwriter, then I get this error:---------------------------------------------------------------------------IOError Traceback (most recent call last)<ipython-input-9-c99eea52d475> in <module>() 27 28 ---> 29 workbook.close()/Users/Gokce/anaconda2/lib/python2.7/site-packages/xlsxwriter/workbook.pyc in close(self) 309 if not self.fileclosed: 310 self.fileclosed = 1--> 311 self._store_workbook() 312 313 def set_size(self, width, height):/Users/Gokce/anaconda2/lib/python2.7/site-packages/xlsxwriter/workbook.pyc in _store_workbook(self) 638 639 xlsx_file = ZipFile(self.filename, "w", compression=ZIP_DEFLATED,--> 640 allowZip64=self.allow_zip64) 641 642 # Add XML sub-files to the Zip file with their Excel filename./Users/Gokce/anaconda2/lib/python2.7/zipfile.pyc in __init__(self, file, mode, compression, allowZip64) 754 modeDict = {'r' : 'rb', 'w': 'wb', 'a' : 'r+b'} 755 try:--> 756 self.fp = open(file, modeDict[mode]) 757 except IOError: 758 if mode == 'a':IOError: [Errno 45] Operation not supported: 'WebScraping.xlsx' What might be the problem? If if cut off the last part and run in a new IDLE with fake data, it works. But it does not work in the main IDLE. So the problem must be in the previous part I believe
The error is triggered when the code tries to write the file. Confirm that you have write permissions to that directory, and that the file doesn't already exist. It's unlikely that you have access to /home.
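A quick way around it (a sketch, assuming you just need any writable location) is to build the output path from your home directory instead of calling os.chdir("/home"):

import os
import xlsxwriter

# the user's home directory is normally writable
out_path = os.path.join(os.path.expanduser('~'), 'WebScraping.xlsx')
workbook = xlsxwriter.Workbook(out_path)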
unable to run print statements from loss function when calling model.fit in Keras. I have created a custom loss function:

def customLoss(true, pred):
    # do stuff
    # print(variables)
    return loss

Now I'm calling compile as

model.compile(optimizer='Adamax', loss=customLoss)

EDIT: I tried tf.Print and this is my result.

def customLoss(params):
    def lossFunc(true, pred):
        true = tf.Print(true, [true.shape], 'loss-func')
        # obviously this won't work because the tensors aren't the same shape;
        # however, this is what I want to do.
        # stuff
        return loss
    return lossFunc

model = Model(inputs=[inputs], outputs=[outputs])
parallel_model = multi_gpu_model(model, gpus=8)
parallel_model.compile(optimizer='Adam', loss=customLoss(params), metrics=[mean_iou])
history = parallel_model.fit(X_train, Y_train, validation_split=0.25, batch_size=32, verbose=1)

and the output is

Epoch 1/10
1159/1159 [==============================] - 75s 65ms/step - loss: 0.1051 - mean_iou: 0.4942 - val_loss: 0.0924 - val_mean_iou: 0.6933
Epoch 2/10
1152/1159 [============================>.] - ETA: 0s - loss: 0.0408 - mean_iou: 0.7608

The print statements still aren't printing. Am I missing something - are my inputs into tf.Print not proper?
It's not because Keras dumps buffers or does magic, it simply doesn't call them! The loss function is called once to construct the computation graph, and then the symbolic tensor that represents the loss value is returned. Tensorflow uses that to compute the loss, gradients etc. You might instead be interested in tf.Print, a null operation with the side effect of printing the arguments passed. Since tf.Print is part of the computation graph, it will be run during training as well. From the documentation:

Prints a list of tensors. This is an identity op (behaves like tf.identity) with the side effect of printing data when evaluating.
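Regarding the edit: two things are likely going wrong. First, true.shape is the static shape, which may be partially unknown inside a loss function; use tf.shape(true) to get the runtime shape as a tensor. Second, tf.Print writes to standard error, so when training from a notebook the output often appears in the terminal that launched the kernel rather than in the cell. A minimal sketch, with a placeholder squared-error loss standing in for the real computation:

import tensorflow as tf

def customLoss(params):
    def lossFunc(true, pred):
        # tf.shape() yields the dynamic shape as a tensor, which tf.Print can show
        true = tf.Print(true, [tf.shape(true), tf.shape(pred)],
                        message='loss-func shapes: ')
        # placeholder loss; substitute the real computation here
        return tf.reduce_mean(tf.square(true - pred))
    return lossFunc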
Issue when trying to login with Docusign API by official python lib on live account I have issue when trying to login with API on live account, while on sandbox all works great.When doing request on login, I must get on response data object login_accounts. Data in object looks like this(I've delete few symbols in password for sequrity reasons)``` {'api_password': 'ZQQ+oSUO1alRWlCapJ0=', 'login_accounts': [{'account_id': '4342454', 'account_id_guid': '6765dcc3-5dc6-4340-8240-9a53d3e728ab', 'base_url': 'https://demo.docusign.net/restapi/v2/accounts/4342454', 'email': '[email protected]', 'is_default': 'true', 'login_account_settings': None, 'login_user_settings': None, 'name': 'Moonshot Capital', 'site_description': '', 'user_id': '1c960342-458b-4b33-b7ae-68ff9817bbb6', 'user_name': 'Here some name'}]}```That was my sandbox data.But in live account I've get only empty object{'api_password': None, 'login_accounts': None}If that was an error with login, I sould get something like 'bad auth' error code or something like this, but everywhere is 'http200 ok'. In system logs, that I can download from docusign service, I get only one error 404 on image object because in profile there no image, all other requests have code 'http200'. I thought that problem may be in integrator key and i've create another one, but it gives me same empty object. Also all my integration was working in free trial period on live account, and stops working after swithcing to BasicAPI billing plan.I am using official python lib https://github.com/docusign/docusign-python-clientHere is my code sampledef setUp(): # setting local configuration api_client = docusign.ApiClient(BASE_URL) oauth_login_url = api_client.get_jwt_uri(integrator_key, redirect_uri, oauth_base_url) try: api_client.configure_jwt_authorization_flow(private_key_filename, oauth_base_url, integrator_key, api_username, 3600) docusign.configuration.api_client = api_client return api_client except: logger.exception('') print(("If you login for first time please follow the url and give the" " access for app.\n"), oauth_login_url)def docusign_login(api_client): auth_api = AuthenticationApi() try: login_info = auth_api.login(api_password='true', include_account_id_guid='true') assert login_info is not None assert len(login_info.login_accounts) > 0 login_accounts = login_info.login_accounts assert login_accounts[0].account_id is not None logger.info(login_info) base_url, _ = login_accounts[0].base_url.split('/v2') api_client.host = base_url docusign.configuration.api_client = api_client return login_accounts except ApiException: logger.exception('')def request_signature_for_template(client_id): api_client = setUp() try: login_accounts = docusign_login(api_client) envelopes_api = EnvelopesApi() envelope_summary = envelopes_api.create_envelope( login_accounts[0].account_id, envelope_definition=envelope_definition)
The AuthenticationApi.login() method is intended to be used with Legacy Header authentication. For JWT/OAuth you would want to use the Get UserInfo method. Unfortunately, that method hasn't yet been implemented in the Python client. You'll need to make that call manually into the SDK is updated to include that functionality. Here's an example of that using the Requests package. oauth_base_url = "account-d.docusign.com" #use account.docusign.com for Prodapi_client.configure_jwt_authorization_flow(private_key, oauth_base_url, integrator_key, user_id, 3600)docusign.configuration.api_client = api_client# using Requests to manually call userinfo endpointuser_info_url = "https://" + oauth_base_url + "/oauth/userinfo"request_auth_header = {'Authorization' : api_client.default_headers['Authorization']}r = requests.get(user_info_url, headers=request_auth_header)# parse response to pull default accountfor a in r.json().get('accounts'): if a.get('is_default'): account_id = a.get('account_id') new_base_url = a.get('base_uri') + "/restapi"api_client.host = new_base_urldocusign.configuration.api_client = api_client
How to tune scipy interpolate function? I'm not sure why it's doing such a crappy job. Here's the set of 189 data points I was hoping to get smoothed. Why is it lagging so much?y = datax = range(len(y))tck, _ = splprep([x,y])x2, y2 = splev(np.linspace(0,1,len(y)), tck)plt.plot(y, 'b')plt.plot(y2, 'g')plt.show()
Smoothing is a fairly common problem in time series analysis. Have you tried out exponential smoothing? The package StatsModels has a lot of callable smoothing functions.from statsmodels.tsa.api import ExponentialSmoothing, SimpleExpSmoothing, Holty_hat_avg = test.copy()fit2 = SimpleExpSmoothing(np.asarray(train['Count'])).fit(smoothing_level=0.6,optimized=False)y_hat_avg['SES'] = fit2.forecast(len(test))plt.figure(figsize=(16,8))plt.plot(train['Count'], label='Train')plt.plot(test['Count'], label='Test')plt.plot(y_hat_avg['SES'], label='SES')plt.legend(loc='best')plt.show()
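If you want to stay with scipy instead, note that the "lag" usually comes from splprep's default smoothing: unless you pass the smoothing factor s, it fits an approximating spline rather than an interpolating one. s=0 forces the curve through every point; larger values smooth more aggressively. A sketch reusing the x and y from the question:

import numpy as np
from scipy.interpolate import splprep, splev

# s=0 interpolates exactly; increase s to trade closeness for smoothness
tck, _ = splprep([x, y], s=0)
x2, y2 = splev(np.linspace(0, 1, len(y)), tck)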
Read n tables in csv file to separate pandas DataFrames I have a single .csv file with four tables, each a different financial statement four Southwest Airlines from 2001-1986. I know I could separate each table into separate files, but they are initially downloaded as one.I would like to read each table to its own pandas DataFrame for analysis.Here is a subset of the data:Balance Sheet Report Date 12/31/2001 12/31/2000 12/31/1999 12/31/1998Cash & cash equivalents 2279861 522995 418819 378511Short-term investments - - - -Accounts & other receivables 71283 138070 73448 88799Inventories of parts... 70561 80564 65152 50035Income Statement Report Date 12/31/2001 12/31/2000 12/31/1999 12/31/1998Passenger revenues 5378702 5467965 4499360 3963781Freight revenues 91270 110742 102990 98500Charter & other - - - -Special revenue adjustment - - - -Statement of Retained Earnings Report Date 12/31/2001 12/31/2000 12/31/1999 12/31/1998Previous ret earn... 2902007 2385854 2044975 1632115Cumulative effect of.. - - - -Three-for-two stock split 117885 - 78076 -Issuance of common.. 52753 75952 45134 10184The tables each have 17 columns, the first the line item description, but varying numbers of rows i.e. the balance sheet is 100 rows whereas the statement of cash flows is 65What I've Doneimport pandas as pdimport numpy as np# Lines that separate the various financial statementslines_to_skip = [0, 102, 103, 158, 159, 169, 170]with open('LUV.csv', 'r') as file: fin_statements = pd.read_csv(file, skiprows=lines_to_skip)balance_sheet = fin_statements[0:100]I have seen posts with a similar objective noting to utilize nrows and skiprows. I utilized skiprows to read the entire file, then I created the individual financial statement by indexing.I am looking for comments and cconstructive criticism for creating a dataframe for each respective table in better Pythonic style and best practices.
What you want to do is far beyond what read_csv can do. In fact your input file structure can be modeled as:

REPEAT:
    Dataframe name
    Header line
    REPEAT:
        Data line
    BLANK LINE OR END OF FILE

IMHO, the simplest way is to parse the file by hand line by line, feeding a temporary csv file per dataframe, then loading the dataframe. Code could be:

import os
import tempfile
import pandas as pd

df = {}  # dictionary of dataframes

def process(tmp, df_name):
    '''Process the temporary file corresponding to one dataframe'''
    # print("Process", df_name, tmp.name)  # uncomment for debugging
    if tmp is not None:
        tmp.close()
        df[df_name] = pd.read_csv(tmp.name)
        os.remove(tmp.name)  # do not forget to remove the temp file

with open('LUV.csv') as file:
    df_name = "NONAME"  # should never end up in the resulting dict...
    tmp = None
    for line in file:
        # print(line)  # uncomment for debugging
        if len(line.strip()) == 0:
            process(tmp, df_name)  # close temp file on empty line and process it
            tmp = None
        elif tmp is None:
            df_name = line.strip()  # a new part: store the name
            tmp = tempfile.NamedTemporaryFile("w", delete=False)
        else:
            tmp.write(line)  # just feed the temp file
    # process the last part if no empty line was present...
    process(tmp, df_name)

This is not really efficient, because each line is written to a temporary file and then read again, but it is simple and robust. A possible improvement would be to initially parse the parts with the csv module (it can parse a stream, while pandas wants files). The downside is that the csv module only parses into strings and you lose the automatic conversion to numbers that pandas does. My opinion is that it is worth it only if the file is large and the full operation will have to be repeated.
Beautiful Soup crashes upon special chars like """ and "<" I'm trying to scrape an atom based RSS feed using beautiful soup, but it's proving difficult. Capturing the data goes just fine until an <item> comes up that breaks the code and crashes the script. Such <item>s consistently have tags (firefox marks them in orange) like "& lt;" or "& quot;", while s without them work fine. I've tried a bunch of stuff like BeautifulStoneSoup, stripping special chars with regex, and setting the "xml" argument, but nothing works and often they just throw a warning about being deprecated in BS4. Why do these characters appear and how can I deal with them effectively?Here's a page I'm trying to scrape: http://www.thestar.com/feeds.articles.news.gta.rssAnd here's my code:news_url = "http://www.thestar.com/feeds.articles.news.gta.rss" # Toronto Star RSS Feedtry: news_rss = urllib2.urlopen(news_url) news = news_rss.read() news_rss.close() soup = BeautifulSoup(news)except: return "error"titles = soup.findAll('title')links = soup.findAll('link')for link in links: link = link.contents # I want the url without the <link> tagsnews_stuff = []for item in titles: if item.text == "TORONTO STAR | NEWS | GTA": # These have <title> tags and I don't want them; just skip 'em. pass else: news_stuff.append((item.text, links[i])) # Here's a news story. Grab it.i = 0for thing in news_stuff: print '<a href="' print thing[1] print '"target="_blank">' print thing[0] print '</a><br/>' i += 1
Not sure which problem you are talking about, but I got this error while running you code:UnicodeEncodeError: 'ascii' codec can't encode character u'\u2018' in position 54: ordinal not in range(128)To fix it I just added encoding:for thing in news_stuff: print '<a href="' print thing[1] print '"target="_blank">' print thing[0].encode("utf-8") print '</a><br/>' i += 1After that script executes without any errors.
Python requests, can't log into a site I am trying to use Python (3.2) requests to login to a site and navigate protected content on subsequent pages. However, when I login it seems to just leave me at the original login page (not navigating to the success page), and the subsequent page call is only showing the unprotected content. Can you please help me identify the bug in my code:import requestsimport sysclass login: def __init__(self): self.payload = None self.c = None def start(self,username,password,loginpage): self.payload = {'login':username,'password':password} self.loginpage = loginpage def login(self,url): self.c = requests.session() response = self.c.post(self.loginpage,data=self.payload) if response.status_code == 200: request = self.c.get(url) print(request.text)if __name__ == '__main__': username = 'username' password = "password" loginpage = "https://www.clubfreetime.com/login/" nextpage = "http://www.clubfreetime.com/new-york-city-nyc/free-theater-performances-shows" login = login() login.start(username,password,loginpage) login.login(nextpage)
I figured out the problem. I needed to change self.payload to:self.payload = {'login':username,'password':password,'submit_login':'Login'}
How can I use python to convert multiple date columns into one date column and sum up their values (example below)? Following is the table I have:

Market  05-20  06-20  07-20  08-20
HK          5      5      5      5
US          2      2      2      2
HK          3      3      3      3
UK          7      7      7      7
UK          2      2      2      2

Following is what I want to make of it:

Market   Date  Quantity
HK      05-20         8
HK      06-20         8
HK      07-20         8
HK      08-20         8
US      05-20         2
US      06-20         2
US      07-20         2
US      08-20         2
UK      05-20         9
UK      06-20         9
UK      07-20         9
UK      08-20         9
First you can use

df = df.groupby("Market").sum()

Result:

        05-20  06-20  07-20  08-20
Market
HK          8      8      8      8
UK          9      9      9      9
US          2      2      2      2

Next you can

df = df.stack()

Result:

Market
HK      05-20    8
        06-20    8
        07-20    8
        08-20    8
UK      05-20    9
        06-20    9
        07-20    9
        08-20    9
US      05-20    2
        06-20    2
        07-20    2
        08-20    2

Now you only have to reset_index() and add column names:

df = df.reset_index()
df.columns = ['Market', 'Date', 'Quantity']

Result:

   Market   Date  Quantity
0      HK  05-20         8
1      HK  06-20         8
2      HK  07-20         8
3      HK  08-20         8
4      UK  05-20         9
5      UK  06-20         9
6      UK  07-20         9
7      UK  08-20         9
8      US  05-20         2
9      US  06-20         2
10     US  07-20         2
11     US  08-20         2

Full example code. I use io.StringIO only to simulate a file.

text = '''Market 05-20 06-20 07-20 08-20
HK 5 5 5 5
US 2 2 2 2
HK 3 3 3 3
UK 7 7 7 7
UK 2 2 2 2
'''

import pandas as pd
import io

#df = pd.read_csv("filename.csv")
df = pd.read_csv(io.StringIO(text), sep="\s+")

df = df.groupby("Market").sum()
print(df)

df = df.stack()
print(df)

df = df.reset_index()
df.columns = ['Market', 'Date', 'Quantity']
print(df)
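An equivalent shortcut (a sketch, not part of the original answer) is to melt the date columns first and then aggregate the duplicated markets; df here is the original unaggregated table:

import pandas as pd

df2 = (df.melt(id_vars='Market', var_name='Date', value_name='Quantity')
         .groupby(['Market', 'Date'], as_index=False)
         .sum())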
Why am I unable to scrape values from a Hidden tooltip of a Highchart using selenium python? I've particularly asked a couple of questions on the same topic before asking it one final time.To begin with, I am scraping values from https://www.similarweb.com/website/zalando.de/#overviewI am trying to scrape the contents from a graph. Take a look at this highchart graph.I want to scrape value like its value as : 27,100,000. from the hidden tooltip. At present I am able to scrape the Months as [Nov '20,....Apr '21], However, I am unable to scrape its values.Here's my complete code:def website_monitoring(): websites = ['https://www.similarweb.com/website/zalando.de/#overview'] options = webdriver.ChromeOptions() options.add_argument('start-maximized') options.add_experimental_option("excludeSwitches", ["enable-automation"]) options.add_experimental_option("useAutomationExtension", False) browser = webdriver.Chrome(ChromeDriverManager().install(), options=options) for crawler in websites: browser.get(crawler) wait = WebDriverWait(browser, 10) website_names = browser.find_element_by_xpath('/html/body/div[1]/main/div/div/section[1]/div[1]/div/div[1]/a').get_attribute("href") total_visits = browser.find_element_by_xpath('/html/body/div[1]/main/div/div/div[2]/div[2]/div/div[3]/div/div/div/div[2]/div/span[2]/span[1]').text tooltip = wait.until(EC.presence_of_element_located((By.XPATH, "//*[local-name() = 'svg']/*[local-name()='g'][8]/*[local-name()='text']"))) ActionChains(browser).move_to_element(tooltip).perform() month_value = wait.until(EC.presence_of_all_elements_located((By.XPATH, "//*[local-name() = 'svg']/*[local-name()='g' and @class='highcharts-tooltip']/*[local-name()='text']"))) values = [elem.text for elem in month_value] print('VALUES-->', values) months = browser.find_elements(By.XPATH, "//*[local-name() = 'svg']/*[local-name()='g'][6]/*/*") for date in months: print(date.text) # printing all scraped data print('Website Names:', website_names) print('Total visits:', total_visits)if __name__ == "__main__": website_monitoring()The output that I presently get:VALUES--> ['']Nov '20Dec '20Jan '21Feb '21Mar '21Apr '21The output that I want:VALUES--> ['27,100,000', .....]Nov '20Dec '20Jan '21Feb '21Mar '21Apr '21I am stuck on this issue since 2 days and nothing sofar has worked upon trying. Please, Please help!EDIT: I also tried a method to check if a csv file exists upon inspecting the page and then going to the networks tab as Highcharts graph usually store a csv file but I Guess the site has blocked it. Is this possible by using a json or lxml?
I have a solution that works. I took the time to identify a way to hover over each of the points, print the data, and move to the next. This could be way cleaner, but it works. Here's my python file:from selenium import webdriverimport chromedriver_autoinstallerfrom selenium.webdriver.support.ui import WebDriverWaitfrom selenium.webdriver.support import expected_conditions as ECfrom selenium.webdriver.common.by import Byfrom selenium.webdriver.common.action_chains import ActionChainsimport timedef website_monitoring(): chromedriver_autoinstaller.install() websites = ['https://www.similarweb.com/website/zalando.de/#overview'] options = webdriver.ChromeOptions() options.add_argument('start-maximized') options.add_experimental_option("excludeSwitches", ["enable-automation"]) options.add_experimental_option("useAutomationExtension", False) browser = webdriver.Chrome(options=options) def stringToPrint(): return browser.find_element_by_css_selector('g.highcharts-tooltip > text > tspan:nth-child(1)').text + ': ' + browser.find_element_by_css_selector('tspan:nth-child(3)').text for crawler in websites: browser.get(crawler) wait = WebDriverWait(browser, 10) website_names = browser.find_element_by_xpath('/html/body/div[1]/main/div/div/section[1]/div[1]/div/div[1]/a').get_attribute("href") total_visits = browser.find_element_by_xpath('/html/body/div[1]/main/div/div/div[2]/div[2]/div/div[3]/div/div/div/div[2]/div/span[2]/span[1]').text highchartElement = wait.until(EC.presence_of_element_located((By.CSS_SELECTOR, 'g:nth-child(8) > path:nth-child(3)'))) ActionChains(browser).move_to_element(highchartElement).move_by_offset(-300,0).perform() print(stringToPrint()) ActionChains(browser).move_by_offset(120,-10).perform() print(stringToPrint()) ActionChains(browser).move_by_offset(120,0).perform() print(stringToPrint()) ActionChains(browser).move_by_offset(120,-10).perform() print(stringToPrint()) ActionChains(browser).move_by_offset(120,10).perform() print(stringToPrint()) ActionChains(browser).move_by_offset(120,0).perform() print(stringToPrint()) # printing all scraped data print('Website Names:', website_names) print('Total visits:', total_visits)if __name__ == "__main__": website_monitoring()And here's the console output:November, 2020: 29,900,000December, 2020: 27,100,000January, 2020: 26,900,000February, 2021: 22,600,000March, 2021: 24,700,000April, 2021: 26,200,000Website Names: http://zalando.de/Total visits: 19.94M
How to multiply tuple values in array? I've tried everything to multiply the values in the tuple, but I get the error: TypeError: can't multiply sequence by non-int of type 'tuple' .from itertools import productarr = 0val_x = []val_y = []n = int(input('n = '))def multiply(product, *nums): factor = product for num in nums: factor *= num return factorif __name__ == "__main__": for x in range(n ** 2): for y in range(n ** 2): if y == 0: arr += 1 if arr == 1: val_x.append(bin(x)[2:].zfill(n)) val_y.append(bin(x)[2:].zfill(n)) arr = 0 res = list(product(val_x, val_y)) print(f'Input x,y = {res}') print(f'Output z = {multiply(*res)}')The result I get is an array which contains tuples, for example [(00, 00), (00, 01)]. How do I multiply them so I get result of [(0000), (0000)] etc. When I run the script, for the last print, I get the error.
As your description in comments, you need to change multiply().In the following implementation, multiply() receives a list of tuples and returns a list by multiplying all elements of each tuple in input list. It is required to cast str to int before multiplication. The output of multiply() is a list of integers, you can convert integers to binary representation if needed.from itertools import productarr = 0val_x = []val_y = []n = int(input('n = '))def multiply(res): output = [] for i in res: output.append(int(i[0], 2)*int(i[1], 2)) return outputif __name__ == "__main__": for x in range(n ** 2): for y in range(n ** 2): if y == 0: arr += 1 if arr == 1: val_x.append(bin(x)[2:].zfill(n)) val_y.append(bin(x)[2:].zfill(n)) arr = 0 res = list(product(val_x, val_y)) print(f'Input x,y = {res}') print(f'Output z = {multiply(res)}')sample for n=2:n = 2Input x,y = [('00', '00'), ('00', '01'), ('00', '10'), ('00', '11'), ('01', '00'), ('01', '01'), ('01', '10'), ('01', '11'), ('10', '00'), ('10', '01'), ('10', '10'), ('10', '11'), ('11', '00'), ('11', '01'), ('11', '10'), ('11', '11')]Output z = [0, 0, 0, 0, 0, 1, 2, 3, 0, 2, 4, 6, 0, 3, 6, 9]
'utf-8' codec can't decode byte 0xff in position 0: invalid start byte / unexpected end of data I am trying to pass some functions from C++ to Python using the Qt library (Pyside2 in python). At the moment everything works correctly passing the code from one side to the other and adapting it to Python, but when I start treating images errors happen.The only thing that I achieve is to correctly parse the shadows of the images, however, the inner part of the image (which would correspond to the rest of the colors is hollow).I should get thisbut I get this insteadAnd every time I treat those bytes, the program crashes with the following errors.'utf-8' codec can't decode byte 0x87 in position 0: invalid start byte'utf-8' codec can't decode byte 0xba in position 0: invalid start byte'utf-8' codec can't decode byte 0xcb in position 0: unexpected end of dataDebugging the program I discovered that only the bytes that correspond to the colors crashes the program, making it impossible to know the RGBA of the pixel. The problem must be with the way I get GB and AR in Python, since the original C ++ program never had this problem in any of the pixels.I am quite a newbie dealing with bytes and bytearrays. Do you think that I can be doing wrong to get GB and AR or what do you think?Thank you all!This is the original function in C++:QImage ImageConverter::convertGBAR4444(QByteArray &array, int width, int height, int startByte)/// GBAR = ARGB (endianness){ qDebug() << "Opened GBAR4444 image."; QImage img(width, height, QImage::Format_ARGB32); img.fill(Qt::transparent); for (int y = 0; y < height; ++y) { for (int x = 0; x < width; ++x) { uchar gb = array.at(startByte + y * 2 * width + x * 2); uchar ar = array.at(startByte + y * 2 * width + x * 2 + 1); uchar g = gb >> 4; uchar b = gb & 0xF; uchar a = ar >> 4; uchar r = ar & 0xF; img.setPixel(x, y, qRgba(r * 0x11, g * 0x11, b * 0x11, a * 0x11)); } } return img;}And this is my code for the Python version of the project:from PySide2 import QtGuifrom PySide2.QtGui import QImage, qRgba, qRgbdef convertGBAR4444(array, width, height, startByte = 0): y = 0 img = QImage(width, height, QImage.Format_ARGB32) img.fill(QtGui.QColor(0,0,0,0)) while (y < height): x = 0 while (x < width): gb = ord(array.at(startByte + y * 2 * width + x * 2)) ar = ord(array.at(startByte + y * 2 * width + x * 2 + 1)) g = gb >> 4 b = gb & 0xF a = ar >> 4 r = ar & 0xF img.setPixel(x, y, qRgba(r * 0x11, g * 0x11, b * 0x11, a * 0x11)) x += 1 y += 1 return imgIf you want to try it yourself this are the data you will need:convertGBAR4444(data, 32, 32, 13)data = b'\x01 \x00 \x00\x10\x00\x10\x00\r\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\ x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00 \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0 0\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x11U"\x873\xbaD\xcbD\xcbD\xcb3\xa8"\x87\x00B\x00\x10\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\ x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"\x87f\xff\xca\xff\xdb\xff\xca\xff\xeb\xff\xeb\xff\xdb\xff\xca\xff\x 
b9\xffU\xffD\xdb\x00B\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x11D\xdcU\xff\xa9\xff\xfc \xff\xfc\xff\xfc\xff\xfc\xff\xfc\xff\xfc\xff\xfc\xff\xec\xffv\xffU\xffU\xfe"\x86\x00 \x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x 00\x00\x00\x00\x003\xbaU\xffU\xffU\xff\xb9\xff\xfc\xff\xb9\xff\x98\xffe\xffU\xffU\xffU\xffU\xffU\xffU\xffD\xfe\x11\x95\x000\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00 \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"fU\xff\xba\xff\xfc\xff\x98\xffU\xffU\xffU\xffD\xfeD\xfdD\xfdD\xfdU\xffU\xffU\xffU\xffD\xfe3\xfc\x00r\x00 \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\ x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00D\xdc\xa9\xff\xfc\xff\xfc\xff\xa8\xffU\xffD\xfe3\xfd3\xfc3\xfc3\xfc3\xfc3\xfcD\xfdU\xffU\xffU\xffD\xfd"\xc7\x00@\x00\x10\x00\x00\x00 \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"U\xff\xec\xff\xfc\xff\xfc\xffv\xffD\xfe3\xfc3\xfb"\xd7"\xd83\xfb3\xfc3\xfc3\xfcU\xffU\xffU\xffD\xfe3\xea\x00`\x00 \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"vU\xff\xa9\xff\xfc\xff\xa9\xffU\xffD\xfd3\xea\x00\x90\x00\x80\x00p\x00p3\xfb3\xfc3\xfcU\xffU\xffU\xffD\xfe 3\xea\x00p\x000\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x11TU\xfeU\xffU\xffU\xffD\xfeU\xff"\xc6\x00p\x00P\x000\x00@3\xec3\xfc3\xfcU\xffU\xffU\xffD\ xfe3\xfb\x00\x81\x000\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10\x00RD\xfeU\xffU\xffU\xff3\xd9\x00\x81\x00P\x00 \x00 3\xbaD\xfd3\xfcD\xfeU\xff U\xffU\xffD\xfd"\xd8\x00p\x000\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00 \x00@\x00`\x00p\x00p\x00`\x00P\x000\x00!D\xdcD\xfe3\xfcD\xfeU\xf fU\xffU\xffD\xfe3\xfb\x00\xa1\x00`\x00 \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10\x00 \x000\x000\x000\x00 \x11CU\xfeU\xffD\xf eU\xffU\xffU\xffU\xffD\xfe3\xfb\x11\xc5\x00p\x00@\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10\x00\x10\x 00\x10\x11CU\xfeU\xffU\xffU\xffU\xffU\xffU\xffD\xfd3\xfc\x11\xb3\x00p\x00P\x00 \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00 \x00\x00\x00\x00\x00\x00\x00\x00\x11D\xedU\xffU\xffU\xffU\xffU\xffU\xff3\xfc3\xfb\x11\xb3\x00p\x00P\x00 \x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0 0\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"fU\xffU\xffU\xffU\xffU\xffU\xffD\xfe3\xfb\x00\xb2\x00p\x00P\x00 \x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\ x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00D\xccU\xffU\xffU\xffU\xffU\xffU\xff3\xfa"\xe9\x00\x80\x00P\x00 \x00\x10\x00\x00\x00\x00\x00\x00\ x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00D\xeeU\xffU\xffU\xffU\xffU\xffD\xfd3\xfb\x11\xc5\x00`\x000\x 00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x003\xcaD\xfc3\xfc3\xfc3 \xfc3\xfc3\xfb"\xe8\x00\x81\x00P\x00 
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x 00\x00\x00\x00\x00\x000\x00a\x00\x81\x00\x91\x00\x91\x00\x91\x00\x91\x00\x81\x00P\x000\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\ x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x11C"\x863\xca3\xda"\xc8\x11\x93\x00r\x00P\x000\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00 \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10U\xff\xa9\xff\xa8\xffU\xffU\xffD\xfdD\xfd\x00@\x00 \x00\x10\x00\x00 \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x003\xcb\x98\xff\xfc\xff\xeb\x ffU\xffU\xffD\xfe3\xfc\x11\xa5\x000\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00 \x00\x00\x00\x00\x00\x00U\xffe\xff\xb9\xffv\xffU\xffU\xffD\xfe3\xfc3\xea\x00P\x00 \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\ x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10U\xffU\xffU\xffU\xffU\xffU\xff3\xfc3\xfc3\xfb\x00`\x000\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\ x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x103\xba3\xfc3\xfc3\xfc3\xfc3\xfc3\xfc3\xfc"\xd7\x00`\x000\x00\x00\x00\x00\x00\x00\x00\x00\ x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0003\xfc3\xfc3\xfc3\xfc3\xfc3\xfc3\xfc\x00\x90\x00 P\x00 \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10\x000 3\xda3\xfc3\xfc3\xfc3\xea\x00\x90\x00`\x000\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00 \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10\x000\x00P\x00p\x00p\x00p\x00`\x00@\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00 \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10\x00 \x000\x000\x000\x00 \x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x 00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10\x00\x10\x00\ x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
The code you've posted doesn't work because there are several errors, but none that would cause the error message you're observing. Here's the code with those errors fixed. This should work (though it doesn't work with the data you provided, since that is truncated):

def convertGBAR4444(array, width, height, startByte=0):
    img = QImage(width, height, QImage.Format_ARGB32)
    img.fill(QtGui.QColor(0, 0, 0, 0))
    for y in range(height):
        for x in range(width):
            gb = array[startByte + y * 2 * width + x * 2]
            ar = array[startByte + y * 2 * width + x * 2 + 1]
            #print(gb, ar)
            g = gb >> 4
            b = gb & 0xF
            a = ar >> 4
            r = ar & 0xF
            img.setPixel(x, y, qRgba(r * 0x11, g * 0x11, b * 0x11, a * 0x11))
    return img
server doesn't send data to clients I have this piece of code for server to handle clients. it properly receive data but when i want to send received data to clients nothing happens.serverimport socketfrom _thread import *class GameServer: def __init__(self): # Game parameters board = [None] * 9 turn = 1 # TCP parameters specifying self.tcp_ip = socket.gethostname() self.tcp_port = 9999 self.buffer_size = 2048 self.s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) try: self.s.bind((self.tcp_ip, self.tcp_port)) except: print("socket error, Please try again! ") self.s.listen(5) print('Waiting for a connection...') def messaging(self, conn): while True: data = conn.recv(self.buffer_size) if not data: break print("This data from client:", data) conn.send(data) def thread_run(self): while True: conn, addr = self.s.accept() print('connected to: ' + addr[0] + " : " + str(addr[1])) start_new_thread(self.messaging, (conn,))def main(): gameserver = GameServer() gameserver.thread_run()if __name__ == '__main__': main()'I want to if data received completely send to clients by retrieve the address of sender and send it to other clients by means of conn.send() but seems there is no way to do this with 'send()' method.The piece of client side code'def receive_parser(self): global turn rcv_data = self.s.recv(4096) rcv_data.decode() if rcv_data[:2] == 'c2': message = rcv_data[2:] if message[:3] == 'trn': temp = message[3] if temp == 2: turn = -1 elif temp ==1: turn = 1 elif message[:3] == 'num': self.set_text(message[3]) elif message[:3] == 'txt': self.plainTextEdit_4.appendPlainText('client1: ' + message[3:]) else: print(rcv_data)'the receiver method does not receive any data.
I modified your code a little(as I have python 2.7) and conn.send() seems to work fine. You can also try conn.sendall(). Here is the code I ran:Server code:import socketfrom thread import *class GameServer: def __init__(self): # Game parameters board = [None] * 9 turn = 1 # TCP parameters specifying self.tcp_ip = "127.0.0.1"#socket.gethostname() self.tcp_port = 9999 self.buffer_size = 2048 self.s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) try: self.s.bind((self.tcp_ip, self.tcp_port)) except: print("socket error, Please try again! ") self.s.listen(5) print('Waiting for a connection...') def messaging(self, conn): while True: data = conn.recv(self.buffer_size) if not data: break print("This data from client:", data) conn.send(data) def thread_run(self): while True: conn, addr = self.s.accept() print('connected to: ' + addr[0] + " : " + str(addr[1])) start_new_thread(self.messaging, (conn,))def main(): gameserver = GameServer() gameserver.thread_run()main()Client code:import sockets=socket.socket(socket.AF_INET, socket.SOCK_STREAM)s.connect(("127.0.0.1", 9999))def receive_parser(): #global turn s.sendall("hello world") rcv_data = s.recv(4096) # rcv_data.decode() # if rcv_data[:2] == 'c2': # message = rcv_data[2:] # if message[:3] == 'trn': # temp = message[3] # if temp == 2: # turn = -1 # elif temp ==1: # turn = 1 # elif message[:3] == 'num': # self.set_text(message[3]) # elif message[:3] == 'txt': # self.plainTextEdit_4.appendPlainText('client1: ' + message[3:]) # else: print(rcv_data)receive_parser()
Invert alternation regex I have an alternation regex that I want to invert but can't seem to get it working, it looks like this:( |\w+-\w+| \+\w+|\w)which will extract all special characters except for - in the middle of a word or + in front of a word. The problem is that I want to remove everything that is not covered by this regex but the simple solution of putting ?! in front of this doesn't work.sample input: -xxx xxx- xx-xx +xxx xxx+ xx+xxdesired output: xxx xxx xx-xx +xxx xxx xxxxThanks for the help,Mattias
Your question is a bit unclear, are you looking for this?a = "abc def,ghi remove - this keep-that foo + bar +keep!"import reprint re.sub(r'[^\w\s+-]|(?<!\w)-(?!\w)|\+(?!\w)', '', a)#abc defghi remove this keep-that foo bar +keepThe more accurate regexp:[^\w\s+-]|^-|-$|\+$|(?<=\W)-|-(?=\W)|\+(?=\W)|(?<=\w)\+
Creating a New DataFrame Column by Using a Comparison Operator. I have a DataFrame that looks like something similar to this:

    0
0   3
1  11
2   7
3  15

And I want to add a column using two comparison operators. Something like this:

df[1] = np.where(df[1]<= 10,1 & df[1]>10,0)

I want my return to look like this:

    0  1
0   3  1
1  11  0
2   7  1
3  15  0

But, I get this error message:

TypeError: cannot compare a dtyped [float64] array with a scalar of type [bool]

Any help would be appreciated!
Setup

df = pd.DataFrame({'0': {0: 3, 1: 11, 2: 7, 3: 15}})

Out[1292]:
    0
0   3
1  11
2   7
3  15

Solution

# compare df['0'] to 10, convert the results to int and assign to df['1']
df['1'] = (df['0'] < 10).astype(int)
df

Out[1287]:
    0  1
0   3  1
1  11  0
2   7  1
3  15  0
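For reference, the original np.where attempt fails because the arguments are tangled: np.where expects (condition, value_if_true, value_if_false), not a mix of & and comparisons. The equivalent call would be:

import numpy as np

# condition first, then the value for True rows, then the value for False rows
df['1'] = np.where(df['0'] < 10, 1, 0)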
Return boolean in for-loop evaluating multiple lists I'm attempting to iterate over multiple text articles, comparing whether these articles have keywords in 2 disparate lists. If the article has a keyword from both lists, then it should return 'true.' If an article only has a keyword from one list, then it should be 'false'. Note: I'm breaking down a larger for-loop into smaller bits to see if I can get it to work, which is why I'm not splitting this into 2 for loops which would check each list and return a '1' for each and then subsetting out anything less than a '2'...which still may be the way to go even if it's a large dataset?Example of Data:Data:Text result The co-worker ate all of the candy. FalseBluejays love peanuts. FalseWesties will eat avocado, even figs. TrueHere is my code, but I'm struggling with my for loop. def z(etext):words = ['candy', 'chocolate', 'mints', 'figs', 'avocado']words2 = ['eat', 'slurp', 'chew', 'digest']for keywords in words and words2: return Truedf['result'] = df['Keyterm'].apply(z)This code returns 'true' for every row of my dataframe, which is not correct. Each row has a list of text in it. EDIT: The solution: def z(etext): words = ['candy', 'chocolate', 'mints', 'figs', 'avocado'] words2 = ['eat', 'slurp', 'chew', 'digest'] for keyword in words: index = etext.find(keyword) if index != -1: for anotherword in words2: index2 = etext.find(anotherword) if index2 != -1: return Truedf['result'] = df['Text'].apply(z)
What about "Westies will eat avocado, even figs." with keyterms [eat, avocado, figs], which has multiple keyterms? Do you want to check each one of them, i.e. return True when a keyterm is present in both lists? Check if the solution below works for you:

Text = ["The co-worker ate all of the candy.", "Bluejays love peanuts.", "Westies will eat avocado, even figs."]
Keyterm = [["candy"], [], ["eat", "avocado", "figs"]]
data = pd.DataFrame({'Text': Text, 'Keyterm': Keyterm})

words = ['candy', 'chocolate', 'mints', 'figs', 'avocado']
words2 = ['eat', 'slurp', 'chew', 'digest', 'candy', 'figs']

def checkList(word, lists):
    if word in lists:
        return True
    else:
        return False

def z(etext):
    res = []
    for keyword in etext:
        ############# Using function checkList here ##############
        if checkList(keyword, words) and checkList(keyword, words2):
            res.append(True)
        else:
            res.append(False)
    return res

data['result'] = data['Keyterm'].apply(z)
Pip install for warrant fails I tried installing warrant using pip and got the following error:Command "c:\...\venv\scripts\python.exe -u -c "import setuptools, tokenize;__file__='C:\\...\\AppData\\Local\\Temp\\1\\pip-install-lahy2d9f\\pycryptodome\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\...\AppData\Local\Temp\1\pip-record-2t9higml\install-record.txt --single-version-externally-managed --compile --install-headers c:\\...\venv\include\site\python3.6\pycryptodome" failed with error code 1 in C:\\...\AppData\Local\Temp\1\pip-install-lahy2d9f\pycryptodome\Has anyone else faced this issue before?python version: 3.6pip version: 19.0.1
I had the same issue when I ran pip3 install warrant. I fixed the issue by installing a C compiler. Try installing Visual Studio build tools which provides a bunch of compilers.Link to Visual Studio build tools download
JSON file loaded as one row, one column using pandas read_json; expecting a full dataframe I was provided with a JSON file which looks something like below when opened with Atom:["[{\"column1\":value1,\"column2\":value2,\"column3\":value3 ...I tried loading it in Jupyter with pandas read_json as such:data = pd.read_json('filename.json', orient = 'records')And when I print data.head(), it shows the result below:screenshot of resultsI have also tried the following:import jsonwith open('filename.json', 'r') as file: data = json.load(file)When I check with type(data) I see that it is a list. When I check with data[0][1], it returns me { i.e. it seems that the characters in the file has been loaded as a single element in the list?Just wondering if I am missing anything? I am expecting the JSON file to be loaded as a dataframe so that I can analyze the data inside. Appreciate any guidance and advice. Thanks in advance!
Since head() only shows a single entry, it looks like the whole file was parsed as one value, so the outer brackets are not needed. I would read your file as a string and change the string to something that pd.read_json() can parse. I assume that your file contains data in a form like this:
["[{\"column1\":2,\"column2\":\"value2\",\"column3\":4}, {\"column1\":4,\"column2\":\"value2\",\"column3\":8}]"]
Now, I would read it, remove trailing \n if they exist, and correct the automatic escaping of the read() method. Then I remove [" and "] from the string with this code:
with open('input.json', 'r') as file:
    data = file.read().rstrip()
cleaned_string = data.replace('\\', '')[2:-2]
The result is now a valid json string that looks like this:
'[{"column1":2,"column2":"value2","column3":4}, {"column1":4,"column2":"value2","column3":8}]'
This string can now be easily read by pandas with this line:
pd.read_json(cleaned_string, orient='records')
Output:
   column1 column2  column3
0        2  value2        4
1        4  value2        8
The specifics (e.g. the indices to remove unused characters) could be different for your string as I do not know your input. However, I think this approach allows you to read your data.
PyQT - setting the text color for a QTabWidget Is there any way to set the text color of a certain tab that's part of a QTabWidget? QTabBar seems to have a method to set the tab text color, but I do not see a similar method for QTabWidget.
The tab text color can be set via the tab-widget's tabBar method:tabwidget.tabBar().setTabTextColor(index, color)
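For illustration, a minimal self-contained sketch of this (assuming PyQt4; the tab labels, index and color are just placeholders):
from PyQt4.QtGui import QApplication, QTabWidget, QWidget, QColor

app = QApplication([])
tabs = QTabWidget()
tabs.addTab(QWidget(), 'First')
tabs.addTab(QWidget(), 'Second')
# only the second tab's label is recolored; the others keep the default
tabs.tabBar().setTabTextColor(1, QColor('red'))
tabs.show()
app.exec_()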
Django related key of the same model I'm working on a feature for an app much like a Twitter Retweet. In the model for Item, I want to add a related field for reposted_from that will reference another Item. I don't think I can use ForeignKey for this, since it's the same Model, but what do I use instead?
It is common to add a foreign key to self as such:class Item(models.Model): parent = models.ForeignKey('self')You may specify a related name as such:class Item(models.Model): parent = models.ForeignKey('self', related_name='children')Because an Item may not have a parent, don't forget null=True and blank=True as such:class Item(models.Model): parent = models.ForeignKey('self', related_name='children', null=True, blank=True)Then you will be able to query children as such:item.childrenYou might as well use django-mptt and benefit of some optimization and extra tree features:from mptt.models import MPTTModel, TreeForeignKeyclass Item(MPTTModel): parent = TreeForeignKey('self', null=True, blank=True, related_name='children')
Class and Array Problems I am absolutely useless with python and I'm struggling to do what seems to be simple things. I need to read a text file containing a network routing table, i.e. the distance between each node on the network (below):
0,2,4,1,6,0,0
2,0,0,0,5,0,0
4,0,0,0,0,5,0
1,0,0,0,1,1,0
6,5,0,1,0,5,5
0,0,5,1,5,0,0
0,0,0,0,5,0,0
I then need to assign it to a two dimensional array, which I have done with the code I have written below:
Network = []
NodeTable = []
def readNetwork():
    myFile = open('network.txt','r')
    for line in myFile.readlines():
        line.strip(' \n' '\r')
        line = line.split(',')
        line = [int(num) for num in line]
        Network.append(line)
Once that has been done I then need to iterate through the Network array and add each horizontal line to another array which will hold information about the nodes, but this is as far as I have been able to get with that:
class Node(object):
    index = # Needs to start from A and increase with each node
    previousNode = invalid_node
    distFromSource = infinity
    visited = False
NodeTable.append(Node())
So that array will be initialised as:
A invalid_node infinity False
B invalid_node infinity False
C invalid_node infinity False
...etc
Could anyone give me a hand with creating each node in the NodeTable array?
Redundant line
Strings in Python are immutable, thus with the following line:
line.strip(' \n' '\r')
you are only getting a copy of the line string, stripped of some characters, but you do not assign it to anything. Change it into:
line = line.strip(' \n' '\r')
As DSM pointed out in the comments, it will not change much, as int will just ignore redundant whitespace.
Mapping strings to ints
You are also mapping strings to ints like that:
line = [int(num) for num in line]
which could be replaced by the clearer:
line = map(int, line)
and should also give you a slight performance gain. To shorten your code, you can also replace the following lines:
line.strip(' \n' '\r')
line = line.split(',')
line = [int(num) for num in line]
Network.append(line)
with the following:
Network.append(map(int, line.split(',')))
How to increase Node's index attribute with each instance
This could be done like that:
>>> class Node(object):
        baseindex = '@'  # sign before "A"
        def __init__(self):
            cls = self.__class__
            cls.baseindex = chr(ord(cls.baseindex) + 1)
            self.index = self.baseindex
            self.previousNode = 'invalid_node'
            self.distFromSource = 'infinity'
            self.visited = False
>>> a = Node()
>>> a.index
'A'
>>> b = Node()
>>> b.index
'B'
>>> a.index
'A'
As you can see, baseindex is attached to the class, and index is attached to the class's instance. I suggest you attach every instance-specific variable to the instance, as shown in the example above.
Adding a Node into the list as a list
One of the easiest ways to insert it as a list into another list is to add a method returning it as a list (see the as_list() method):
>>> class Node(object):
        baseindex = '@'  # sign before "A"
        def __init__(self):
            cls = self.__class__
            cls.baseindex = chr(ord(cls.baseindex) + 1)
            self.index = self.baseindex
            self.previousNode = 'invalid_node'
            self.distFromSource = 'infinity'
            self.visited = False
        def as_list(self):
            return [self.index, self.previousNode, self.distFromSource, self.visited]
>>> a = Node()
>>> a.index
'A'
>>> a.as_list()
['A', 'invalid_node', 'infinity', False]
so you should be able to add nodes like this:
NodeTable.append(Node().as_list())
But remember - after doing the above, you will not get a list of Node instances, you will get a list of lists.
Repeating a for in line loop python How would I repeat this (excluding the opening of the file and the setting of the variables)?this is my code in python3 file = ('file.csv','r') count = 0 #counts number of times i was equal to 1 i = 0 #column number for line in file: line = line.split(",") if line[i] == 1: count = count + 1 i = i+1
If I understand the question, try this and adjust for however you want to format. Replace NUM_COLUMNS with the number of times you want it repeatingfile = open('file.csv','r')data = file.readlines()for i in range(NUM_COLUMNS): count = 0 for line in data: line = line.split(",") if line[i] == ("1"): count = count + 1 print count
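If the goal is a count of '1' entries for every column at once, an alternative sketch using the csv module (assuming the same file.csv) avoids re-splitting each line per column:
import csv

with open('file.csv') as f:
    columns = zip(*csv.reader(f))   # transpose the rows into columns
for i, col in enumerate(columns):
    print i, col.count('1')         # how many times column i was equal to '1'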
Python syntax for an unless statement My request is simple but I do not know how to proceed: I would like to translate an unless statement into Python, as follows:
taken_asks -= 1 unless taken_asks == 0
This is just one line of code which is part of a very big function. Any idea? Thank you in advance!
taken_asks -= (1 if taken_asks != 0 else 0)
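If readability matters more than brevity, a plain if statement is equivalent (a sketch, assuming taken_asks is a non-negative integer):
if taken_asks != 0:
    taken_asks -= 1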
How to make data persistent when setData method is used The code below creates a single QComboBox. The combo's QStandardItems are set with data_obj using setData method. Changing combo's current index triggers run method which iterates combo' and prints the data_obj which turns to Python dictionary. How to make the data_obj persistent?app = QApplication(list())class DataObj(dict): def __init__(self, **kwargs): super(DataObj, self).__init__(**kwargs)class Dialog(QDialog): def __init__(self, parent=None): super(Dialog, self).__init__(parent) self.setLayout(QVBoxLayout()) self.combo = QComboBox(self) for i in range(5): combo_item = QStandardItem('item_%s' % i) data_obj = DataObj(foo=i) print '..out: %s' % type(data_obj) combo_item.setData(data_obj, Qt.UserRole + 1) self.combo.model().appendRow(combo_item) self.combo.currentIndexChanged.connect(self.run) self.layout().addWidget(self.combo) def run(self): for i in range(self.combo.count()): item = self.combo.model().item(i, 0) data_obj = item.data(Qt.UserRole + 1) print ' ...in: %s' % type(data_obj)if __name__ == '__main__': gui = Dialog() gui.resize(400, 100) gui.show() qApp.exec_()
Below is the working solution to this problem:app = QApplication(list())class DataObj(dict): def __init__(self, **kwargs): super(DataObj, self).__init__(**kwargs)class Object(object): def __init__(self, data_obj): super(Object, self).__init__() self.data_obj = data_objclass Dialog(QDialog): def __init__(self, parent=None): super(Dialog, self).__init__(parent) self.setLayout(QVBoxLayout()) self.combo = QComboBox(self) for i in range(5): combo_item = QStandardItem('item_%s' % i) obj = Object(data_obj=DataObj(foo=i)) print '..out: %s' % type(obj.data_obj) combo_item.setData(obj, Qt.UserRole + 1) self.combo.model().appendRow(combo_item) self.combo.currentIndexChanged.connect(self.run) self.layout().addWidget(self.combo) def run(self): for i in range(self.combo.count()): item = self.combo.model().item(i, 0) obj = item.data(Qt.UserRole + 1) print ' ...in: %s' % type(obj.data_obj)if __name__ == '__main__': gui = Dialog() gui.resize(400, 100) gui.show() qApp.exec_()
How to properly sample truncated distributions? I am trying to learn how to sample truncated distributions. To begin with I decided to try a simple example I found here: example. I didn't really understand the division by the CDF, therefore I decided to tweak the algorithm a bit. Being sampled is an exponential distribution for values x>0. Here is an example python code:
# Sample exponential distribution for the case x>0
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

def pdf(x):
    return x*np.exp(-x)

xvec=np.zeros(1000000)
x=1.
for i in range(1000000):
    a=x+np.random.normal()
    xs=x
    if a > 0. :
        xs=a
    A=pdf(xs)/pdf(x)
    if np.random.uniform()<A :
        x=xs
    xvec[i]=x
x=np.linspace(0,15,1000)
plt.plot(x,pdf(x))
plt.hist([x for x in xvec if x != 0],bins=150,normed=True)
plt.show()
And the output is:
The code above seems to work fine only when using the condition if a > 0. :, i.e. positive x; choosing another condition (e.g. if a > 0.5 :) produces wrong results. Since my final goal was to sample a 2D Gaussian pdf on a truncated interval, I tried extending the simple example using the exponential distribution (see the code below). Unfortunately, since the simple case didn't work, I assume that the code given below would yield wrong results. I assume that all this can be done using the advanced tools of python. However, since my primary idea was to understand the principle behind it, I would greatly appreciate your help to understand my mistake. Thank you for your help. EDIT:
# code updated according to the answer of CrazyIvan
from scipy.stats import multivariate_normal
RANGE=100000
a=2.06072E-02
b=1.10011E+00
a_range=[0.001,0.5]
b_range=[0.01, 2.5]
cov=[[3.1313994E-05, 1.8013737E-03],[ 1.8013737E-03, 1.0421529E-01]]
x=a
y=b
j=0
for i in range(RANGE):
    a_t,b_t=np.random.multivariate_normal([a,b],cov)
    # accept if within bounds - all that is needed to truncate
    if a_range[0]<a_t and a_t<a_range[1] and b_range[0]<b_t and b_t<b_range[1]:
        print(dx,dy)
EDIT: I changed the code by norming the analytic pdf according to this scheme, and according to the answers given by @Crazy Ivan and @Leandro Caniglia, for the case where the bottom of the pdf is removed. That is dividing by (1-CDF(0.5)) since my accept condition is x>0.5. This seems again to show some discrepancies. Again the mystery prevails ..
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

def pdf(x):
    return x*np.exp(-x)
# included the corresponding cdf
def cdf(x):
    return 1. -np.exp(-x)-x*np.exp(-x)

xvec=np.zeros(1000000)
x=1.
for i in range(1000000):
    a=x+np.random.normal()
    xs=x
    if a > 0.5 :
        xs=a
    A=pdf(xs)/pdf(x)
    if np.random.uniform()<A :
        x=xs
    xvec[i]=x
x=np.linspace(0,15,1000)
# new part: norm the analytic pdf to fix the area
plt.plot(x,pdf(x)/(1.-cdf(0.5)))
plt.hist([x for x in xvec if x != 0],bins=200,normed=True)
plt.savefig("test_exp.png")
plt.show()
It seems that this can be cured by choosing a larger shift size, shift=15:
a=x+np.random.normal()*shift
which is in general an issue of the Metropolis-Hastings. See the graph below. I also checked shift=150. Bottom line is that changing the shift size definitely improves the convergence. The mystery is why, since the Gaussian is unbounded.
You say you want to learn the basic idea of sampling a truncated distribution, but your source is a blog post about the Metropolis–Hastings algorithm? Do you actually need this "method for obtaining a sequence of random samples from a probability distribution for which direct sampling is difficult"? Taking this as your starting point is like learning English by reading Shakespeare.
Truncated normal
For truncated normal, basic rejection sampling is all you need: generate samples for the original distribution, reject those outside of bounds. As Leandro Caniglia noted, you should not expect the truncated distribution to have the same PDF except on a shorter interval — this is plain impossible because the area under the graph of a PDF is always 1. If you cut off stuff from the sides, there has to be more in the middle; the PDF gets rescaled. It's quite inefficient to gather samples one by one when you need 100000. I would grab 100000 normal samples at once, accept only those that fit, then repeat until I have enough. Example of sampling truncated normal between amin and amax:
import numpy as np

n_samples = 100000
amin, amax = -1, 2
samples = np.zeros((0,))    # empty for now
while samples.shape[0] < n_samples:
    s = np.random.normal(0, 1, size=(n_samples,))
    accepted = s[(s >= amin) & (s <= amax)]
    samples = np.concatenate((samples, accepted), axis=0)
samples = samples[:n_samples]   # we probably got more than needed, so discard extra ones
And here is the comparison with the PDF curve, rescaled by division by cdf(amax) - cdf(amin) as explained above.
from scipy.stats import norm

_ = plt.hist(samples, bins=50, density=True)
t = np.linspace(-2, 3, 500)
plt.plot(t, norm.pdf(t)/(norm.cdf(amax) - norm.cdf(amin)), 'r')
plt.show()
Truncated multivariate normal
Now we want to keep the first coordinate between amin and amax, and the second between bmin and bmax. Same story, except there will be a 2-column array and the comparison with bounds is done in a relatively sneaky way:
(np.min(s - [amin, bmin], axis=1) >= 0) & (np.max(s - [amax, bmax], axis=1) <= 0)
This means: subtract amin, bmin from each row and keep only the rows where both results are nonnegative (meaning we had a >= amin and b >= bmin). Also do a similar thing with amax, bmax. Accept only the rows that meet both criteria.
n_samples = 10
amin, amax = -1, 2
bmin, bmax = 0.2, 2.4
mean = [0.3, 0.5]
cov = [[2, 1.1], [1.1, 2]]
samples = np.zeros((0, 2))   # 2 columns now
while samples.shape[0] < n_samples:
    s = np.random.multivariate_normal(mean, cov, size=(n_samples,))
    accepted = s[(np.min(s - [amin, bmin], axis=1) >= 0) & (np.max(s - [amax, bmax], axis=1) <= 0)]
    samples = np.concatenate((samples, accepted), axis=0)
samples = samples[:n_samples, :]
Not going to plot, but here are some values: naturally, within bounds.
array([[ 0.43150033,  1.55775629],
       [ 0.62339265,  1.63506963],
       [-0.6723598 ,  1.58053835],
       [-0.53347361,  0.53513105],
       [ 1.70524439,  2.08226558],
       [ 0.37474842,  0.2512812 ],
       [-0.40986396,  0.58783193],
       [ 0.65967087,  0.59755193],
       [ 0.33383214,  2.37651975],
       [ 1.7513789 ,  1.24469918]])
Unique ID in html for generating Buttons sorry if the title is misleading.I'm having the following problem. I am creating multiple rows in HTML using Genshi. For each row I have a button at the end of the row for delete purposes. The code looks like this:<form action="/deleteAusleihe" method="post"> <table> <tr> <th>ID</th> <th>Person</th> <th>Buch</th> <th></th> </tr><tr py:for="v in verleihen"> <input type = "hidden" value="v.id" name="toDelete"/> <td py:content="v.id">Vorname der Person</td> <td py:content="v.kundeID">Name der Person</td> <td py:content="v.buchID">Straße der Person</td> <td> <input type="submit" name="submit" value="Löschen"/> </td> <br/> </tr></table></form>The input type ="hidden" should store the value of each id so I am able to identify the row later on. When I try to delete now, and lets assume I have 2 rows filled, I get 2 id's as a paramater, which is logical to me, but I don't know how to solve it.The deleteAusleihe function looks like this:@expose()def deleteAusleihe(self,toDelete,submit): Verleih1 = DBSession.query(Verleih).filter_by(id=toDelete) for v in Verleih1: DBSession.delete(v) DBSession.flush() transaction.commit() redirect("/Verleih")Thanks in advance for your help!
The issue is that all the hidden inputs inside the <form> element get submitted at once.There are various ways you could solve this. Probably the easiest would be to move the form tag inside the loop, so that there are multiple forms and each one only wraps a single input and button.
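A sketch of that restructuring, reusing the question's own template (note that the hidden input's value likely needs Genshi's interpolation syntax, value="${v.id}", so the actual id is emitted rather than the literal string v.id):
<table>
  <tr py:for="v in verleihen">
    <td py:content="v.id">ID</td>
    <td py:content="v.kundeID">Person</td>
    <td py:content="v.buchID">Buch</td>
    <td>
      <form action="/deleteAusleihe" method="post">
        <input type="hidden" value="${v.id}" name="toDelete"/>
        <input type="submit" name="submit" value="Löschen"/>
      </form>
    </td>
  </tr>
</table>
This way each submit button posts only the single toDelete value from its own row's form.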
Setuptools pip failed with error code 1 when installing Hue browser for Apache Hadoop I'm trying to install Hue browser for Apache Hadoop on my mac. So I retrieve the git folder :git clone https://github.com/cloudera/hue.gitI followed this tutorial here But when doing make apps I end up with the following error :python2.7 /Users/leo/Downloads/hue-3.8.1/tools/virtual-bootstrap/virtual-bootstrap.py \ -qq --no-site-packages /Users/leo/Downloads/hue-3.8.1/build/envTraceback (most recent call last):File "/Users/leo/Downloads/hue-3.8.1/tools/virtual-bootstrap/virtual-bootstrap.py", line 2355, in <module>main()File "/Users/leo/Downloads/hue-3.8.1/tools/virtual-bootstrap/virtual-bootstrap.py", line 827, in mainsymlink=options.symlink)File "/Users/leo/Downloads/hue-3.8.1/tools/virtual-bootstrap/virtual-bootstrap.py", line 995, in create_environmentinstall_wheel(to_install, py_executable, search_dirs)File "/Users/leo/Downloads/hue-3.8.1/tools/virtual-bootstrap/virtual-bootstrap.py", line 963, in install_wheel'PIP_NO_INDEX': '1'File "/Users/leo/Downloads/hue-3.8.1/tools/virtual-bootstrap/virtual-bootstrap.py", line 905, in call_subprocess% (cmd_desc, proc.returncode))OSError: Command /Users/leo/Downloads...ld/env/bin/python2.7 -c "import sys, pip; sys...d\"] + sys.argv[1:]))" setuptools pip failed with error code 1I don't understand what the problem is. Thanks for any help on this.
try sudo make appsit works for me on Sierra.
Create Panorama from Non-Sequential Video Frames There is a similar question (not that detailed and with no exact solution). I want to create a single panorama image from video frames, and for that I first need to get a minimal set of non-sequential video frames. A demo video file is uploaded here.
What I Need
A mechanism that can produce not only non-sequential video frames but also ones that can be used to create a panorama image. A sample is given below. As we can see, to create a panorama image all the input samples must contain minimum overlap regions with each other, otherwise it can not be done.
So, if I have the following video frame order
A, A, A, B, B, B, B, C, C, A, A, C, C, C, B, B, B ...
to create a panorama image, I need to get something as follows - reduced sequential frames (or adjacent frames) but with minimum overlapping.
  [overlap]  [overlap]  [overlap]  [overlap]  [overlap]
A, A,B, B,C, C,A, A,C, C,B, ...
What I've Tried and Where I'm Stuck
A demo video clip is given above. To get non-sequential video frames, I primarily rely on the ffmpeg software.
Trial 1 Ref.
ffmpeg -i check.mp4 -vf mpdecimate,setpts=N/FRAME_RATE/TB -map 0:v out.mp4
After that, on the out.mp4, I sliced the video frames using opencv:
import cv2, os
from pathlib import Path

vframe_dir = Path("vid_frames/")
vframe_dir.mkdir(parents=True, exist_ok=True)
vidcap = cv2.VideoCapture('out.mp4')
success, image = vidcap.read()
count = 0
while success:
    cv2.imwrite(f"{vframe_dir}/frame%d.jpg" % count, image)
    success, image = vidcap.read()
    count += 1
Next, I rotated these saved images horizontally (as my video is a vertical view).
vframe_dir = Path("out/")
vframe_dir.mkdir(parents=True, exist_ok=True)
vframe_dir_rot = Path("vframe_dir_rot/")
vframe_dir_rot.mkdir(parents=True, exist_ok=True)
for i, each_img in tqdm(enumerate(os.listdir(vframe_dir))):
    image = cv2.imread(f"{vframe_dir}/{each_img}")[:, :, ::-1] # Read (with BGRtoRGB)
    image = cv2.rotate(image,cv2.cv2.ROTATE_180)
    image = cv2.rotate(image,cv2.ROTATE_90_CLOCKWISE)
    cv2.imwrite(f"{vframe_dir_rot}/{each_img}", image[:, :, ::-1]) # Save (with RGBtoBGR)
The output is OK for this method (with ffmpeg) but inappropriate for creating the panorama image, because it didn't give overlapping frames sequentially in the results. Thus a panorama can't be generated.
Trial 2 - Ref
ffmpeg -i check.mp4 -vf decimate=cycle=2,setpts=N/FRAME_RATE/TB -map 0:v out.mp4
didn't work at all.
Trial 3
ffmpeg -i check.mp4 -ss 0 -qscale 0 -f image2 -r 1 out/images%5d.png
No luck either. However, I've found this last ffmpeg command was the closest by far, but it wasn't enough. Compared to the others, it gave me a small number of non-duplicate frames (good), but the bad thing is that it still included frames I do not need, so I kinda manually picked some desired frames, and then the opencv stitching algorithm works. So, after picking some frames and rotating (as mentioned before):
stitcher = cv2.Stitcher.create()
status, pano = stitcher.stitch(images) # images: manually picked video frames -_-
Update
After some trials, I am kinda adopting the non-programming solution. But I would love to see an efficient programmatic approach.
On the given demo video, I used Adobe products (Premiere Pro and Photoshop) to do this task, video instruction. But the issue was that I kind of took all the video frames at first (without dropping any frames, which adds computational cost further on) via Premiere and used Photoshop to stitch them (according to the youtube video instruction).
It was too heavy for these editor tools and didn't seem like a better way, but the output was better than anything until now, though I used only a few (400+) video frames out of the 1200+.
Here are some big challenges. The original video clips have some conditions, and they're serious. Unlike the given demo video clip:
It's not steady, i.e. there is camera shaking
Lighting conditions cause a different visual look at the same spot
Camera flickering or banding
This scenario is not included in the given demo video, and it brings additional, heavy challenges for creating panorama images from such videos. Even with the non-programming way (using Adobe tools) I couldn't make it look any good.
However, for now, all I'm interested in is getting a panorama image from the given demo video, which is free of the above conditions. But I would love to hear any comment or suggestion on that.
My approach to decimating the video is to pretty much do what a stitching program would do to try and stitch two frames together. I look for matching feature points and I only save frames once the number of matched points dip below what I think is an acceptable level.To stitch, I just used OpenCV's built-in stitcher. If you want to avoid OpenCV's solution, I can redo the code to go without it (though I won't be able to replicate all of the nice cleaning steps that opencv does). The decimate program is honestly already most of the way there towards doing a generic stitch.I got the video from here: https://www.videezy.com/nature/48905-rain-forest-pan-shotAnd this is the panorama (decimated to 7 frames at cutoff = 50)This is a pretty ideal case though, so this strategy might fail for a more difficult video like the one you described. If you can post that video then we can test out this solution on the actual use case and modify it if need be.I like this program. And these panning shots are cool. Here's another one from this video: https://www.videezy.com/abstract/41671-pan-of-bryce-canyon-in-utah-4k(decimated to 4 frames at cutoff = 50)https://www.videezy.com/nature/11664-panning-shot-of-red-peaks-and-green-valleys-in-4k(decimated to 4 frames at cutoff = 150)Decimateimport cv2import numpy as npimport osimport shutil# rescale the imagesdef rescale(img): scale = 0.5; h,w = img.shape[:2]; h = int(h*scale); w = int(w*scale); return cv2.resize(img, (w,h));# delete and create directoryfolder = "frames/";if os.path.isdir(folder): shutil.rmtree(folder);os.mkdir(folder);# open vidcapcap = cv2.VideoCapture("PNG_7501.mp4"); # your video herecounter = 0;# make an orb feature detector and a brute force matcherorb = cv2.ORB_create();bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=False);# store the first frame_, last = cap.read();last = rescale(last);cv2.imwrite(folder + str(counter).zfill(5) + ".png", last);# get the first frame's stuffkp1, des1 = orb.detectAndCompute(last, None);# cutoff, the minimum number of keypointscutoff = 50; # Note: this should be tailored to your video, this is high here since a lot of this video looks like# count number of framesprev = None;while True: # get frame ret, frame = cap.read(); if not ret: break; # resize frame = rescale(frame); # count keypoints kp2, des2 = orb.detectAndCompute(frame, None); # match matches = bf.knnMatch(des1, des2, k=2); # lowe's ratio good = [] for m,n in matches: if m.distance < 0.5*n.distance: good.append(m); # check against cutoff print(len(good)); if len(good) < cutoff: # swap and save counter += 1; last = frame; kp1 = kp2; des1 = des2; cv2.imwrite(folder + str(counter).zfill(5) + ".png", last); print("New Frame: " + str(counter)); # show cv2.imshow("Frame", frame); cv2.waitKey(1); prev = frame;# also save last framecounter += 1;cv2.imwrite(folder + str(counter).zfill(5) + ".png", prev);# check number of saved framesprint("Counter: " + str(counter));Stitcherimport cv2import numpy as npimport os# target folderfolder = "frames/";# load imagesfilenames = os.listdir(folder);images = [];for file in filenames: # get image img = cv2.imread(folder + file); # save images.append(img);# use built in stitcherstitcher = cv2.createStitcher();(status, stitched) = stitcher.stitch(images);cv2.imshow("Stitched", stitched);cv2.waitKey(0);
mutagen: how to detect and embed album art in mp3, flac and mp4 I'd like to be able to detect whether an audio file has embedded album art and, if not, add album art to that file. I'm using mutagen.
1) Detecting album art. Is there a simpler method than this pseudo code:
from mutagen import File
audio = File('music.ext')
test each of audio.pictures, audio['covr'] and audio['APIC:']
if doesn't raise an exception and isn't None, we found album art
2) I found this for embedding album art into an mp3 file: How do you embed album art into an MP3 using Python? How do I embed album art into other formats?
EDIT: embed mp4
audio = MP4(filename)
data = open(albumart, 'rb').read()
covr = []
if albumart.endswith('png'):
    covr.append(MP4Cover(data, MP4Cover.FORMAT_PNG))
else:
    covr.append(MP4Cover(data, MP4Cover.FORMAT_JPEG))
audio.tags['covr'] = covr
audio.save()
Embed flac:
from mutagen import File
from mutagen.flac import Picture, FLAC

def add_flac_cover(filename, albumart):
    audio = File(filename)
    image = Picture()
    image.type = 3
    if albumart.endswith('png'):
        image.mime = 'image/png'   # set on the Picture; a bare local `mime` would have no effect
    else:
        image.mime = 'image/jpeg'
    image.desc = 'front cover'
    with open(albumart, 'rb') as f: # better than open(albumart, 'rb').read() ?
        image.data = f.read()
    audio.add_picture(image)
    audio.save()
For completeness, detect picture:
def pict_test(audio):
    try:
        x = audio.pictures
        if x:
            return True
    except Exception:
        pass
    if 'covr' in audio or 'APIC:' in audio:
        return True
    return False
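Putting the two functions together, a usage sketch (with hypothetical filenames):
audio = File('song.flac')
if not pict_test(audio):
    add_flac_cover('song.flac', 'cover.jpg')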
How to make a list of n numbers in Python and randomly select any number? I have taken a count of something and it came out to N. Now I would like to have a list containing the numbers 1 to N. Example: N = 5, then count_list = [1, 2, 3, 4, 5]. Also, once I have created the list, I would like to randomly select a number from that list and use that number. After that I would like to select another number from the remaining numbers of the list (N-1) and then use that also. This goes on until the list is empty.
You can create the enumeration of the elements by something like this:mylist = list(xrange(10))Then you can use the random.choice function to select your items:import random...random.choice(mylist)As Asim Ihsan correctly stated, my answer did not address the full problem of the OP. To remove the values from the list, simply list.remove() can be called:import random...value = random.choice(mylist)mylist.remove(value)As takataka pointed out, the xrange builtin function was renamed to range in Python 3.
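For the second part of the question - drawing numbers without repeats until the list is empty - shuffling once and popping is a common sketch (N being the count from the question):
import random

N = 5
numbers = list(range(1, N + 1))   # [1, 2, 3, 4, 5]
random.shuffle(numbers)           # random order, in place
while numbers:
    value = numbers.pop()         # take one number; it can never be drawn again
    print(value)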
How to get id of the tweet posted in tweepy I want to check if a certain tweet is a reply to the tweet that I sent. Here is how I think I can do it:Step1: Post a tweet and store id of posted tweetStep2: Listen to my handle and collect all the tweets that have my handle in itStep3: Use tweet.in_reply_to_status_id to see if tweet is reply to the stored idIn this logic, I am not sure how to get the status id of the tweet that I am posting in step 1. Is there a way I can get it? If not, is there another way in which I can solve this problem?
What one could do is get the last n tweets from a user and then read the tweet.id of the relevant one. This can be done with:
latestTweets = api.user_timeline(screen_name='user', count=n, include_rts=False)
I, however, doubt that it is the most efficient way.
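For step 1 specifically, tweepy's update_status returns the posted status object, so its id can be stored directly - a sketch, assuming an authenticated API object named api:
status = api.update_status('my original tweet')
stored_id = status.id   # id of the tweet we just posted

# later, for each collected tweet that mentions our handle:
if tweet.in_reply_to_status_id == stored_id:
    print('this tweet is a reply to ours')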
Using keyboard actions to open files outside of Python Brand new to programming....I am trying to set up a program that can be controlled using keyboard shortcuts. I want the keyboard shortcuts to be linked to specific excel files. I have figured out how to open the excel files by themselves but now I want to attach the shortcuts to them. This is all I have:import osos.system('start C:\mold\MoldFlowMaster_exce.xlsx')
This makes a Tkinter window which you can type into. If the 'shortcut' (in this code, 'a') is pressed, you can cause something to happen, such as opening your Excel file.
import tkinter as tk
import os

def onKeyPress(event):
    if event.char == 'a':
        # raw string so the backslashes in the Windows path aren't treated as escapes
        os.system(r'start C:\mold\MoldFlowMaster_exce.xlsx')

root = tk.Tk()
root.geometry('300x200')
text = tk.Text(root, background='black', foreground='white', font=('Comic Sans MS', 12))
text.pack()
root.bind('<KeyPress>', onKeyPress)
root.mainloop()
Pycharm recognizes kwargs in print as wrong (python3) Whenever I do something like this:
print("Hello World", flush=True, file=sys.stderr)
PyCharm complains about:
End of statement expected
Statement expected, found Py:DEDENT
Statement expected, found Py:RPAR
Because of that "Syntax Error" all definitions after that are buggy and displayed as wrong as well. That confuses me a lot and features (e.g. autocompletion) are not working anymore because of this. Is there any setting I did not set correctly? Is this a bug?
My settings for the python interpreter were wrong.To change it, I went to Settings->Project->Project Interpreter.Everything is working fine now!Thanks for the comments which leads to the solution!
Python MYSQLDB Insert Syntax Error I'm trying to insert data into my database and I get a MYSQL syntax error using this code:import MySQLdbdb=MySQLdb.connect(host="localhost",user="root",passwd="",db="database")cursor = db.cursor()sql = "INSERT INTO table1('col1','col2') values ('val1','val2');"cursor.execute(sql)db.commit()
No quotes around the column names.INSERT INTO table1(col1, col2) VALUES ('val1', 'val2');You could use backticks around the column names, but not single quotes.
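As a side note, it is generally safer to let the driver quote the values rather than embedding them in the SQL string - MySQLdb uses %s placeholders for this (a sketch against the same hypothetical table):
sql = "INSERT INTO table1 (col1, col2) VALUES (%s, %s)"
cursor.execute(sql, ('val1', 'val2'))
db.commit()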
Backspace behavior in Python statement, what is correct behavior of printing a '\b' in code? Possible Duplicate: backspace character weirdness I have noticed that:
1. If I print only backspaces, i.e. a sequence of \b in Python, then it is completely blank.
2. If I print characters followed by backspaces, i.e. 'sssss\b\b\b\b\b', then it will print the multiple 's' characters.
But if I print something like 'ssss\b\b\b\baaaa', then the backspace, \b, will actually act like I am typing a backspace and delete the 's' characters. I am using Python 2.6 on Windows XP. Is this expected behavior? If I try to get the length of the backspace character, it is printed as 1. Here is my test code -
>>> print 'ssss\b\b\b\b\baaaaa'
aaaaa
>>> print 'ssssssss\b\b\b\b\baaaaa'
sssaaaaa
>>> print 'ssssssss\b\b\b\b\b'
ssssssss
>>> print 'ssssssss\b\b\b\b\baaaaa'
sssaaaaa
>>> print '\b\b\b\b\b'

>>>
My question is - What is the expected behavior when I print '\b' in Python, and why does the deletion work in only a particular case?
Expanded answer: The backspace doesn't delete anything, it moves the cursor to the left and it gets covered up by what you write afterwards. If you were writing to a device that can display overstriking (such as an old-fashioned "hard copy" terminal, which works like a typewriter), you'd actually see the new character on top of the old one. That's the real reason backspace has these semantics.On the unix command line, the shell can be set to interpret backspace as meaning "erase"-- unless it's set to only treat delete this way. But that's up to the program reading your input.
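A small sketch that makes the overwriting visible on a typical terminal (the pause is only there so you can watch it happen; the behavior ultimately depends on the terminal, not on Python):
import sys
import time

sys.stdout.write('ssss')
sys.stdout.flush()
time.sleep(1)                       # 'ssss' is visible for a second...
sys.stdout.write('\b\b\b\baaaa\n')  # ...then the cursor moves left and 'aaaa' covers it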
Inclusive range of list Python I'm trying to find the minimum elements within a section of a list. In the following example, a is the start and b is the end. I would like these indexes to partition the list inclusively, so that if the list is [1,2,3,9,4,10] the indexes 1 to 4 would include 2 and 4.def minimum (a,b,list): return min(list[a:b])In other words, is there a way to make list[a:b] inclusive?
By default, no. For this case, it is more conventional to do:
min(li[a:b + 1])
Also beware of naming your variable list, as it can have unintended consequences (silent namespace issues), as "list" also names the built-in list container type. If you simply want to write your own minimum method, you can encapsulate this behavior in your minimum method using the above method so you don't have to think about it ever again.
Side note: Standard list-slicing uses O(N) space and can get expensive for large lists if minimum is called over and over again. A cheaper O(1) space alternative would be:
import itertools

def minimum(a, b, li):
    return min(itertools.islice(li, a, b + 1))
EDIT: Don't use islice unless you are slicing starting at the beginning of the list or have tight memory constraints. It first iterates up to a, rather than directly indexing to a, which can take O(b) run-time. A better solution would be something like this, which runs with O(b-a) run-time and O(1) space:
def minimum(li, a=0, b=None):
    if b is None:
        b = len(li) - 1
    if b - a < 0:
        raise ValueError("minimum() arg is an empty sequence")
    current_min = li[a]
    for index in xrange(a, b + 1):
        current_min = min(current_min, li[index])
    return current_min
A fancier solution that you can use if the list is static (elements are not deleted and inserted during the series of queries) is to perform Range Minimum Queries using segment trees: http://www.geeksforgeeks.org/segment-tree-set-1-range-minimum-query/ Building the tree requires O(N) run-time and O(N) space, although all queries after that require only O(log(N)) run-time and O(1) additional space.
Install specific python library version based on another library version In setup.py I have install_requires=[ "python-consul", "library_a", "library_b" ]library_b is also imported by library_a but it is pinned in library_a. Is it possible to pin library_b to what it is pinned in library_a. I know I can just pin the same version but then everytime library_b pin is updated in library_a I need to repin in my service.
Probably you can just omit one of the libraries, but not sure without concrete examples.Anyhow, you can use requirement specifiers for versions that will define rules for versions and pin this rule for libraries you need. Example:install_requires = [ "python-consul", "library_a >=1.2, <2.0", "library_b >=1.2, <2.0",]It can be exact version, greater/less than some version (by major, minor or even build version). Full list of version specifiers (rules) and examples can be found here.
Git: How to save and return to a specific version I'm new here so I apologize if this isn't the place to ask this question. I'm writing a python script for my company that looks through files in certain commits and compares them. Well I'm not familiar with git and how commits work so maybe someone more knowledgeable than me can help. What I have so far is something along the lines of this:import git # Directory for my reporepo = git.Repo(<repo path>)# Get the current commit and save it for later usecommit = repo.commit()< Here is where I search through the current files to get my info ># Checkout the old commitrepo.git.checkout("HEAD~1", force=True)< Here is where I search through the old files to get my info ># Re-checkout the current commitrepo.git.checkout(commit.hexsha, force=True)< Now I want to be back where I started >This works well and it almost accomplishes what I want it to. However... the whole reason for this script is to check changes to files. In other words, the developer will work on and change many of the files so that they are different from the last commit. The problem is when this checks out the newer commit again, the changes are gone (obviously very frustrating to the developer). So the process overall is something is like this:--> new_commit on local machine (with changes from developer)--> old_commit checked out to see what changed--> new_commit checked back out (as if the developer never worked on it)Overall, my question is: is there any way that I can save this new commit with the changes so that when re-checked out it still has the changes? Thank you for any help!Edit: What I want to achieve is storing the uncommitted/unstaged changes to the version currently checked out, then checkout and older version, and finally bring back the uncommitted/unstaged changes.
For anyone looking for the answer to this question, what worked was LeGEC's solution in the comments. I used:
import subprocess

# here is where I got info on the current commit, with the developer's changes
subprocess.run(["git", "stash"])         # shelve the uncommitted changes
# here is where I got info on the older commit version
subprocess.run(["git", "stash", "pop"])  # back to the commit with the changes restored
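Since the question started from GitPython, the same two calls can also stay inside it - repo.git forwards arbitrary git subcommands, so a sketch would be:
repo.git.stash()         # shelve the uncommitted changes
# ... check out and inspect the older commit here ...
repo.git.stash('pop')    # restore the uncommitted changes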
try and except in while-loop python I am working on a live plot of incoming data. The data comes from a spectrum analyser and sometimes I get faulty data. Faulty in the meaning that there are on some positions letters instead of numbers. I save the incoming data as a list and then I convert it to a numpy.array withtrace = np.array(trace, np.float)So when there are letters instead of numbers in one of the entries a ValueError is raised and the program is canceled and doesn't plot anymore. So I was thinking about using try and except inside my while-loop. But here the problem arises: The plot doesn't look anymore like it should. My idea was, that if the data is faulty, the live plot just should not plot the data at all and just skip the drawing. The pieces with wrong data remain white or aren't updated. Thats how the plot normally should look like:I hope you get the idea... With every new piece of data the next sixteenths part of the circle is drawn. But with try and except it looks like this:and only the part on the negative y-axes is updated. Oh and I forgot to mention that the while-loop has no breaking condition. Maybe I have the wrong idea of how try and except work. But I hope you can help me.The code of the while-loopwhile True : try: trace = inst.query(':TRACe:DATA? TRACE1').partition(' ')[2][:-2].split(', ')# the first & last 2 entries are cut off, are random numbers f = open(timestamp,'a') # open file for value in trace : #write to file f.write(value) f.write('\n') zeroarray = np.zeros(200) #change the length of zeroarray to gain a bigger circle in the middle trace = np.array(trace, np.float) indexmax = np.argsort(trace) #gives us the index array of the sorted vector maximum maximum = np.sort(trace) #sorts the array with the values print 'The four maxima are' # prints the four biggest values for i in range(-1,-5,-1): if indexmax[i] == 0 : frequency = start elif indexmax[i] == 600 : frequency = stop else : frequency = ( indexmax[i] + 1 ) * (start -stop)/601 print maximum[i], 'dB at', frequency ,' Hz ' print '\n' trace = np.insert(trace,0,zeroarray) a = np.linspace(i*np.pi/8+np.pi/16-np.pi/8, i*np.pi/8+np.pi/16, 2)#Angle, circle is divided into 16 pieces b = np.linspace(start -scaleplot, stop,801) #points of the frequency + 200 more points to gain the inner circle A, B = np.meshgrid(a, trace) #actual plotting ax = plt.subplot(111, polar=True) ctf = ax.contourf(a, b, B, cmap=cm.jet) xCooPoint = i*np.pi/8 + np.pi/16 #shows the user the position of the plot yCooPoint = stop ax.plot(xCooPoint, yCooPoint, 'or', markersize = 15) xCooWhitePoint = (i-1) * np.pi/8 + np.pi/16 #this erases the old red points yCooWhitePoint = stop ax.plot(xCooWhitePoint, yCooWhitePoint, 'ow', markersize = 15) plt.draw() except ValueError : print('Some data was wrong') i+=1And thanks for the quick help!
I would recommend putting in the try/except clause only what you expect to raise an exception. The code is clearer and you can be sure the exception is raised by the statement you expect to raise it. Something like:
try:
    trace = np.array(trace, np.float)
except ValueError:
    print('Some data was wrong')
    i += 1
    continue
# remaining code...
Some more comments:
Do you need to open the file at every iteration? Shouldn't you close it as well?
Do you need to create the subplot at every iteration?
You are using the i variable in range and at the end of the except. Shouldn't you use different variable names? Are you sure i only needs to be increased in case of an exception?
Numpy-style error tracebacks? In numpy, when you make a mistake, the error doesn't tell you about all the numpy internals, just the user-level error made. For example:import numpy as npA = np.ones([1,2])B = np.ones([2,3])A+Bspits backTraceback (most recent call last): File "/home/roderic/Desktop/scratchpad.py", line 5, in <module> A+BValueError: operands could not be broadcast together with shapes (1,2) (2,3) Notice how it doesn't tell you about all the internal bouncing around that numpy did in order to determine that you are multiplying incompatible matrices, nor where the ValueError was raised exactly. I want to do the same for my project, where the traceback should stop outside of the module internals (unless I am on debug mode). So, if the traceback is 10 steps long, and the first 4 are on user level, and the last 6 are internal processing from my library, I only want to feature the first 4.I know how to extract the stack, but I don't know how to modify it and re-inject it before raising the exception. I also assume this is considered a bad idea, and if so, I'd like to know what my other options are.My horrible temporary solution is looking like this: except AssertionError as error: # something went wrong, the input was not correct print( "Traceback (most recent call last):") for filepath, line_no, namespace, line in traceback.extract_stack(): if os.path.basename(filepath)=='MyModuleName.py': break print( ' File "{filepath}", line {line_no}, in {namespace}\n' ' {line}'.format(**locals())) exit()
The only reason that A+B doesn't show any internal stack frames is that numpy.ndarray.__add__() happens to be implemented in C, so there are no Python stack frames after the one containing the A+B to show. numpy is not doing anything special to clean up the stack trace.
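Short of moving the hot path to C, one way to get similarly short tracebacks is to validate arguments at the public API boundary and raise there, before any internal helper frames are on the stack - a sketch, with _add_impl standing in for hypothetical internals:
def add(a, b):
    # public entry point: fail here, so the traceback ends at user-visible code
    if a.shape != b.shape:
        raise ValueError('operands could not be broadcast together '
                         'with shapes %s %s' % (a.shape, b.shape))
    return _add_impl(a, b)   # internal frames only appear for valid input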
Dissecting a permutation algorithm in Python I am trying to get my head around how this permutation algorithm works:def perm(n, i): if i == len(n) - 1: print n else: for j in range(i, len(n)): n[i], n[j] = n[j], n[i] perm(n, i + 1) n[i], n[j] = n[j], n[i] # swap back, for the next loopperm([1, 2, 3], 0)Output:[1, 2, 3][1, 3, 2][2, 1, 3][2, 3, 1][3, 2, 1][3, 1, 2]QuestionHow is it that the original list is the first line printed?In this example, the length of n is 3. Initially, i is 0. The code should skip the if statement, and then first iteration mutates the list. How do we get [1, 2, 3] as the first line of output?
It does skip the if at the top level. It drops into the else and iterates j through the list. The first iteration has i == j == 0, so the swap does nothing, and you recur with ([1, 2, 3], 1).This process repeats for the that instance, having i == j == 1. That recurs with ([1, 2, 3], 2) That instance is the one that print [1, 2, 3] as the first line of output.Does that clear it up?If not, learn how to insert useful print statements to trace execution.Perhaps this makes it more clear.indent = ""def perm(n, i): global indent indent += " " print indent, "ENTER", n, i if i == len(n) - 1: print n else: for j in range(i, len(n)): print indent, "RECUR", i, j n[i], n[j] = n[j], n[i] perm(n, i + 1) n[i], n[j] = n[j], n[i] # swap back, for the next loop indent = indent[2:]perm([1, 2, 3], 0)Output: ENTER [1, 2, 3] 0 RECUR 0 0 ENTER [1, 2, 3] 1 RECUR 1 1 ENTER [1, 2, 3] 2[1, 2, 3] RECUR 1 2 ENTER [1, 3, 2] 2[1, 3, 2] RECUR 0 1 ENTER [2, 1, 3] 1 RECUR 1 1 ENTER [2, 1, 3] 2[2, 1, 3] RECUR 1 2 ENTER [2, 3, 1] 2[2, 3, 1] RECUR 0 2 ENTER [3, 2, 1] 1 RECUR 1 1 ENTER [3, 2, 1] 2[3, 2, 1] RECUR 1 2 ENTER [3, 1, 2] 2[3, 1, 2]
Add element to a bibtexfile in Python I have created a script which scrapes many pdfs for abstract and keywords. I also have a collection of bibtex-files in which I want to place the texts I've extracted. What I'm looking for is a way of adding elements to the bibtex files. I have written a short parser: #!/usr/bin/python#-*- coding: utf-8import osfrom pybtex.database.input import bibtexdir_path = "nime_archive/nime/bibtex/"num_texts = 0class Bibfile: def __init__(self,bibs): self.bibs = bibs for a in self.bibs.entries.keys(): num_text += 1 print bibs.entries[a].fields['title'] #Need to implement a way of getting just the nime-identificator try: print bibs.entries[a].fields['url'] except: print "couldn't find URL for text: %s " % a print "creating new bibfile"bibfiles = []parser = bibtex.Parser()for infile in os.listdir(dir_path): if infile.endswith(".bib"): print infile bibfiles = Bibfile(parser.parse_file(dir_path+infile))My question is if there is possible to use Pybtex to add elements into the existing bibtex-files (or create a copy) so I can merge my extractions with what is already available. If this is not possible in Pybtex, what other bibtex parser can I use?
I've never used pybtex, but from a quick glance, you can add entries. Since self.bibs.entries appears to be a dict, you can come up with a unique key, and add more entries to it. Example:key = "some_unique_string"new_entry = Entry('article', fields={ 'language': u'english', 'title': u'Predicting the Diffusion Coefficient in Supercritical Fluids', 'journal': u'Ind. Eng. Chem. Res.', 'volume': u'36', 'year': u'1997', 'pages': u'888-895', }, persons={'author': [Person(u'Liu, Hongquin'), Person(u'Ruckenstein, Eli')]}, )self.bibs.entries[key] = new_entry(caveat: untested)If you wonder where I got this example form: have a look in the tests/ subdirectory of the source of pybtex. I got the above code example mainly from tests/database_test/data.py. Tests can be a good source of documentation if the actual documentation is lacking.
Xlib control keyboard events How does one simulate keyboard key presses in python (Xlib)I have been using Xlib-python for simulating mouse pointer events such as movements and clicks. But I haven't been able to find enough help for doing a similar thing for keyboard presses.Preferred platform : python on linux
I'm no expert on Xlib, but managed to piece together this code for the PyAutoGUI module. Here's the minimum viable example that can simulate a keyDown() and keyUp() for a keyboard key:
# You must run `pip3 install python3-xlib` to get the Xlib modules.
import os
from Xlib.display import Display
from Xlib import X
from Xlib.ext.xtest import fake_input
import Xlib.XK

_display = Display(os.environ['DISPLAY'])

def _keycode(keysym_name):
    # translate an X keysym name (e.g. 'BackSpace') into a keycode
    return _display.keysym_to_keycode(Xlib.XK.string_to_keysym(keysym_name))

# Create the keyboard mapping.
KEY_NAMES = ['\t', '\n', '\r', ' ', '!', '"', '#', '$', '%', '&', "'", '(', ')', '*', '+', ',', '-', '.', '/', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', ':', ';', '<', '=', '>', '?', '@', '[', '\\', ']', '^', '_', '`', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', '{', '|', '}', '~', 'accept', 'add', 'alt', 'altleft', 'altright', 'apps', 'backspace', 'browserback', 'browserfavorites', 'browserforward', 'browserhome', 'browserrefresh', 'browsersearch', 'browserstop', 'capslock', 'clear', 'convert', 'ctrl', 'ctrlleft', 'ctrlright', 'decimal', 'del', 'delete', 'divide', 'down', 'end', 'enter', 'esc', 'escape', 'execute', 'f1', 'f10', 'f11', 'f12', 'f13', 'f14', 'f15', 'f16', 'f17', 'f18', 'f19', 'f2', 'f20', 'f21', 'f22', 'f23', 'f24', 'f3', 'f4', 'f5', 'f6', 'f7', 'f8', 'f9', 'final', 'fn', 'hanguel', 'hangul', 'hanja', 'help', 'home', 'insert', 'junja', 'kana', 'kanji', 'launchapp1', 'launchapp2', 'launchmail', 'launchmediaselect', 'left', 'modechange', 'multiply', 'nexttrack', 'nonconvert', 'num0', 'num1', 'num2', 'num3', 'num4', 'num5', 'num6', 'num7', 'num8', 'num9', 'numlock', 'pagedown', 'pageup', 'pause', 'pgdn', 'pgup', 'playpause', 'prevtrack', 'print', 'printscreen', 'prntscrn', 'prtsc', 'prtscr', 'return', 'right', 'scrolllock', 'select', 'separator', 'shift', 'shiftleft', 'shiftright', 'sleep', 'space', 'stop', 'subtract', 'tab', 'up', 'volumedown', 'volumemute', 'volumeup', 'win', 'winleft', 'winright', 'yen', 'command', 'option', 'optionleft', 'optionright']
keyboardMapping = dict([(key, None) for key in KEY_NAMES])
keyboardMapping.update({
    'backspace': _keycode('BackSpace'), '\b': _keycode('BackSpace'),
    'tab': _keycode('Tab'),
    'enter': _keycode('Return'), 'return': _keycode('Return'),
    'shift': _keycode('Shift_L'), 'ctrl': _keycode('Control_L'), 'alt': _keycode('Alt_L'),
    'pause': _keycode('Pause'), 'capslock': _keycode('Caps_Lock'),
    'esc': _keycode('Escape'), 'escape': _keycode('Escape'),
    'pgup': _keycode('Page_Up'), 'pgdn': _keycode('Page_Down'),
    'pageup': _keycode('Page_Up'), 'pagedown': _keycode('Page_Down'),
    'end': _keycode('End'), 'home': _keycode('Home'),
    'left': _keycode('Left'), 'up': _keycode('Up'), 'right': _keycode('Right'), 'down': _keycode('Down'),
    'select': _keycode('Select'), 'print': _keycode('Print'), 'execute': _keycode('Execute'),
    'prtsc': _keycode('Print'), 'prtscr': _keycode('Print'), 'prntscrn': _keycode('Print'), 'printscreen': _keycode('Print'),
    'insert': _keycode('Insert'), 'del': _keycode('Delete'), 'delete': _keycode('Delete'), 'help': _keycode('Help'),
    'winleft': _keycode('Super_L'), 'winright': _keycode('Super_R'), 'apps': _keycode('Super_L'),
    'num0': _keycode('KP_0'), 'num1': _keycode('KP_1'), 'num2': _keycode('KP_2'), 'num3': _keycode('KP_3'), 'num4': _keycode('KP_4'), 'num5': _keycode('KP_5'), 'num6': _keycode('KP_6'), 'num7': _keycode('KP_7'), 'num8': _keycode('KP_8'), 'num9': _keycode('KP_9'),
    'multiply': _keycode('KP_Multiply'), 'add': _keycode('KP_Add'), 'separator': _keycode('KP_Separator'), 'subtract': _keycode('KP_Subtract'), 'decimal': _keycode('KP_Decimal'), 'divide': _keycode('KP_Divide'),
    'f1': _keycode('F1'), 'f2': _keycode('F2'), 'f3': _keycode('F3'), 'f4': _keycode('F4'), 'f5': _keycode('F5'), 'f6': _keycode('F6'), 'f7': _keycode('F7'), 'f8': _keycode('F8'), 'f9': _keycode('F9'), 'f10': _keycode('F10'), 'f11': _keycode('F11'), 'f12': _keycode('F12'), 'f13': _keycode('F13'), 'f14': _keycode('F14'), 'f15': _keycode('F15'), 'f16': _keycode('F16'), 'f17': _keycode('F17'), 'f18': _keycode('F18'), 'f19': _keycode('F19'), 'f20': _keycode('F20'), 'f21': _keycode('F21'), 'f22': _keycode('F22'), 'f23': _keycode('F23'), 'f24': _keycode('F24'),
    'numlock': _keycode('Num_Lock'), 'scrolllock': _keycode('Scroll_Lock'),
    'shiftleft': _keycode('Shift_L'), 'shiftright': _keycode('Shift_R'),
    'ctrlleft': _keycode('Control_L'), 'ctrlright': _keycode('Control_R'),
    'altleft': _keycode('Alt_L'), 'altright': _keycode('Alt_R'),
    # These are added because unlike a-zA-Z0-9, the single characters do not have a
    # keysym that matches the character itself
    ' ': _keycode('space'), 'space': _keycode('space'),
    '\t': _keycode('Tab'),
    '\n': _keycode('Return'),  # for some reason this needs to be cr, not lf
    '\r': _keycode('Return'),
    '\e': _keycode('Escape'),  # note: '\e' is not a Python escape; it stays as the two characters backslash + 'e'
    '!': _keycode('exclam'), '#': _keycode('numbersign'), '%': _keycode('percent'), '$': _keycode('dollar'), '&': _keycode('ampersand'),
    '"': _keycode('quotedbl'), "'": _keycode('apostrophe'),
    '(': _keycode('parenleft'), ')': _keycode('parenright'),
    '*': _keycode('asterisk'), '=': _keycode('equal'), '+': _keycode('plus'),
    ',': _keycode('comma'), '-': _keycode('minus'), '.': _keycode('period'), '/': _keycode('slash'),
    ':': _keycode('colon'), ';': _keycode('semicolon'),
    '<': _keycode('less'), '>': _keycode('greater'), '?': _keycode('question'), '@': _keycode('at'),
    '[': _keycode('bracketleft'), ']': _keycode('bracketright'), '\\': _keycode('backslash'),
    '^': _keycode('asciicircum'), '_': _keycode('underscore'), '`': _keycode('grave'),
    '{': _keycode('braceleft'), '|': _keycode('bar'), '}': _keycode('braceright'), '~': _keycode('asciitilde'),
})
for c in """abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890""":
    keyboardMapping[c] = _keycode(c)

def isShiftCharacter(character):
    """Returns True if the key character is uppercase or shifted."""
    return character.isupper() or character in '~!@#$%^&*()_+{}|:"<>?'

def keyDown(key):
    """Performs a keyboard key press without the release. This will put that
    key in a held down state. The valid names are listed in KEY_NAMES.

    NOTE: For some reason, this does not seem to cause key repeats like would
    happen if a keyboard key was held down on a text field.
    """
    if type(key) == int:
        # raw keycodes are passed straight through (checked before the mapping
        # lookup, otherwise they would be rejected as unknown keys)
        fake_input(_display, X.KeyPress, key)
        _display.sync()
        return
    if key not in keyboardMapping or keyboardMapping[key] is None:
        return
    needsShift = isShiftCharacter(key)
    if needsShift:
        fake_input(_display, X.KeyPress, keyboardMapping['shift'])
    fake_input(_display, X.KeyPress, keyboardMapping[key])
    if needsShift:
        fake_input(_display, X.KeyRelease, keyboardMapping['shift'])
    _display.sync()

def keyUp(key):
    """Performs a keyboard key release (without the press down beforehand).
    The valid names are listed in KEY_NAMES. Also works with character
    keycodes as integers, but not keysyms.
    """
    if type(key) == int:
        keycode = key
    else:
        if key not in keyboardMapping or keyboardMapping[key] is None:
            return
        keycode = keyboardMapping[key]
    fake_input(_display, X.KeyRelease, keycode)
    _display.sync()
How to print a file to stdout? I've searched and I can only find questions about the other way around: writing stdin to a file. Is there a quick and easy way to dump the contents of a file to stdout?
Sure. Assuming you have a string with the file's name called fname, the following does the trick.
with open(fname, 'r') as fin:
    print(fin.read())
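If the file is large, reading it all into memory first may be wasteful. A minimal sketch of a streaming alternative using the standard library's shutil, reusing the fname variable from above:
import shutil
import sys

# Copy the file to stdout in fixed-size chunks instead of one big read().
with open(fname, 'r') as fin:
    shutil.copyfileobj(fin, sys.stdout)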
Why doesn't my Python test generator work? This is a sample script to test the use of yield... am I doing it wrong? It always returns '1'...
#!/usr/bin/python
def testGen():
    for a in [1,2,3,4,5,6,7,8,9,10]:
        yield a

w = 0
while w < 10:
    print testGen().next()
    w += 1
You're creating a new generator each time. You should only call testGen() once and then use the object returned. Try:
w = 0
g = testGen()
while w < 10:
    print g.next()
    w += 1
Then of course there's the normal, idiomatic generator usage:
for n in testGen():
    print n
Note that this will only call testGen() once at the start of the loop, not once per iteration.
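As a side note, the built-in next() function (available since Python 2.6, and the only spelling that works in Python 3) is generally preferred over calling the .next() method directly:
g = testGen()
print(next(g))  # 1
print(next(g))  # 2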
Packaging a Python library I have a few Munin plugins which report stats from an Autonomy database. They all use a small library which scrapes the XML status output for the relevant numbers. I'm trying to bundle the library and plugins into a Puppet-installable RPM. The actual RPM-building should be straightforward; once I have a distutils-produced distfile I can make it into an RPM based on a .spec file pinched from the Dag or EPEL repos [1]. It's the distutils bit I'm unsure of -- in fact I'm not even sure my library is correctly written for packaging. Here's how it works:
idol7stats.py:
import datetime
import os
import stat
import sys
import time
import urllib
import xml.sax

class IDOL7Stats:
    cache_dir = '/tmp'

    def __init__(self, host, port):
        self.host = host
        self.port = port
        # ...

    def collect(self):
        self.data = self.__parseXML(self.__getXML())

    def total_slots(self):
        return self.data['Service:Documents:TotalSlots']
Plugin code:
from idol7stats import IDOL7Stats

a = IDOL7Stats('db.example.com', 23113)
a.collect()
print a.total_slots()
I guess I want idol7stats.py to wind up in /usr/lib/python2.4/site-packages/idol7stats, or something else in Python's search path. What distutils magic do I need? This:
from distutils.core import setup

setup(name = 'idol7stats',
      author = 'Me',
      author_email = '[email protected]',
      version = '0.1',
      py_modules = ['idol7stats'])
almost works, except the code goes in /usr/lib/python2.4/site-packages/idol7stats.py, not a subdirectory. I expect this is down to my not understanding the difference between modules/packages/other containers in Python. So, what's the rub?
[1] Yeah, I could just plonk the library in /usr/lib/python2.4/site-packages using RPM but I want to know how to package Python code.
You need to create a package to do what you want. You'd need a directory named idol7stats containing a file called __init__.py and any other library modules to package. Also, this will affect your scripts' imports; if you put idol7stats.py in a package called idol7stats, then your scripts need to "import idol7stats.idol7stats". To avoid that, you could just rename idol7stats.py to idol7stats/__init__.py, or you could put this line into idol7stats/__init__.py to "massage" the imports into the way you expect them:
from idol7stats.idol7stats import *
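For concreteness, a minimal sketch of the setup.py this implies, assuming you go with the rename so the directory idol7stats/ holds an __init__.py (the key change from the question's version is packages instead of py_modules):
from distutils.core import setup

# 'packages' installs the idol7stats/ directory (with its __init__.py)
# into site-packages/idol7stats/, which is the layout the question wants.
setup(name='idol7stats',
      author='Me',
      author_email='[email protected]',
      version='0.1',
      packages=['idol7stats'])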
removing outline color of scatter plot in matplotlib python Suppose I have gridded data with dimensions (x,y) and values in z, so we can simply make a scatter plot for the third dimension by:
import numpy as np
import matplotlib.pyplot as plt

x = np.random.random(10)
y = np.random.random(10)
z = np.random.random(10)
plt.scatter(x, y, c = z, s=150, cmap = 'jet')
plt.show()
What I want now is to remove the outline color of each circular marker. And also, instead of circles, can we make the markers squares? I did not find any way to do that. Your help will be highly appreciated.
Pass the argument edgecolors='none' to plt.scatter and the patch boundary will not be drawn. Pass the argument marker='s' and the marker style will be square. The source code:
import numpy as np
import matplotlib.pyplot as plt

x = np.random.random(10)
y = np.random.random(10)
z = np.random.random(10)
plt.scatter(x, y, c = z, s=150, cmap = 'jet', edgecolors='none', marker='s')
plt.show()
Refer to matplotlib.pyplot.scatter for more information.
Error compiling python Whenever I try to execute this code:
name = input("What's your name?")
print("Hello World", name)
by running the command python myprogram.py on the command line, it gives me this error:
What's your name?John
Traceback (most recent call last):
  File "HelloWorld.py", line 1, in <module>
    name = input("What's your name?")
  File "<string>", line 1, in <module>
NameError: name 'John' is not defined
It asks me the name, but as soon as I type it and press enter it crashes. What does the error mean? Thanks.
In Python 2 you should use raw_input instead of input in this case. In Python 2, input() evaluates whatever you type as a Python expression, so typing John makes the interpreter look up a variable named John, hence the NameError. raw_input returns the typed text as a plain string.
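A minimal sketch of the fix for Python 2 (in Python 3, input() already returns a string, so the original code works there unchanged):
# Python 2
name = raw_input("What's your name?")  # returns the typed text as a string
print("Hello World " + name)           # concatenation avoids printing a tuple in Python 2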
Python: Add strings to a list full of integers Simple example: I got a list full of integers which looks like this:
mylist1 = [1, 2, 3, 4, 5]
print mylist1
[1, 2, 3, 4, 5]
Now I want to add a string to every integer in the list. It should look like this afterwards:
['1 Hi', '2 How', '3 Are', '4 You', '5 Doing']
By now I should have a list full of strings. How do I do that?
>>> mylist1 = [1, 2, 3, 4, 5]
>>> mylist2 = ['Hi', 'How', 'Are', 'You', 'Doing']
>>> map(lambda x,y:str(x)+" "+y, mylist1,mylist2)
['1 Hi', '2 How', '3 Are', '4 You', '5 Doing']
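A zip-based list comprehension does the same job and also behaves the same in Python 3, where map returns a lazy iterator that you would otherwise need to wrap in list():
mylist1 = [1, 2, 3, 4, 5]
mylist2 = ['Hi', 'How', 'Are', 'You', 'Doing']
result = [str(x) + " " + y for x, y in zip(mylist1, mylist2)]
# ['1 Hi', '2 How', '3 Are', '4 You', '5 Doing']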
Why is re.findall matching the string, but not returning the results correctly? I want to find a substring of the pattern ([A-Z][0-9]+)+ in another string. One way to do this would be:
import re
re.findall("([A-Z][0-9]+)+", "asdf A0B52X4 asdf")[0]
Curiously, this yields 'X4', not 'A0B52X4', which was the result I expected. Digging a bit into this, I also tried to just match the simple groups the string is composed of:
re.findall("[A-Z][0-9]+", "asdf A0B52X4 asdf")
Which yields the expected result: ['A0', 'B52', 'X4']
And even more interesting:
re.findall("([A-Z][0-9]+){3,}", "asdf A0B52X4 asdf")
Which yields ['X4'], but still seems to match the whole string I'm interested in, which is confirmed by trying re.search and using the result to obtain the substring manually:
m = re.search("([A-Z][0-9]+)+", "asdf A0B52X4 asdf")
m.string[m.start():m.end()]
This yields 'A0B52X4'. Now from what I know about regular expressions in python, parentheses not only just match the RE inside them, but also declare a "group" which lets you do all sorts of things with it. My theory would be that for some reason, re.findall only puts the last match of a group into the result string as opposed to the complete match. Why does re.findall behave like this?
It's because your capturing group only matches one instance of the pattern at a time; the + just makes the regex match as many of those instances as occur in a row. A repeated group only keeps its last repetition, which is why you get 'X4'. Wrap your regex in an outer group, and make the inner one non-capturing:
((?:[A-Z][0-9]+)+)
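Checking this against the string from the question:
import re

print(re.findall(r"((?:[A-Z][0-9]+)+)", "asdf A0B52X4 asdf"))
# ['A0B52X4'] -- the outer group now captures the entire run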
Python sqlite query based on logged username I am struggling to understand why I cannot get the expected result from my query. I am using flask with SQLite and can easily return the username to the webpage with the "userlogin = session['username']". What I am trying to do is query the database based on the username of the logged-in user, in order to only show information related to this specific user. mytable is configured with a username column used for the login.
@app.route('/dashboard')
@is_logged_in
def dashboard():
    from functions.sqlquery import sql_query
    userlogin = session['username']
    results = sql_query('''SELECT * FROM mytable WHERE username IS "userlogin"''')
    return render_template('dashboard.html', results=results)
sql_query.py
def sql_query(query):
    cur = conn.cursor()
    cur.execute(query)
    rows = cur.fetchall()
    return rows
The sql query isn't correct. You should use = instead of IS. I would recommend making the following changes:
1) use a parameterised query to avoid sql injection attacks. So pass the parameters to sql_query() as a tuple:
def sql_query(query, params):
    cur = conn.cursor()
    cur.execute(query, params)
    rows = cur.fetchall()
    return rows
2) change the call to sql_query:
results = sql_query("SELECT * FROM mytable WHERE username = ?", (userlogin,))
Use built-in setattr simultaneously with index slicing A class I am writing requires the use of variable-name attributes storing numpy arrays. I would like to assign values to slices of these arrays. I have been using setattr so that I can leave the attribute name to vary. My attempts to assign values to slices are these:
import numpy as np

class Dummy(object):
    def __init__(self, varname):
        setattr(self, varname, np.zeros(5))

d = Dummy('x')

### The following two lines are incorrect
setattr(d, 'x[0:3]', [8,8,8])
setattr(d, 'x'[0:3], [8,8,8])
Neither of the above uses of setattr produces the behavior I want, which is for d.x to be a 5-element numpy array with entries [8,8,8,0,0]. Is it possible to do this with setattr?
Think about how you would normally write this bit of code:
d.x[0:3] = [8, 8, 8]
# an index operation is really a function call on the given object
# eg. the following has the same effect as the above
d.x.__setitem__(slice(0, 3, None), [8, 8, 8])
Thus, to do the indexing operation you need to get the object referred to by the name x and then perform an indexing operation on it. eg.
getattr(d, 'x')[0:3] = [8, 8, 8]
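Putting it together with the Dummy class from the question:
import numpy as np

d = Dummy('x')
getattr(d, 'x')[0:3] = [8, 8, 8]
print(d.x)  # [8. 8. 8. 0. 0.] (exact formatting depends on the numpy version)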
Python function got 2 lists but only changes 1 why does Lista1 get changed but Lista2 doesn't? which methods change the list directly?
def altera(L1, L2):
    for elemento in L2:
        L1.append(elemento)
    L2 = L2 + [4]
    L1[1] = 10
    del L2[0]
    return L2[:]

Lista1 = [1, 2, 3]
Lista2 = [1, 2, 3]
Lista3 = altera(Lista1, Lista2)
print(Lista1)
print(Lista2)
print(Lista3)
L2 = L2 + [4]
builds a new list and rebinds the name L2 to it, so inside the function it is a different list than the one that was passed in; that's the easy explanation, at least. You can see this by printing id(L2) before and after the assignment. If you changed it to
L2.append(4)
then it would indeed change Lista2.
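A quick sketch of the id() check suggested above:
L2 = [1, 2, 3]
print(id(L2))  # address of the original list
L2.append(4)   # mutates the same object in place
print(id(L2))  # unchanged
L2 = L2 + [4]  # builds a brand-new list and rebinds the name
print(id(L2))  # different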
Stuck parallelisation with sklearn with large number of features (n_jobs=-1) When trying to run a simple GridSearchCV with n_jobs=-1, processing often gets stuck. For example,
>> parameters_SGD = {'clf__l1_ratio': np.linspace(0,1,30), 'clf__alpha': np.logspace(-5,-1,5), 'clf__penalty':['elasticnet'], 'clf__class_weight': [None, 'balanced'], 'clf__loss':['log','hinge']}
>> pipe_SGD = Pipeline([('scl', StandardScaler()), ('clf', linear_model.SGDClassifier())])
>> grid_search_SGD = GridSearchCV(estimator=pipe_SGD, param_grid=parameters_SGD, verbose=1, scoring=make_scorer(f1_score, average='weighted', pos_label=1), n_jobs = -1)
executing on some data (X_train, y_train):
>> grid_search_SGD.fit(X_train, y_train)
may result in frozen computations: CPU usage drops to 1-3% and nothing happens. When it happens: if the number of features of X is (relatively) large (>100), the CPU usage climbs up to 99% (which means all cores work) and then suddenly drops down to 1-3%. If I use only a small subset of features (randomly selected), then parallelisation works perfectly (99-100% CPU and I can see jobs done in parallel). Does anyone have any idea why this happens? What may cause parallel jobs to get stuck? (sklearn v 0.18, mac osx)
Reason
Parallelization in this case is based on copying all the data and sending a copy to each of the parallel processes (sklearn's parallelism is based on joblib). This means that using X cores needs at least X times the memory of a single-process run. So in your case your memory is probably exhausted and thrashing occurs.
What you can do
Stick with a smaller sample size / fewer features
You already observed that this works.
Tune sklearn's GridSearchCV params
As explained here, the parameter pre_dispatch can be very important:
Controls the number of jobs that get dispatched during parallel execution. Reducing this number can be useful to avoid an explosion of memory consumption when more jobs get dispatched than CPUs can process. This parameter can be:
None, in which case all the jobs are immediately created and spawned. Use this for lightweight and fast-running jobs, to avoid delays due to on-demand spawning of the jobs
An int, giving the exact number of total jobs that are spawned
A string, giving an expression as a function of n_jobs, as in '2*n_jobs'
I would recommend trying something like:
pre_dispatch='1*n_jobs'
(A sidenote: without any links available, I'm fairly positive that OS X is the OS with the most problems in regards to sklearn's parallelization implementation; maybe check the issues on sklearn's github.)
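A sketch of the suggested change applied to the grid search from the question (pre_dispatch is an existing GridSearchCV parameter; everything else mirrors the original setup):
grid_search_SGD = GridSearchCV(
    estimator=pipe_SGD,
    param_grid=parameters_SGD,
    verbose=1,
    scoring=make_scorer(f1_score, average='weighted', pos_label=1),
    n_jobs=-1,
    pre_dispatch='1*n_jobs',  # dispatch no more data copies than there are workers
)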
is there a way to make movement smoother in the Python Tk canvas? I am making a dot move around a screen, but it seems to pause (stop moving) for a bit when changing direction. Is there a better way to make the movement smoother, or just stop the delay in changing directions? Here is what I am using to move it:
def keypress(event):
    key = (event.keysym)
    if key == "w":
        canvas.move(player,0,-20)
    if key == "a":
        canvas.move(player,-20,0)
    if key == "s":
        canvas.move(player,0,20)
    if key == "d":
        canvas.move(player,20,0)

canvas.bind_all("<Key>", keypress)
Naming constants makes it easier to change them and experiment, especially when the same constant is used in multiple places in the code. In the code below, you just need to change one copy of 20 to experiment, as Bryan suggested.
distance = 20
movements = {
    'w': (0, -distance),
    'a': (-distance, 0),
    's': (0, distance),
    'd': (distance, 0),
    }

def keypress(event):
    key = event.keysym.lower()
    if key in movements:  # guard so that other keys do not raise a KeyError
        canvas.move(player, *movements[key])
While writing this, I took the opportunity to show how to use a dict to replace multiple conditionals by factoring out the common code from the changing code. The * syntax in the move call separates the tuple into two arguments.
PYTHON - input decimal to fraction When working on python, I was able to convert a fraction to a decimal: the user would input a numerator, then a denominator, and n/d gives the result (fairly simple). But I can't work out how to convert a decimal into a fraction. I want the user to input any decimal (i.e. 0.5) and then find its simplest form (1/2). Any help would be greatly appreciated. Thanks.
Use the fractions module.
from fractions import Fraction

f = Fraction(14, 8)
print(f)         # Output: 7/4
print(float(f))  # Output: 1.75

f = Fraction(1.75)
print(f)         # Output: 7/4
print(float(f))  # Output: 1.75
It accepts both pairs of numerator/denominator as well as float decimal numbers to construct a Fraction object.
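One caveat worth knowing: constructing a Fraction from a float uses the float's exact binary value, which is not always the "simplest form" you might expect; Fraction.limit_denominator() handles that:
from fractions import Fraction

print(Fraction(0.1))                      # 3602879701896397/36028797018963968
print(Fraction(0.1).limit_denominator())  # 1/10
print(Fraction(0.5))                      # 1/2 (0.5 is exact in binary)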
Is there a simple way to copy text from the debug console of PyCharm? Is there a sane way to copy log text from the PyCharm console, instead of selecting it slowly with the mouse (especially when there's an abundance of text there)? There seems to be no "Select All" in the debug console. Is that on purpose? Is there any way to copy (all of) the text from the console sanely? I do hope the guys and girls at JetBrains understand that Notepad++ is way easier when looking at/analysing logs?
With VIM emulation on:
1) Use the scrollbar to scroll to the end of what you want to copy (click/drag the bar).
2) Click and drag up to highlight a few lines.
3) Use the scrollbar again to scroll to the start of what you want to copy.
4) Shift-click at the start of the text you want to copy (everything in between should now be highlighted).
5) Right click and select Copy.
This isn't as quick as Ctrl-A, but quicker than turning VIM emulation off/on. This worked for me in the Python Console, Windows 10, PyCharm Community 2018.1.2.
List within a dataframe cell - counting the number of items in list I currently have a dataframe that contains a list of floats within a column, and I want to add a second column to the df that counts the length of the list within the first column (the number of items within that list). What would be the easiest way to go about doing this and would I have to write a function that iterates over each item in the column?
This should work:
df['list_len'] = df['list_column'].str.len()
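For instance (the column name list_column is just a placeholder; pandas' .str.len() also works element-wise on list-valued object columns, and .apply(len) is an equivalent spelling):
import pandas as pd

df = pd.DataFrame({'list_column': [[1.0, 2.0], [3.5], [0.1, 0.2, 0.3]]})
df['list_len'] = df['list_column'].str.len()   # -> 2, 1, 3
df['list_len'] = df['list_column'].apply(len)  # same result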
How to read specific sheets from My XLS file in Python As of now I can read all sheets of an Excel file.
e.msgbox("select Excel File")
updated_deleted_xls = e.fileopenbox()
book = xlrd.open_workbook(updated_deleted_xls, formatting_info=True)
openfile = e.fileopenbox()
for sheet in book.sheets():
    for row in range(sheet.nrows):
        for col in range(sheet.ncols):
            thecell = sheet.cell(row, 0)
            xfx = sheet.cell_xf_index(row, 0)
            xf = book.xf_list[xfx]
If you open your editor from the desktop or command line, you would have to specify the file path while trying to read the file:
import pandas as pd

df = pd.read_excel(r'File path', sheet_name='Sheet name')
Alternatively, if you open your editor in the file's directory, then you could read directly using the pandas library:
import pandas as pd

df = pd.read_excel('KPMG_VI_New_raw_data_update_final.xlsx', sheet_name='Title Sheet')
df1 = pd.read_excel('KPMG_VI_New_raw_data_update_final.xlsx', sheet_name='Transactions')
df2 = pd.read_excel('KPMG_VI_New_raw_data_update_final.xlsx', sheet_name='NewCustomerList')
df3 = pd.read_excel('KPMG_VI_New_raw_data_update_final.xlsx', sheet_name='CustomerDemographic')
df4 = pd.read_excel('KPMG_VI_New_raw_data_update_final.xlsx', sheet_name='CustomerAddress')
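If you want every sheet at once, pandas also accepts sheet_name=None, which returns a dict mapping sheet names to DataFrames (the filename below is a placeholder):
import pandas as pd

sheets = pd.read_excel('workbook.xlsx', sheet_name=None)
for name, frame in sheets.items():
    print(name, frame.shape)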
Is there a way to raise normal Django form validation through ajax? I found a similar question which is quite a bit outdated. I wonder if it's possible without the use of another library. Currently, the forms.ValidationError will trigger form_invalid, which will only return a JSON response with the error and status code. I have an ajax form and wonder if the usual django field validations can occur on the form field upon an ajax form submit. My form triggering the error:
class PublicToggleForm(ModelForm):
    class Meta:
        model = Profile
        fields = [
            "public",
        ]

    def clean_public(self):
        public_toggle = self.cleaned_data.get("public")
        if public_toggle is True:
            raise forms.ValidationError("ERROR")
        return public_toggle
The corresponding View's mixin for ajax:
from django.http import JsonResponse

class AjaxFormMixin(object):
    def form_invalid(self, form):
        response = super(AjaxFormMixin, self).form_invalid(form)
        if self.request.is_ajax():
            return JsonResponse(form.errors, status=400)
        else:
            return response

    def form_valid(self, form):
        response = super(AjaxFormMixin, self).form_valid(form)
        if self.request.is_ajax():
            print(form.cleaned_data)
            print("VALID")
            data = {
                'message': "Successfully submitted form data."
            }
            return JsonResponse(data)
        else:
            return response
The View:
class PublicToggleFormView(AjaxFormMixin, FormView):
    form_class = PublicToggleForm
    success_url = '/form-success/'
On the browser console, errors come through as a 400 Bad Request, followed by the responseJSON which has the correct ValidationError message.
Edit: Any way to get the field validation to show client-side?
edit: Additional code: Data received on the front-end (jqXHR object, trimmed to the relevant fields):
responseJSON: {public: ["ERROR"]}
responseText: '{"public": ["ERROR"]}'
status: 400
statusText: "Bad Request"
The form in the template is rendered using Django's {{as_p}}:
{% if request.user == object.user %}
Make your profile public?
<form class="ajax-public-toggle-form" method="POST" action='{% url "profile:detail" username=object.user %}' data-url='{% url "profile:public_toggle" %}'>
  {{public_toggle_form.as_p|safe}}
</form>
{% endif %}
Javascript:
$(document).ready(function(){
    var $myForm = $('.ajax-public-toggle-form')
    $myForm.change(function(event){
        var $formData = $(this).serialize()
        var $endpoint = $myForm.attr('data-url') || window.location.href // or set your own url
        $.ajax({
            method: "POST",
            url: $endpoint,
            data: $formData,
            success: handleFormSuccess,
            error: handleFormError,
        })
    })

    function handleFormSuccess(data, textStatus, jqXHR){
        // no need to do anything here
        console.log(data)
        console.log(textStatus)
        console.log(jqXHR)
    }

    function handleFormError(jqXHR, textStatus, errorThrown){
        // on error, reset form. raise validation error
        console.log(jqXHR)
        console.log("==2" + textStatus)
        console.log("==3" + errorThrown)
        $myForm[0].reset(); // reset form data
    }
})
So you have your error response in JSON, formatted as {field_key: error_messages, ...}. Then all you have to do is, for example, create <div class="error" style="display: none;"></div> under every rendered form field, which can be done by rendering the form manually field by field, or you can create a block with errors below the form such as:
<div id="public_toggle_form-errors" class="form-error" style="display: none;"></div>
add some css to the form:
div.form-error {
    margin: 5px;
    -webkit-box-shadow: 0px 0px 5px 0px rgba(255,125,125,1);
    -moz-box-shadow: 0px 0px 5px 0px rgba(255,125,125,1);
    box-shadow: 0px 0px 5px 0px rgba(255,125,125,1);
}
so it'll look like something wrong happened, and then add to the handleFormError function code:
function handleFormError(jqXHR, textStatus, errorThrown){
    ...
    // the parsed error JSON lives on jqXHR.responseJSON
    $('#public_toggle_form-errors').text(jqXHR.responseJSON["public"]);
    $('#public_toggle_form-errors').show();
    ...
}
I think you'll get the idea.
How to create a variable containing input from a list in Python I have lists containing .las files, of different lengths. I couldn't figure out how to create a variable containing all the list entries separated by a ";". Thanks for your help, Mauro
Well, I'm not sure if I get you, but:
some_list = ['file.las', 'another_file.las', 'something.las']
e = ';'.join(some_list)
Regex Search program, how not to duplicate answers while iterating through text? (Python3) I am working on the 'Regex Search' project from the book Automate the Boring Stuff with Python. I tried searching for an answer, but I failed to find a related thread. The task is: "Write a program that opens all .txt files in a folder and searches for any line that matches a user-supplied regular expression. The results should be printed to the screen." I am sending below the part of the code that I have a problem with:
import glob, os, re

os.chdir(r'C:\Users\PythonScripts')

for file in glob.glob("*.txt"):
    content = open(file)
    text = content.read()
    print(text)

    for i in text:
        whatToFind = re.compile(r'panda|by|NOUN')
        finded = whatToFind.findall(text)
        print(finded)
I would like to find these 3 words: panda|by|NOUN. After iterating through the text, I get output with answers repeated a couple of times. I get the answer 'by' two times, but it should appear only once. For example, for the text: 'The ADJECTIVE panda walked to the NOUN and then VERB. A nearby NOUN was unaffected by these events.' I get: ['panda', 'NOUN', 'by', 'NOUN', 'by']. I should get only the first 4 strings. I tried to fix it but I have no idea how. Can anyone tell me what I am doing wrong?
That's because you are missing the word boundaries in your regular expression pattern, so by from the "nearby" word was also matched:
In [3]: import re
In [4]: whatToFind = re.compile(r'panda|by|NOUN')
In [5]: s = 'The ADJECTIVE panda walked to the NOUN and then VERB. A nearby NOUN was unaffected by these events.'
In [6]: whatToFind.findall(s)  # no word boundaries
Out[6]: ['panda', 'NOUN', 'by', 'NOUN', 'by']
In [7]: whatToFind = re.compile(r'\b(panda|by|NOUN)\b')
In [8]: whatToFind.findall(s)  # word boundaries
Out[8]: ['panda', 'NOUN', 'NOUN', 'by']
Note that there is probably a better way to look for words in an English text: using a natural language processing toolkit (nltk) and its word_tokenize() function:
In [9]: from nltk import word_tokenize
In [10]: desired_words = {'panda', 'by', 'NOUN', 'cookie'}
In [11]: set(word_tokenize(s)) & desired_words  # note: "cookie" was not found
Out[11]: {'NOUN', 'by', 'panda'}
Create instances from list of classes How do I create instances of classes from a list of classes? I've looked at other SO answers but didn't understand them. I have a list of classes:
list_of_classes = [Class1, Class2]
Now I want to create instances of those classes, where the variable name storing the class is the name of the class. I have tried:
for cls in list_of_classes:
    str(cls) = cls()
but get the error: "SyntaxError: can't assign to function call". Which is of course obvious, but I don't know what else to do. I really want to be able to access the class by name later on. Let's say we store all the instances in a dict and that one of the classes is called ClassA; then I would like to be able to access the instance by dict['ClassA'] later on. Is that possible? Is there a better way?
You say that you want "the variable name storing the class [to be] the name of the class", but that's a very bad idea. Variable names are not data. The names are for programmers to use, so there's seldom a good reason to generate them using code. Instead, you should probably populate a list of instances, or if you are sure that you want to index by class name, use a dictionary mapping names to instances. I suggest something like:
list_of_instances = [cls() for cls in list_of_classes]
Or this:
class_name_to_instance_mapping = {cls.__name__: cls() for cls in list_of_classes}
One of the rare cases where it can sometimes make sense to automatically generate variables is when you're writing code to create or manipulate class objects themselves (e.g. producing methods automatically). This is somewhat easier and less fraught than creating global variables, since at least the programmatically produced names will be contained within the class namespace rather than polluting the global namespace. The collections.namedtuple class factory from the standard library, for example, creates tuple subclasses on demand, with special descriptors as attributes that allow the tuple's values to be accessed by name. Here's a very crude example of how you could do something vaguely similar yourself, using getattr and setattr to manipulate attributes on the fly:
def my_named_tuple(attribute_names):
    class Tup:
        def __init__(self, *args):
            if len(args) != len(attribute_names):
                raise ValueError("Wrong number of arguments")
            for name, value in zip(attribute_names, args):
                setattr(self, name, value)  # this programmatically sets attributes by name!

        def __iter__(self):
            for name in attribute_names:
                yield getattr(self, name)  # you can look up attributes by name too

        def __getitem__(self, index):
            name = attribute_names[index]
            if isinstance(index, slice):
                return tuple(getattr(self, n) for n in name)
            return getattr(self, name)

    return Tup
It works like this:
>>> T = my_named_tuple(['foo', 'bar'])
>>> i = T(1, 2)
>>> i.foo
1
>>> i.bar
2