AttributeError: 'Stud' object has no attribute 'sno' at line no.11 program is:class Stud: def __init__(self): self.displval() print("I am Constructor") self.sno = int(input("Enter the roll number")) self.sname = (input("Enter the Name")) def displval(self): print("="*50) print(self.sno) print(self.sname)so = Stud()
Your main problem is that you are calling self.displval before you set the attributes it tries to display.

class Stud:
    def __init__(self):
        print("I am Constructor")
        self.sno = int(input("Enter the roll number"))
        self.sname = input("Enter the Name")
        self.displval()

    def displval(self):
        print("=" * 50)
        print(self.sno)
        print(self.sname)

However, __init__ is doing too much work. It should receive values as arguments and simply set the attributes. If you want Stud to provide a way to collect those arguments from the user, define an additional class method. (It's also debatable whether __init__ should be printing anything to standard output, but I'll leave that for now.)

class Stud:
    def __init__(self, sno, sname):
        self.sno = sno
        self.sname = sname
        self.displval()

    @classmethod
    def create_with_user_input(cls):
        sno = int(input("Enter the roll number"))
        sname = input("Enter the Name")
        return cls(sno, sname)

    def displval(self):
        print("=" * 50)
        print(self.sno)
        print(self.sname)

so = Stud.create_with_user_input()
How to feed in a time-series into pyunicorn.timeseries.surrogates? I struggle to find out how to feed a time-series that consists of a one-column .txt file into pyunicorn’s timeseries.surrogates. My one-column .txt file contains many numerical datapoints that constitute the time-series.Pyunicorn offers several examples how to apply its surrogate methods in this link: http://www.pik-potsdam.de/~donges/pyunicorn/api/timeseries/surrogates.htmlParadigmatically, the last surrogate option in the link above, namely for white_noise_surrogates(original_data), Pyunicorn offers the following explanatory code.ts = Surrogates.SmallTestData().original_datasurrogates = Surrogates.SmallTestData().white_noise_surrogates(ts)Clearly, the example data SmallTestData() is part of pyunicorn. But how would I have to enter my data, that is, Data_2, into the code above? The codesurrogates = Surrogates.white_noise_surrogates(Data_2) returns the messageTypeError: Surrogates.correlated_noise_surrogates() missing 1 required positional argument: 'original_data'Trying the code in another tryTS = Surrogates.Data_2().original_dataSurrogate = Surrogates.correlated_noise_surrogates(TS) returns into the messageAttributeError: type object 'Surrogates' has no attribute 'Data_2'I assume that there is a simple solution, but I cannot figure it out. Here is an overview of my code:from pyunicorn.timeseries import Surrogatesimport pyunicorn as pnData_2 = np.loadtxt("/path-to-data.txt") # Surrogate time-seriesTS = Surrogates.Data_2().original_dataSurrogate = Surrogates.correlated_noise_surrogates(TS)Does anyone understand how to properly feed or insert a time-series into pyunicorn’s timeseries.surrogates options?
You need to instantiate the class Surrogates with your data:

TS = Surrogates(original_data=Data_2)
my_surr = TS.correlated_noise_surrogates(Data_2)
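For completeness, a minimal sketch of how the pieces from the question fit together. The reshape to a single-row 2-D array is an assumption — pyunicorn generally works on arrays indexed as (variable, time) — so check it against your version's documentation:

import numpy as np
from pyunicorn.timeseries import Surrogates

Data_2 = np.loadtxt("/path-to-data.txt")   # one-column file -> 1-D array
Data_2 = Data_2.reshape(1, -1)             # assumption: one row per variable

TS = Surrogates(original_data=Data_2)
surrogate = TS.correlated_noise_surrogates(Data_2)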
is there an API to force facebook to scrape a website automatically I'm aware you can force update a page's cache by entering the URL on Facebook's debugger tool while been logged in as admin for that app/page: https://developers.facebook.com/tools/debugBut what I need is a way to automatically call an API endpoint or something from our internal app whenever somebody from our Sales department updates the main image of one of our pages. It is not an option to ask thousands of sales people to login as an admin and manually update a page's cache whenever they update one of our item's description or image.We can't afford to wait 24 hours for Facebook to update its cache because we're getting daily complaints from our clients whenever they don't see a change showing up as soon as we change it on our side.
This worked a while ago:

$.post('https://graph.facebook.com', {
    id: 'https://www.yourdomain.com/someurl',
    scrape: true
}, (response) => {
    console.log(response);
});

In this case, with jQuery - but of course you can also use the fetch API or axios.
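If the call has to come from a server-side script instead of the browser, the same request can be sketched in Python with requests — the id and scrape parameters are the ones from the snippet above, and whether an access token is also required depends on your app and Graph API version:

import requests

response = requests.post(
    "https://graph.facebook.com",
    data={
        "id": "https://www.yourdomain.com/someurl",   # the page you want re-scraped
        "scrape": "true",
        # "access_token": "APP_ID|APP_SECRET",        # assumption: often required
    },
)
print(response.status_code, response.text)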
Yearly Interest on house and deposit Suppose you currently have $50,000 deposited into a bank account and the account pays you a constant interest rate of 3.5% per year on your deposit. You are planning to buy a house with the current price of $300,000. The price will increase by 1.5% per year. It still requires a minimum down payment of 20% of the house price.Write a while loop to calculate how many (integer) years you need to wait until you can afford the down payment to buy the house.m = 50000 #money you havei = 0.035 #interest rateh = 300000 #house pricef = 0.015 #amount house will increase by per yeard= 0.2 #percent of down payment on housey = 0 #number of yearsx = 0 #money for the down paymentmn = h*d #amount of down paymentwhile m <= mn: m = (m+(m*i)) #money you have plus money you have times interest y = y + 1 #year plus one mn = mn +(h*f*y)print(int(y))The answer you should get is 10.I keep getting the wrong answer, but I am not sure what is incorrect.
You can simplify the code by using the compound interest formula.

def compound_interest(amount, rate, years):
    return amount * (rate + 1) ** years

while compound_interest(m, i, y) < d * compound_interest(h, f, y):
    y += 1

If you are allowed to do without the while loop, you can solve the inequality for the number of years y directly. That gives this code snippet:

import math

base = (i + 1) / (f + 1)
arg = (d * h) / m
y = math.ceil(math.log(arg, base))
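Putting the function together with the variables from the question gives a self-contained sketch (values taken from the question; it prints the expected 10):

def compound_interest(amount, rate, years):
    return amount * (rate + 1) ** years

m = 50000    # money you have
i = 0.035    # savings interest rate per year
h = 300000   # current house price
f = 0.015    # house price growth per year
d = 0.2      # down payment fraction

y = 0
while compound_interest(m, i, y) < d * compound_interest(h, f, y):
    y += 1

print(y)  # 10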
Problem calling variables in a function. UnboundLocalError: local variable 'prev_avg_gain' referenced before assignment I am trying to write a function to calculate RSI data using input from a list or by pulling live price data from an API.The script worked fine when feeding it data directly from a list, but while I am trying to convert it to a function, I am experiencing issues.The function needs to remember the output from a previous run in order to calculate a smoothed average, but I keep getting an error that the local variable is being called before assignment. I've eliminated a lot of the code to keep the post short, but it errors on the 15th run. On the 14th run I think I am defining the variable and printing the value, so I don't understand why it errors.def calc_rsi(price, i): while price != complete: window.append(price) if i == 14: avg_gain = sum(gains) / len(gains) avg_loss = sum(losses) / len(losses) if i > 14: avg_gain = (prev_avg_gain * (window_length - 1) + gain) / window_length avg_loss = (prev_avg_loss * (window_length - 1) + loss) / window_length if i >= 14: rs = avg_gain / avg_loss rsi = round(100 - (100 / (1 + rs)), 2) prev_avg_gain = avg_gain prev_avg_loss = avg_loss print ("rsi", rsi) print ("gain", prev_avg_gain) print ()The thing that is throwing me for a real loop (pun intended) is that on run 14, my print statement 'print ("gain=", prev_avg_gain)' returns the proper value, so I know that it is assigning a value to the variable.... I've tried adding the 'prev_avg_gain = avg_gain ' to the block of code for 'if i == 14:' and 'if i > 14:' rather than doing it once in the >= block and it throws the same error.I am new to python and scripting so please go easy :)
The code as you have pasted it will work if you make one call to calc_rsi() in which i is incremented from <=14 to >14; in that case prev_avg_gain will be remembered. But if you make 2 calls to calc_rsi(), then prev_avg_gain will not be remembered between calls.

To demonstrate, I use a trimmed-down example:

def calc_rsi(price, i):
    while i <= price:
        if i == 14:
            avg_gain = 10
        if i > 14:
            avg_gain = (prev_avg_gain * (10 - 1) + 1) / 5
        if i >= 14:
            prev_avg_gain = avg_gain
            print("gain", prev_avg_gain)
        i += 1

# i loops from 14 to 16 in one call, this works!
calc_rsi(price=16, i=14)

# i loops from 13 to 14, prev_avg_gain is set
calc_rsi(price=14, i=13)

# i loops from 15 to 16, throws error! because prev_avg_gain is not remembered
calc_rsi(price=16, i=15)
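If the function genuinely needs to be called once per price update, one way to keep the smoothed averages between calls is to pass them in and return them explicitly. This is a sketch of the idea only, not the original RSI code (the seeding on the first call is simplified):

def smooth(prev_avg, value, window_length=14):
    # Wilder-style smoothing; seed with the raw value on the first call.
    if prev_avg is None:
        return value
    return (prev_avg * (window_length - 1) + value) / window_length

# The caller owns the state and feeds it back in on the next call.
prev_avg_gain = prev_avg_loss = None
for gain, loss in [(1.0, 0.5), (0.8, 0.6), (1.2, 0.4)]:
    prev_avg_gain = smooth(prev_avg_gain, gain)
    prev_avg_loss = smooth(prev_avg_loss, loss)
    print(prev_avg_gain, prev_avg_loss)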
Getting socket.gaierror: [Errno 8] nodename nor servname provided,or not known I'm trying to set up a proxy and I'm using the socket and httlib modules. I have my web browser pointed to the localhost and port on which the server is running,and I'm handling HTTP requests through the server.I extract the url from the HTTP header when the browser is requesting a web page and then trying to make a request through the proxy in the following manner:-conn = httplib.HTTPSConnection(url,serverPort)conn.request("GET",url)r1 = conn.getresponse()print r1.status,r1.reasonNote that the serverPort parameter in the first line is the port the proxy is on and url is the url extracted from the HTTP header received from the browser when it makes a request for a webpage.So I seem to be getting an error when I run my proxy and have the browser type in an address such as http://www.google.com or http://www.getmetal.org.The error is:-socket.gaierror: [Errno 8] nodename nor servname provided, or not knownThere is also a trace:-http://i.stack.imgur.com/UgZwD.pngIf anyone has any suggestions as to what the problem may be I'd be delighted.Here is code for the proxy server: NOTE: IF you are testing this,there may be some indentation issues due to having to put everything 4 spaces to the right to have it display as code segmentfrom socket import *import httplib import webbrowserimport stringserverPort = 2000serverSocket = socket(AF_INET,SOCK_STREAM)serverSocket.bind(('', serverPort))serverSocket.listen(2)urlList=['www.facebook.com','www.youtube.com','www.twitter.com']print 'The server is ready to receive'while 1: connectionSocket, addr = serverSocket.accept() print addr req= connectionSocket.recv(1024) #parse Get request here abnd extract url reqHeaderData= req.split('\n') newList=[] x=0 while x<len(reqHeaderData): st=reqHeaderData[x] element= st.split(' ') print element newList.append(element) x=x+1 print newList[0][1] url = newList[0][1] url= url[:-1] for i in urlList: if url ==i: raise Exception("The website you are trying to access is blocked") connectionSocket.send('Valid') print(url) conn = httplib.HTTPSConnection(url,serverPort) print conn conn.request("GET",url) print "request printed" r1 = conn.getresponse() print r1.status,r1.reason print r1 #200 OK data = r1.read() x= r1.getheaders() for i in x: print i connectionSocket.close()
Here's a common mistake I see...

url = "https://foo.tld"
port = 443
conn = httplib.HTTPConnection(url, port)

This won't work because of the "https://" — the first argument must be a bare host name, not a full URL. You should do this instead:

url = "foo.tld"
port = 443
conn = httplib.HTTPConnection(url, port)

This would work, and the same applies to httplib.HTTPSConnection, which your proxy uses. I'm not sure if this is your specific problem, but it is certainly something to verify.
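Since the URL you parse out of the browser's request line arrives with the scheme attached, a small sketch of stripping it with the standard library rather than by hand (the module is urlparse on Python 2, which your code uses, and urllib.parse on Python 3):

from urlparse import urlparse  # Python 2; use `from urllib.parse import urlparse` on Python 3

url = "https://www.google.com/some/page"
parsed = urlparse(url)
host = parsed.hostname        # "www.google.com" - what HTTPSConnection wants
path = parsed.path or "/"     # "/some/page"    - what request("GET", ...) wants
print host, path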
copy by value object with no __deepcopy__ attr I'm trying to deepcopy an instance of a class, but I get a:object has no __deepcopy__ atrributeerror.The class is locked away in a .pyd, so it cannot be modified.Is there a way to copy these objects by value without using deepcopy?
You'll have to copy the object state. The easiest way would be to use the pickle module:

import pickle
copy = pickle.loads(pickle.dumps(someobject))

This is not guaranteed to work. All the pickle module does for you in the general case is pickle the instance attributes, create the instance anew from the class reference, and restore the attribute contents on that.

Since this is a C extension object, if the instance state is not exposed to you and pickling is not explicitly supported by the type, this won't work either. In that case, you have no other options, I'm afraid.
Slew rate measuring I have to measure slew rates in signals like the one in the image below. I need the slew rate of the part marked by the grey arrow.At the moment I smoothen the signal with a hann window to get rid of eventual noise and to flatten the peaks. Then I search (starting right) the 30% and 70% points and calculate the slew rate between this two points.But my problem is, that the signal gets flattened after smoothing. Therefore the calculated slew rate is not as high as it should be. An if I reduce smoothing, then the peaks (you can see right side in the image) get higher and the 30% point is eventually found at the wrong position.Is there a better/safer way to find the required slew rate?
If you know between what values your signal is transitioning, and your noise is not too large, you can simply compute the time differences between all crossings of 30% and all crossings of 70% and keep the smallest one:import numpy as npimport matplotlib.pyplot as plts100, s0 = 5, 0signal = np.concatenate((np.ones((25,)) * s100, s100 + (np.random.rand(25) - 0.5) * (s100-s0), np.linspace(s100, s0, 25), s0 + (np.random.rand(25) - 0.5) * (s100-s0), np.ones((25,)) * s0))# Interpolate to find crossings with 30% and 70% of signal# The general linear interpolation formula between (x0, y0) and (x1, y1) is:# y = y0 + (x-x0) * (y1-y0) / (x1-x0)# to find the x at which the crossing with y happens:# x = x0 + (y-y0) * (x1-x0) / (y1-y0)# Because we are using indices as time, x1-x0 == 1, and if the crossing# happens within the interval, then 0 <= x <= 1.# The following code is just a vectorized version of the abovedelta_s = np.diff(signal)t30 = (s0 + (s100-s0)*.3 - signal[:-1]) / delta_sidx30 = np.where((t30 > 0) & (t30 < 1))[0]t30 = idx30 + t30[idx30]t70 = (s0 + (s100-s0)*.7 - signal[:-1]) / delta_sidx70 = np.where((t70 > 0) & (t70 < 1))[0]t70 = idx70 + t70[idx70]# compute all possible transition times, keep the smallestidx = np.unravel_index(np.argmin(t30[:, None] - t70), (len(t30), len(t70),))print t30[idx[0]] - t70[idx[1]]# 9.6plt. plot(signal)plt.plot(t30, [s0 + (s100-s0)*.3]*len(t30), 'go')plt.plot(t30[idx[0]], [s0 + (s100-s0)*.3], 'o', mec='g', mfc='None', ms=10)plt.plot(t70, [s0 + (s100-s0)*.7]*len(t70), 'ro')plt.plot(t70[idx[1]], [s0 + (s100-s0)*.7], 'o', mec='r', mfc='None', ms=10 )plt.show()
unable to iterate over images in the video using moviepy I need to create a video by selecting a series of images in folder and add music to the video. With the below approach, I'm able to generate the video but unable to iterate the images while the video is running.for filename in os.listdir("E://images"): if filename.endswith(".png"): clips.append(ImageClip("E://images//"+filename).set_duration(8)) finalVideo = CompositeVideoClip( clips ).set_duration(8)slides=[finalVideo]final = CompositeVideoClip(slides, size=(100,200)).set_duration(8)audioclip = AudioFileClip("E://songs//new.mp3")videoclip2 = final.set_audio(audioclip)videoclip2.write_videofile("test.mp4",fps=24)I tried with this link as well Convert image sequence to video using Moviepy instead of using CompositeVideoClip i tried with concat_clip = concatenate_videoclips(clips, method="compose")but it's not working for me. Pls suggestThanks
I finally got it!

The issue was the images I used: earlier the images were 3 MB, 1 MB, etc., but later I understood that all images should be above 14 KB and below 100 KB, and preferably jpg files.

clips = []
for file in os.listdir("E:\\images\\"):
    if file.endswith(".jpg"):
        clips.append(VideoFileClip("E:\\images\\" + file).set_duration(10))

video = concatenate_videoclips(clips, method='compose')
audioclip = AudioFileClip("back.mp3", fps=44100)
videoclip = video.set_audio(audioclip)
videoclip.write_videofile('ab23.mp4', codec='mpeg4', fps=24, audio=True)

Thanks everybody.

My technology stack:
python: 3.8.1
moviepy: 1.0.1
Dataframe horizontal stacked bar plot I am reading the movielens user data. I want to plot the age and occupation grouped by gender (in two separate plots). But I get this error:user_df.groupby(['gender'])['age'].unstack().plot.bar()AttributeError: Cannot access callable attribute 'unstack' of 'SeriesGroupBy' objects, try using the 'apply' methodI would like the plot to be similar to the example in http://benalexkeen.com/bar-charts-in-matplotlib/The data format is like :user_id age gender occupation zipcode0 1 24 M technician 857111 2 53 F other 940432 3 23 M writer 320673 4 24 M technician 435374 5 33 F other 15213
You can try something like this:

df.groupby(['occupation'])['user_id'].nunique().plot.bar()

For both gender and occupation, you can do:

df.groupby(['occupation', 'gender'])['user_id'].size().unstack().plot.bar()
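Applied to the question's columns, the same pattern gives the two plots asked for (occupation and age, each split by gender) — a sketch assuming user_df is the DataFrame shown in the question:

import matplotlib.pyplot as plt

# Occupation counts split by gender, side-by-side bars
user_df.groupby(['occupation', 'gender']).size().unstack().plot.bar()

# Age distribution split by gender; binning ages into decades keeps the x-axis readable
user_df.assign(age_group=(user_df['age'] // 10) * 10) \
       .groupby(['age_group', 'gender']).size().unstack().plot.bar(stacked=True)

plt.show()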
Parse div element from html with style attributes I'm trying to get the text Something here I want to get inside the div element from a html file using Python and BeautifulSoup.This is how part of the code looks like in html:<div xmlns="" id="idp46819314579224" style="box-sizing: border-box; width: 100%; margin: 0 0 10px 0; padding: 5px 10px; background: #d43f3a; font-weight: bold; font-size: 14px; line-height: 20px; color: #fff;" class="" onclick="toggleSection('idp46819314579224-container');" onmouseover="this.style.cursor='pointer'">Something here I want to get<div id="idp46819314579224-toggletext" style="float: right; text-align: center; width: 8px;"> - </div></div>And this is how I tried to do:vu = soup.find_all("div", {"style" : "background: #d43f3a"})for div in vu: print(div.text)I use loop because there are several div with different id but all of them has the same background colour. It has no errors, but I got no output.How can I get the text using the background colour as the condition?
The style attribute has other content inside it:

style="box-sizing: ....; ....;"

Your current code is asking if style == "background: #d43f3a", which it is not. What you can do is ask if "background: #d43f3a" in style -- a sub-string check.

One approach is passing a regular expression:

>>> import re
>>> vu = soup.find_all("div", style=re.compile("background: #d43f3a"))
>>> for div in vu:
...     print(div.text.strip())
Something here I want to get

You can also say the same thing using CSS selectors:

soup.select('div[style*="background: #d43f3a"]')

Or by passing a function/lambda (the "style and" guard keeps it from failing on divs that have no style attribute at all, where BeautifulSoup passes None):

>>> vu = soup.find_all("div", style=lambda style: style and "background: #d43f3a" in style)
>>> for div in vu:
...     print(div.text.strip())
Something here I want to get
Name some non-trivial sites written using IronPython & Silverlight Just what the title says. It'd be nice to know a few non-trivial sites out there using Silverlight in Python.
My current job is writing business apps for a German / Swiss media consortium using IronPython and Silverlight. We're gradually moving all our web apps over to IronPython / Silverlight as they are faster to build, look nicer and perform better than the JavaScript equivalents. Definitely not trivial, but not public either, I'm afraid (although our main app may be used by customers - advertisers - when we port that over).
How can I break this multithreaded python script into "chunks"? I'm processing 100k domain names into a CSV based on results taken from Siteadvisor using urllib (not the best method, I know). However, my current script creates too many threads and Python runs into errors. Is there a way I can "chunk" this script to do X number of domains at a time (say, 10-20) to prevent these errors? Thanks in advance.import threadingimport urllibclass Resolver(threading.Thread): def __init__(self, address, result_dict): threading.Thread.__init__(self) self.address = address self.result_dict = result_dict def run(self): try: content = urllib.urlopen("http://www.siteadvisor.com/sites/" + self.address).read(12000) search1 = content.find("didn't find any significant problems.") search2 = content.find('yellow') search3 = content.find('web reputation analysis found potential security') search4 = content.find("don't have the results yet.") if search1 != -1: result = "safe" elif search2 != -1: result = "caution" elif search3 != -1: result = "warning" elif search4 != -1: result = "unknown" else: result = "" self.result_dict[self.address] = result except: passdef main(): infile = open("domainslist", "r") intext = infile.readlines() threads = [] results = {} for address in [address.strip() for address in intext if address.strip()]: resolver_thread = Resolver(address, results) threads.append(resolver_thread) resolver_thread.start() for thread in threads: thread.join() outfile = open('final.csv', 'w') outfile.write("\n".join("%s,%s" % (address, ip) for address, ip in results.iteritems())) outfile.close()if __name__ == '__main__': main()Edit: new version, based on andyortlieb's suggestions.import threadingimport urllibimport timeclass Resolver(threading.Thread): def __init__(self, address, result_dict, threads): threading.Thread.__init__(self) self.address = address self.result_dict = result_dict self.threads = threads def run(self): try: content = urllib.urlopen("http://www.siteadvisor.com/sites/" + self.address).read(12000) search1 = content.find("didn't find any significant problems.") search2 = content.find('yellow') search3 = content.find('web reputation analysis found potential security') search4 = content.find("don't have the results yet.") if search1 != -1: result = "safe" elif search2 != -1: result = "caution" elif search3 != -1: result = "warning" elif search4 != -1: result = "unknown" else: result = "" self.result_dict[self.address] = result outfile = open('final.csv', 'a') outfile.write(self.address + "," + result + "\n") outfile.close() print self.address + result threads.remove(self) except: passdef main(): infile = open("domainslist", "r") intext = infile.readlines() threads = [] results = {} for address in [address.strip() for address in intext if address.strip()]: loop=True while loop: if len(threads) < 20: resolver_thread = Resolver(address, results, threads) threads.append(resolver_thread) resolver_thread.start() loop=False else: time.sleep(.25) for thread in threads: thread.join()# removed so I can track the progress of the script# outfile = open('final.csv', 'w')# outfile.write("\n".join("%s,%s" % (address, ip) for address, ip in results.iteritems()))# outfile.close()if __name__ == '__main__': main()
Your existing code will work beautifully - just modify your __init__ method inside Resolver to take in a list of addresses instead of one at a time, so instead of having one thread for each address, you have one thread for every 10 (for example). That way you won't overload the threading. You'll obviously have to slightly modify run as well so it loops through the list of addresses instead of the single self.address.

I can work up a quick example if you'd like, but from the quality of your code I feel as though you'll be able to handle it quite easily. Hope this helps!

EDIT: Example below as requested. Note that you'll have to modify main to send your Resolver instances lists of addresses instead of a single address - I couldn't handle this for you without knowing more about the format of your file and how the addresses are stored. Note - you could do the run method with a helper function, but I thought this might be more understandable as an example.

class Resolver(threading.Thread):
    def __init__(self, addresses, result_dict):
        threading.Thread.__init__(self)
        self.addresses = addresses  # Now takes in a list of multiple addresses
        self.result_dict = result_dict

    def run(self):
        for address in self.addresses:  # do your existing code for every address in the list
            try:
                content = urllib.urlopen("http://www.siteadvisor.com/sites/" + address).read(12000)
                search1 = content.find("didn't find any significant problems.")
                search2 = content.find('yellow')
                search3 = content.find('web reputation analysis found potential security')
                search4 = content.find("don't have the results yet.")
                if search1 != -1:
                    result = "safe"
                elif search2 != -1:
                    result = "caution"
                elif search3 != -1:
                    result = "warning"
                elif search4 != -1:
                    result = "unknown"
                else:
                    result = ""
                self.result_dict[address] = result
            except:
                pass
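Another way to cap the number of concurrent requests without managing threads by hand is a thread pool. Here is a sketch using concurrent.futures (Python 3 — the original script is Python 2, so treat this as the modern equivalent rather than a drop-in replacement, and the classification logic is abbreviated):

from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

def check(address):
    # The original run() method's full classification logic would go here.
    content = urlopen("http://www.siteadvisor.com/sites/" + address).read(12000)
    verdict = "safe" if b"didn't find any significant problems." in content else ""
    return address, verdict

addresses = ["example.com", "example.org"]   # read from "domainslist" in the real script
results = {}
with ThreadPoolExecutor(max_workers=20) as pool:   # at most 20 requests in flight
    for address, verdict in pool.map(check, addresses):
        results[address] = verdict
print(results)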
CODE: Python , Error Type: 'int' object is not subscriptable So, the problem is write a void function which takes a 4 digit number and add the square of first two digit and last two digit.and my solution isdef add():print("Enter a 4 Digit number")num = int(input())if 999 < num < 10000: c = int(num[0:2]) d = int(num[2:4]) e = (c ** 2) + (d ** 2) print(e)else: print("Enter a valid number")add()#it shows error: 'int' object is not subscriptable
This should work:

def add():
    print("Enter a 4 Digit number")
    num = int(input())
    if 999 < num < 10000:
        c = int(str(num)[0:2])  # You first need to convert it into str
        d = int(str(num)[2:4])  # Same here
        e = (c ** 2) + (d ** 2)
        print(e)
    else:
        print("Enter a valid number")

add()
How to predict a single sample with Keras I'm trying to implement a Fully Convolutional Neural Network and can successfully test the accuracy of the model on the test set after training. However, I'd like to use the model to make a prediction on a single sample only. Training was in batches. I believe what I'm missing is related to batch size and input shape. Here is the configuration for the network:def read(file_name): data = np.loadtxt(file_name, delimiter="\t") y = data[:, 0] x = data[:, 1:] return x, y.astype(int)train_data, train_labels = read("FordA_TRAIN.tsv")test_data, test_labels = read("FordA_TEST.tsv")train_data = train_data.reshape((train_data.shape[0], train_data.shape[1], 1))test_data = test_data.reshape((test_data.shape[0], test_data.shape[1], 1))num_classes = len(np.unique(train_labels))#print(train_data[0])# Shuffle the data to prepare for validation_split (and prevent overfitting for class order)idx = np.random.permutation(len(train_data))train_data = train_data[idx]train_labels = train_labels[idx]#Standardize labels to have a value between 0 and 1 rather than -1 and 1.train_labels[train_labels == -1] = 0test_labels[test_labels == -1] = 0def make_model(input_shape): input_layer = keras.layers.Input(input_shape) conv1 = keras.layers.Conv1D(filters=64, kernel_size=3, padding="same")(input_layer) conv1 = keras.layers.BatchNormalization()(conv1) conv1 = keras.layers.ReLU()(conv1) conv2 = keras.layers.Conv1D(filters=64, kernel_size=3, padding="same")(conv1) conv2 = keras.layers.BatchNormalization()(conv2) conv2 = keras.layers.ReLU()(conv2) conv3 = keras.layers.Conv1D(filters=64, kernel_size=3, padding="same")(conv2) conv3 = keras.layers.BatchNormalization()(conv3) conv3 = keras.layers.ReLU()(conv3) gap = keras.layers.GlobalAveragePooling1D()(conv3) output_layer = keras.layers.Dense(num_classes, activation="softmax")(gap) return keras.models.Model(inputs=input_layer, outputs=output_layer)model = make_model(input_shape=train_data.shape[1:])keras.utils.plot_model(model, show_shapes=True)epochs = 500batch_size = 32callbacks = [ keras.callbacks.ModelCheckpoint( "best_model.h5", save_best_only=True, monitor="val_loss" ), keras.callbacks.ReduceLROnPlateau( monitor="val_loss", factor=0.5, patience=20, min_lr=0.0001 ), keras.callbacks.EarlyStopping(monitor="val_loss", mode = 'min', patience=50, verbose=1),]model.compile( optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["sparse_categorical_accuracy"],)history = model.fit( train_data, train_labels, batch_size=batch_size, epochs=epochs, callbacks=callbacks, validation_split=0.2, verbose=1,)model = keras.models.load_model("best_model.h5")test_loss, test_acc = model.evaluate(test_data, test_labels)print("Test accuracy", test_acc)print("Test loss", test_loss)The above code can successfully display where the accuracy converged. Now, I'd like to make predictions on single samples. So far I have:def read(file_name): data = np.loadtxt(file_name, delimiter="\t") y = data[:, 0] x = data[:, 1:] return x, y.astype(int)test_data, test_labels = read("FordA_TEST_B.tsv")test_data = test_data.reshape((test_data.shape[0], test_data.shape[1], 1))test_labels[test_labels == -1] = 0print(test_data)model = keras.models.load_model("forda_original_model.h5")q = model.predict(test_data[0])This raises the error: ValueError: Error when checking input: expected input_1 to have 3 dimensions, but got array with shape (500, 1)How does the input have to be reshaped and what is the rule to go by? Any help is much appreciated!
Copied from a comment:

The model expects a batch dimension. Thus, to predict for a single sample, just expand the dimensions to create a single-sized batch by running:

q = model.predict(test_data[0][None, ...])

or

q = model.predict(test_data[0][np.newaxis, ...])
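Equivalently, a short sketch with an explicit call instead of indexing tricks (it assumes model and test_data from the question are in scope; np.expand_dims just adds the batch axis of size 1):

import numpy as np

sample = test_data[0]                    # shape (500, 1)
batch = np.expand_dims(sample, axis=0)   # shape (1, 500, 1)
q = model.predict(batch)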
variable equal to an updated dict does not work, why? Imagine I havedict1 = {'uno':[1,2,3],'dos':[4,5,6]}anddictA = {'AA':'ZZZZZ'}This works:dict1.update(dictA)Result: {'uno': [1, 2, 3], 'dos': [4, 5, 6], 'AA':'ZZZZZ'}But this does not work:B = dict1.update(dictA)The result is not an error but Result is None, which makes this behaviour (IMMO) strange and dangerous since the code does not crash.So Why is returning None and not giving error?Note:It looks like the way to go is:C = dict1.update(dictA)B = {}B.update(dict1)B.update(dictA)BC is noneB is OK here
update modifies dict1 in place with the dictionary given as a parameter and returns None.

Docs: dict.update([mapping])

mapping — Required. Either another dictionary object or an iterable of key:value pairs (iterables of length two). If keyword arguments are specified, the dictionary is then updated with those key:value pairs.

Return Value: None

Code:

dict1 = {'uno': [1, 2, 3], 'dos': [4, 5, 6]}
dict1.update({'tres': [7, 8, 9]})
print(dict1)
# {'uno': [1, 2, 3], 'dos': [4, 5, 6], 'tres': [7, 8, 9]}
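If the goal is a new merged dictionary (your B) while leaving dict1 untouched, dictionary unpacking avoids the None pitfall entirely (Python 3.5+):

dict1 = {'uno': [1, 2, 3], 'dos': [4, 5, 6]}
dictA = {'AA': 'ZZZZZ'}

B = {**dict1, **dictA}   # new dict; dict1 and dictA are unchanged
print(B)       # {'uno': [1, 2, 3], 'dos': [4, 5, 6], 'AA': 'ZZZZZ'}
print(dict1)   # still {'uno': [1, 2, 3], 'dos': [4, 5, 6]}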
Object of type User is not JSON serializable in DRF I am customizing the API that I give when I send the get request. The following error occurred when the get request was sent after customizing the response value using GenericAPIView.tracebackTraceback (most recent call last): File "C:\Users\kurak\AppData\Local\Programs\Python\Python38-32\lib\site-packages\django\core\handlers\exception.py", line 34, in inner response = get_response(request) File "C:\Users\kurak\AppData\Local\Programs\Python\Python38-32\lib\site-packages\django\core\handlers\base.py", line 145, in _get_response response = self.process_exception_by_middleware(e, request) File "C:\Users\kurak\AppData\Local\Programs\Python\Python38-32\lib\site-packages\django\core\handlers\base.py", line 143, in _get_response response = response.render() File "C:\Users\kurak\AppData\Local\Programs\Python\Python38-32\lib\site-packages\django\template\response.py", line 105, in render self.content = self.rendered_content File "C:\Users\kurak\AppData\Local\Programs\Python\Python38-32\lib\site-packages\rest_framework\response.py", line 70, in rendered_content ret = renderer.render(self.data, accepted_media_type, context) File "C:\Users\kurak\AppData\Local\Programs\Python\Python38-32\lib\site-packages\rest_framework\renderers.py", line 100, in render ret = json.dumps( File "C:\Users\kurak\AppData\Local\Programs\Python\Python38-32\lib\site-packages\rest_framework\utils\json.py", line 25, in dumps return json.dumps(*args, **kwargs) File "C:\Users\kurak\AppData\Local\Programs\Python\Python38-32\lib\json\__init__.py", line 234, in dumps return cls( File "C:\Users\kurak\AppData\Local\Programs\Python\Python38-32\lib\json\encoder.py", line 199, in encode chunks = self.iterencode(o, _one_shot=True) File "C:\Users\kurak\AppData\Local\Programs\Python\Python38-32\lib\json\encoder.py", line 257, in iterencode return _iterencode(o, 0) File "C:\Users\kurak\AppData\Local\Programs\Python\Python38-32\lib\site-packages\rest_framework\utils\encoders.py", line 67, in default return super().default(obj) File "C:\Users\kurak\AppData\Local\Programs\Python\Python38-32\lib\json\encoder.py", line 179, in default raise TypeError(f'Object of type {o.__class__.__name__} 'TypeError: Object of type User is not JSON serializableWhat's problem in my code? I can't solve this error. Please help me. Here is my code. 
Thanks in advanceviews.pyclass ReadPostView (GenericAPIView) : serializer_class = PostSerializer permission_classes = [IsAuthenticated] def get (self, serializer) : serializer = self.serializer_class() posts = Post.objects.all() data = [] for post in posts : comments = Comment.objects.filter(post=post) json = { 'pk': post.pk, 'author': { 'email': post.author_email, 'username': post.author_name, 'profile': post.author_profile }, 'like': post.liker.count, 'liker': post.liker, 'text': post.text, 'images': Image.objects.filter(post=post), 'comments_count': comments.count(), 'view': post.view, 'viewer_liked': None, 'tag': post.tag } data.append(json) return Response(data)models.pyclass Post (models.Model): author_name = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE, related_name='authorName', null=True) author_email = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE, related_name='authorEmail', null=True) author_profile = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE, related_name='authorProfile', null=True) title = models.CharField(max_length=40) text = models.TextField(max_length=300) tag = models.CharField(max_length=511, null=True) view = models.IntegerField(default=0) viewer = models.ManyToManyField(settings.AUTH_USER_MODEL, related_name='viewer', blank=True) like = models.IntegerField(default=0) liker = models.ManyToManyField(settings.AUTH_USER_MODEL, related_name='liker', blank=True) def __str__ (self) : return self.titleclass Image (models.Model) : post = models.ForeignKey(Post, on_delete=models.CASCADE) image = models.ImageField(null=True, blank=True)class Comment (models.Model) : author = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE, null=True) post = models.ForeignKey(Post, on_delete=models.CASCADE) text = models.TextField(max_length=200)
There are a few problems with your code:

First, you can't pass a model instance or a queryset of instances to your JSON fields: 'email': post.author_email, 'username': post.author_name, 'profile': post.author_profile, 'liker': post.liker, and 'images': Image.objects.filter(post=post). To fix this you either have to create a serializer for each model and pass the serialized data instead, or pass a serializable field of those models, like post.liker.email. You can use DRF's ModelSerializer to make a model serializer.

Second, you don't need all three fields author_name, author_email, and author_profile in your model. All of them point to your default user model, and you can access everything from just one of them:

post.author_profile.email       # will give you the user email
post.author_profile.first_name  # will give you the user's first name
# and so on ...

Third, you can just use ListAPIView to generate a serialized list of your data.

You are doing the whole thing wrong here. Please consider looking at some more Django and REST framework examples.
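A minimal sketch of what that serializer-based approach could look like — the imports assume the models from your models.py, and the field choices and the username lookup are illustrative assumptions rather than taken from your project:

from rest_framework import generics, serializers

from .models import Image, Post  # the models from the question's models.py


class ImageSerializer(serializers.ModelSerializer):
    class Meta:
        model = Image
        fields = ['image']


class PostSerializer(serializers.ModelSerializer):
    # Reverse FK from Image (no related_name was set, so the accessor is image_set)
    images = ImageSerializer(source='image_set', many=True, read_only=True)
    # Assumption: the user model exposes a username field
    author = serializers.CharField(source='author_profile.username', read_only=True)

    class Meta:
        model = Post
        fields = ['pk', 'author', 'title', 'text', 'tag', 'view', 'like', 'images']


class ReadPostView(generics.ListAPIView):
    queryset = Post.objects.all()
    serializer_class = PostSerializer
    # add permission_classes = [IsAuthenticated] here as in the original view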
How to do a listing and justify numbers? I am looking to get some help with this assignment. From the list below, I should find numbers greater than 0, the numbers are written to the file right justified in 10 spaces, with 2 spaces allowed forthe fractional portion of the value, and finally write them into a file.Here is what I've got so far:def formatted_file(file_name, nums_list): ''' Test: >>> formatted_file('out1.txt', [1, 23.999, -9, 327.1]) >>> show_file('out1.txt') 1.00 24.00 327.10 <BLANKLINE> >>> formatted_file('out1.txt',[-1, -98.6]) >>> show_file('out1.txt') <BLANKLINE> >>> formatted_file('out1.txt',[]) >>> show_file('out1.txt') <BLANKLINE> ''' with open('out1.txt', 'w') as my_file: for x in nums_list: if x > 0: a = list() a.append(x) if len(a) > 0: my_file.write(f'{i:10.2f}\n')def show_file(file_name): with open(file_name, 'r') as result_file: print(result_file.read())if __name__ == "__main__": import doctest doctest.testmod(verbose = True)When I run this function, the file that I get is blank. I got it working last night in pycharm, but when I ran it to IDLE it didn't work. And now it's throwing a bunch of error in pycharm as well.Thanks both for your suggestions. Unfortunately none of the methods writes the output in the file :(The test passes in IDLE though.
If you use a string.format, you must put the value in the format function. I also do not understand what len(a) or a in particular a is used for here.This is your function with my modification, which passed the first test (I think the other two were just for debug)def formatted_file(file_name, nums_list): ''' Test: >>> formatted_file('out2.txt', [1, 23.999, -9, 327.1]) >>> show_file('out2.txt') 1.00 24.00 327.10 <BLANKLINE> ''' with open(file_name, 'w') as my_file: for x in nums_list: if x > 0: my_file.write('{:10.2f}\n'.format(x))def show_file(file_name): with open(file_name, 'r') as result_file: print(result_file.read())if __name__ == "__main__": import doctest doctest.testmod(verbose = True)Trying: formatted_file('out2.txt', [1, 23.999, -9, 327.1])Expecting nothingokTrying: show_file('out2.txt')Expecting: 1.00 24.00 327.10 <BLANKLINE>ok2 items had no tests: __main__ __main__.show_file1 items passed all tests:2 tests in __main__.formatted_file2 tests in 3 items.2 passed and 0 failed.Test passed.
Python/ Boto 3: How to retrieve/download files from AWS S3? In Python/Boto 3, Found out that to download a file individually from S3 to local can do the following: bucket = self._aws_connection.get_bucket(aws_bucketname) for s3_file in bucket.list(): if filename == s3_file.name: self._downloadFile(s3_file, local_download_directory) break;And to download all files under one chosen directory: else: bucket = self._aws_connection.get_bucket(aws_bucketname) for s3_file in bucket.list(): self._downloadFile(s3_file, local_download_directory)And helper function _downloadFile(): def _downloadFile(self, s3_file, local_download_destination): full_local_path = os.path.expanduser(os.path.join(local_download_destination, s3_file.name)) try: print "Downloaded: %s" % (full_local_path) s3_file.get_contents_to_filename(full_local_path)But both don’t seem to be working. Using Boto 3 and Python, would like to be able to download all files, as a zip preferably, under a defined directory on S3 to my local. What could I be doing wrong, and what’s the correct implementation of the parameters? Thank you in advance, and will be sure to accept/upvote answerUPDATE CODE: Getting an error: “AttributeError: 'S3' object has no attributeimport sysimport jsonimport osimport subprocessimport boto3from boto.s3.connection import S3Connections3 = boto3.resource('s3')s3client = boto3.client('s3')#This worksfor bucket in s3.buckets.all(): print(bucket.name)def main(): #Getting an error: “AttributeError: 'S3' object has no attribute 'download’” s3client.download('testbucket', 'fileone.json', 'newfile')if __name__ == "__main__": main()
To download files from S3 to the local FS, use the download_file() method:

s3client = boto3.client('s3')
s3client.download_file(Bucket, Key, Filename)

If the S3 object is s3://mybucket/foo/bar/file.txt, then the arguments would be:

Bucket --> mybucket
Key --> foo/bar/file.txt
Filename --> /local/path/file.txt

There aren't any methods to download the entire bucket. An alternative way would be to list all the objects in the bucket and download them individually as files.

for obj in s3client.list_objects(Bucket='mybucket')['Contents']:
    try:
        filename = obj['Key'].rsplit('/', 1)[1]
    except IndexError:
        filename = obj['Key']
    localfilename = os.path.join('/home/username/Downloads/', filename)
    # The local directory must exist.
    s3client.download_file('mybucket', obj['Key'], localfilename)

Note: The response of list_objects() is truncated to 1000 objects. Use the markers in the response to retrieve the remainder of objects in the bucket.
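For buckets with more than 1000 objects, boto3's paginator handles those continuation markers for you — a short sketch using list_objects_v2 (bucket name, prefix and local directory are placeholders):

import os
import boto3

s3client = boto3.client('s3')
paginator = s3client.get_paginator('list_objects_v2')

for page in paginator.paginate(Bucket='mybucket', Prefix='foo/bar/'):
    for obj in page.get('Contents', []):
        key = obj['Key']
        if key.endswith('/'):        # skip "directory" placeholder keys
            continue
        localfilename = os.path.join('/home/username/Downloads/', os.path.basename(key))
        s3client.download_file('mybucket', key, localfilename)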
Python Reportlab divide table to fit into different pages I am trying to build a schedule planner, in a PDF file generated with ReportLab. The schedule will have a different rows depending on the hour of the day: starting with 8:00 a.m., 8:15 a.m., 8:30 a.m., and so on.I made a loop in which the hours will be calculated automatically and the schedule will be filled. However, since my table is too long, it doesn't fit completely in the page. (Although the schedule should end on 7:30 p.m., it is cutted at 2:00 p.m.)The desired result is to have a PageBreak when the table is at around 20 activities. On the next page, the header should be exactly the same as in the first page and below, the continuation of the table. The process should repeat every time it is necessary, until the end of the table.The Python code is the following:from reportlab.pdfgen.canvas import Canvasfrom datetime import datetime, timedeltafrom reportlab.platypus import Table, TableStylefrom reportlab.lib import colorsfrom reportlab.lib.pagesizes import letter, landscapeclass Vendedor: """ Información del Vendedor: Nombre, sucursal, meta de venta """ def __init__(self, nombre_vendedor, sucursal, dia_reporte): self.nombre_vendedor = nombre_vendedor self.sucursal = sucursal self.dia_reporte = dia_reporteclass Actividades: """ Información de las Actividades realizadas: Hora de actividad y duración, cliente atendido, tipo de actividad, resultado, monto venta (mxn) + (usd), monto cotización (mxn) + (usd), solicitud de apoyo y comentarios adicionales """ def __init__(self, hora_actividad, duracion_actividad, cliente, tipo_actividad, resultado, monto_venta_mxn, monto_venta_usd, monto_cot_mxn, monto_cot_usd, requiero_apoyo, comentarios_extra): self.hora_actividad = hora_actividad self.duracion_actividad = duracion_actividad self.cliente = cliente self.tipo_actividad = tipo_actividad self.resultado = resultado self.monto_venta_mxn = monto_venta_mxn self.monto_venta_usd = monto_venta_usd self.monto_cot_mxn = monto_cot_mxn self.monto_cot_usd = monto_cot_usd self.requiero_apoyo = requiero_apoyo self.comentarios_extra = comentarios_extraclass PDFReport: """ Crea el Reporte de Actividades diarias en archivo de formato PDF """ def __init__(self, filename): self.filename = filenamevendedor = Vendedor('John Doe', 'Stack Overflow', datetime.now().strftime('%d/%m/%Y'))file_name = 'cronograma_actividades.pdf'document_title = 'Cronograma Diario de Actividades'title = 'Cronograma Diario de Actividades'nombre_colaborador = vendedor.nombre_vendedorsucursal_colaborador = vendedor.sucursalfecha_actual = vendedor.dia_reportecanvas = Canvas(file_name)canvas.setPageSize(landscape(letter))canvas.setTitle(document_title)canvas.setFont("Helvetica-Bold", 20)canvas.drawCentredString(385+100, 805-250, title)canvas.setFont("Helvetica", 16)canvas.drawCentredString(385+100, 785-250, nombre_colaborador + ' - ' + sucursal_colaborador)canvas.setFont("Helvetica", 14)canvas.drawCentredString(385+100, 765-250, fecha_actual)title_background = colors.fidbluehour = 8minute = 0hour_list = []data_actividades = [ {'Hora', 'Cliente', 'Resultado de \nActividad', 'Monto Venta \n(MXN)', 'Monto Venta \n(USD)', 'Monto Cotización \n(MXN)', 'Monto Cotización \n(USD)', 'Comentarios \nAdicionales'},]i = 0for i in range(47): if minute == 0: if hour <= 12: time = str(hour) + ':' + str(minute) + '0 a.m.' else: time = str(hour-12) + ':' + str(minute) + '0 p.m.' else: if hour <= 12: time = str(hour) + ':' + str(minute) + ' a.m.' else: time = str(hour-12) + ':' + str(minute) + ' p.m.' 
if minute != 45: minute += 15 else: hour += 1 minute = 0 hour_list.append(time) # I TRIED THIS SOLUTION BUT THIS DIDN'T WORK # if i % 20 == 0: # canvas.showPage() data_actividades.append([hour_list[i], i, i, i, i, i, i, i]) i += 1 table_actividades = Table(data_actividades, colWidths=85, rowHeights=30, repeatRows=1) tblStyle = TableStyle([ ('BACKGROUND', (0, 0), (-1, 0), title_background), ('TEXTCOLOR', (0, 0), (-1, 0), colors.whitesmoke), ('ALIGN', (1, 0), (1, -1), 'CENTER'), ('GRID', (0, 0), (-1, -1), 1, colors.black) ]) rowNumb = len(data_actividades) for row in range(1, rowNumb): if row % 2 == 0: table_background = colors.lightblue else: table_background = colors.aliceblue tblStyle.add('BACKGROUND', (0, row), (-1, row), table_background) table_actividades.setStyle(tblStyle) width = 150 height = 150 table_actividades.wrapOn(canvas, width, height) table_actividades.drawOn(canvas, 65, (0 - height) - 240)canvas.save()I tried by adding:if i % 20 == 0: canvas.showPage()However this failed to achieve the desired result.Other quick note: Although I specifically coded the column titles of the table. Once I run the program, the order of the column titles is modified for some reason (see the pasted image). Any idea of why this is happening?data_actividades = [ {'Hora', 'Cliente', 'Resultado de \nActividad', 'Monto Venta \n(MXN)', 'Monto Venta \n(USD)', 'Monto Cotización \n(MXN)', 'Monto Cotización \n(USD)', 'Comentarios \nAdicionales'},]Thank you very much in advance, have a great day!
You should use templates, as suggested in the Chapter 5 "PLATYPUS - Page Layout and TypographyUsing Scripts" of the official documentation.The basic idea is to use frames, and add to a list element all the information you want to add. In my case I call it "contents", with the command "contents.append(FrameBreak())" you leave the frame and work on the next one, on the other hand if you want to change the type of template you use the command "contents.append(NextPageTemplate('<template_name>'))"My proposal:For your case I used two templates, the first one is the one that contains the header with the sheet information and the first part of the table, and the other template corresponds to the rest of the content. The name of these templates is firstpage and laterpage.The code is as follows:from reportlab.pdfgen.canvas import Canvasfrom datetime import datetime, timedeltafrom reportlab.platypus import Table, TableStylefrom reportlab.lib import colorsfrom reportlab.lib.pagesizes import letter, landscapefrom reportlab.platypus import BaseDocTemplate, Frame, Paragraph, PageBreak, \ PageTemplate, Spacer, FrameBreak, NextPageTemplate, Imagefrom reportlab.lib.pagesizes import letter,A4from reportlab.lib.units import inch, cmfrom reportlab.lib.styles import getSampleStyleSheetfrom reportlab.lib.enums import TA_JUSTIFY, TA_CENTER,TA_LEFT,TA_RIGHTclass Vendedor: """ Información del Vendedor: Nombre, sucursal, meta de venta """ def __init__(self, nombre_vendedor, sucursal, dia_reporte): self.nombre_vendedor = nombre_vendedor self.sucursal = sucursal self.dia_reporte = dia_reporteclass Actividades: """ Información de las Actividades realizadas: Hora de actividad y duración, cliente atendido, tipo de actividad, resultado, monto venta (mxn) + (usd), monto cotización (mxn) + (usd), solicitud de apoyo y comentarios adicionales """ def __init__(self, hora_actividad, duracion_actividad, cliente, tipo_actividad, resultado, monto_venta_mxn, monto_venta_usd, monto_cot_mxn, monto_cot_usd, requiero_apoyo, comentarios_extra): self.hora_actividad = hora_actividad self.duracion_actividad = duracion_actividad self.cliente = cliente self.tipo_actividad = tipo_actividad self.resultado = resultado self.monto_venta_mxn = monto_venta_mxn self.monto_venta_usd = monto_venta_usd self.monto_cot_mxn = monto_cot_mxn self.monto_cot_usd = monto_cot_usd self.requiero_apoyo = requiero_apoyo self.comentarios_extra = comentarios_extraclass PDFReport: """ Crea el Reporte de Actividades diarias en archivo de formato PDF """ def __init__(self, filename): self.filename = filenamevendedor = Vendedor('John Doe', 'Stack Overflow', datetime.now().strftime('%d/%m/%Y'))file_name = 'cronograma_actividades.pdf'document_title = 'Cronograma Diario de Actividades'title = 'Cronograma Diario de Actividades'nombre_colaborador = vendedor.nombre_vendedorsucursal_colaborador = vendedor.sucursalfecha_actual = vendedor.dia_reportecanvas = Canvas(file_name, pagesize=landscape(letter))doc = BaseDocTemplate(file_name)contents =[]width,height = A4left_header_frame = Frame( 0.2*inch, height-1.2*inch, 2*inch, 1*inch )right_header_frame = Frame( 2.2*inch, height-1.2*inch, width-2.5*inch, 1*inch,id='normal' )frame_later = Frame( 0.2*inch, 0.6*inch, (width-0.6*inch)+0.17*inch, height-1*inch, leftPadding = 0, topPadding=0, showBoundary = 1, id='col' )frame_table= Frame( 0.2*inch, 0.7*inch, (width-0.6*inch)+0.17*inch, height-2*inch, leftPadding = 0, topPadding=0, showBoundary = 1, id='col' )laterpages = PageTemplate(id='laterpages',frames=[frame_later])firstpage = 
PageTemplate(id='firstpage',frames=[left_header_frame, right_header_frame,frame_table],)contents.append(NextPageTemplate('firstpage'))logoleft = Image('logo_power.png')logoleft._restrictSize(1.5*inch, 1.5*inch)logoleft.hAlign = 'CENTER'logoleft.vAlign = 'CENTER'contents.append(logoleft)contents.append(FrameBreak())styleSheet = getSampleStyleSheet()style_title = styleSheet['Heading1']style_title.fontSize = 20 style_title.fontName = 'Helvetica-Bold'style_title.alignment=TA_CENTERstyle_data = styleSheet['Normal']style_data.fontSize = 16 style_data.fontName = 'Helvetica'style_data.alignment=TA_CENTERstyle_date = styleSheet['Normal']style_date.fontSize = 14style_date.fontName = 'Helvetica'style_date.alignment=TA_CENTERcanvas.setTitle(document_title)contents.append(Paragraph(title, style_title))contents.append(Paragraph(nombre_colaborador + ' - ' + sucursal_colaborador, style_data))contents.append(Paragraph(fecha_actual, style_date))contents.append(FrameBreak())title_background = colors.fidbluehour = 8minute = 0hour_list = []data_actividades = [ {'Hora', 'Cliente', 'Resultado de \nActividad', 'Monto Venta \n(MXN)', 'Monto Venta \n(USD)', 'Monto Cotización \n(MXN)', 'Monto Cotización \n(USD)', 'Comentarios \nAdicionales'},]i = 0for i in range(300): if minute == 0: if hour <= 12: time = str(hour) + ':' + str(minute) + '0 a.m.' else: time = str(hour-12) + ':' + str(minute) + '0 p.m.' else: if hour <= 12: time = str(hour) + ':' + str(minute) + ' a.m.' else: time = str(hour-12) + ':' + str(minute) + ' p.m.' if minute != 45: minute += 15 else: hour += 1 minute = 0 hour_list.append(time) # I TRIED THIS SOLUTION BUT THIS DIDN'T WORK # if i % 20 == 0: data_actividades.append([hour_list[i], i, i, i, i, i, i, i]) i += 1 table_actividades = Table(data_actividades, colWidths=85, rowHeights=30, repeatRows=1) tblStyle = TableStyle([ ('BACKGROUND', (0, 0), (-1, 0), title_background), ('TEXTCOLOR', (0, 0), (-1, 0), colors.whitesmoke), ('ALIGN', (1, 0), (1, -1), 'CENTER'), ('GRID', (0, 0), (-1, -1), 1, colors.black) ]) rowNumb = len(data_actividades) for row in range(1, rowNumb): if row % 2 == 0: table_background = colors.lightblue else: table_background = colors.aliceblue tblStyle.add('BACKGROUND', (0, row), (-1, row), table_background) table_actividades.setStyle(tblStyle) width = 150 height = 150 contents.append(NextPageTemplate('laterpages'))contents.append(table_actividades)contents.append(PageBreak())doc.addPageTemplates([firstpage,laterpages])doc.build(contents)ResultsWith this you can add as many records as you want, I tried with 300. The table is not fully visible because for my convenience I made an A4 size pdf. However, the principle is the same for any size so you must play with the size of the frames and the size of the pdf page.EXTRA, add header on each pagesince only one template will be needed now, the "first_page" template should be removed since it will be the same for all pages. In the same way that you proposed in the beginning I cut the table every 21 records (to include the header of the table) and it is grouped in a list that then iterates adding the header with the logo in each cycle. Also it is included in the logical cutting sentence, the case when the number of records does not reach 21 but the number of records is going to end. 
The code is as follows:canvas = Canvas(file_name, pagesize=landscape(letter))doc = BaseDocTemplate(file_name)contents =[]width,height = A4left_header_frame = Frame( 0.2*inch, height-1.2*inch, 2*inch, 1*inch )right_header_frame = Frame( 2.2*inch, height-1.2*inch, width-2.5*inch, 1*inch,id='normal' )frame_table= Frame( 0.2*inch, 0.7*inch, (width-0.6*inch)+0.17*inch, height-2*inch, leftPadding = 0, topPadding=0, showBoundary = 1, id='col' )laterpages = PageTemplate(id='laterpages',frames=[left_header_frame, right_header_frame,frame_table],)logoleft = Image('logo_power.png')logoleft._restrictSize(1.5*inch, 1.5*inch)logoleft.hAlign = 'CENTER'logoleft.vAlign = 'CENTER'styleSheet = getSampleStyleSheet()style_title = styleSheet['Heading1']style_title.fontSize = 20 style_title.fontName = 'Helvetica-Bold'style_title.alignment=TA_CENTERstyle_data = styleSheet['Normal']style_data.fontSize = 16 style_data.fontName = 'Helvetica'style_data.alignment=TA_CENTERstyle_date = styleSheet['Normal']style_date.fontSize = 14style_date.fontName = 'Helvetica'style_date.alignment=TA_CENTERcanvas.setTitle(document_title)title_background = colors.fidbluehour = 8minute = 0hour_list = []data_actividades = [ {'Hora', 'Cliente', 'Resultado de \nActividad', 'Monto Venta \n(MXN)', 'Monto Venta \n(USD)', 'Monto Cotización \n(MXN)', 'Monto Cotización \n(USD)', 'Comentarios \nAdicionales'},]i = 0table_group= []size = 304count = 0for i in range(size): if minute == 0: if hour <= 12: time = str(hour) + ':' + str(minute) + '0 a.m.' else: time = str(hour-12) + ':' + str(minute) + '0 p.m.' else: if hour <= 12: time = str(hour) + ':' + str(minute) + ' a.m.' else: time = str(hour-12) + ':' + str(minute) + ' p.m.' if minute != 45: minute += 15 else: hour += 1 minute = 0 hour_list.append(time) data_actividades.append([hour_list[i], i, i, i, i, i, i, i]) i += 1 table_actividades = Table(data_actividades, colWidths=85, rowHeights=30, repeatRows=1) tblStyle = TableStyle([ ('BACKGROUND', (0, 0), (-1, 0), title_background), ('TEXTCOLOR', (0, 0), (-1, 0), colors.whitesmoke), ('ALIGN', (1, 0), (1, -1), 'CENTER'), ('GRID', (0, 0), (-1, -1), 1, colors.black) ]) rowNumb = len(data_actividades) for row in range(1, rowNumb): if row % 2 == 0: table_background = colors.lightblue else: table_background = colors.aliceblue tblStyle.add('BACKGROUND', (0, row), (-1, row), table_background) table_actividades.setStyle(tblStyle) if ((count >= 20) or (i== size) ): count = 0 table_group.append(table_actividades) data_actividades = [ {'Hora', 'Cliente', 'Resultado de \nActividad', 'Monto Venta \n(MXN)', 'Monto Venta \n(USD)', 'Monto Cotización \n(MXN)', 'Monto Cotización \n(USD)', 'Comentarios \nAdicionales'},] width = 150 height = 150 count += 1 if i > size: breakcontents.append(NextPageTemplate('laterpages'))for table in table_group: contents.append(logoleft) contents.append(FrameBreak()) contents.append(Paragraph(title, style_title)) contents.append(Paragraph(nombre_colaborador + ' - ' + sucursal_colaborador, style_data)) contents.append(Paragraph(fecha_actual, style_date)) contents.append(FrameBreak()) contents.append(table) contents.append(FrameBreak())doc.addPageTemplates([laterpages,])doc.build(contents)Extra - result:
How to automatically select idle GPU for model traning in tensorflow? I am using nvidia prebuilt docker container NVIDIA Release 20.12-tf2 to run my experiment. I am using TensorFlow Version 2.3.1. Currently, I am running my model on one of GPU, I still have 3 more idle GPUs so I intend to use my alternative experiment on any idle GPUs. Here is the output of nvidia-smi:+-----------------------------------------------------------------------------+| NVIDIA-SMI 450.51.06 Driver Version: 450.51.06 CUDA Version: 11.1 ||-------------------------------+----------------------+----------------------+| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC || Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. || | | MIG M. ||===============================+======================+======================|| 0 Tesla T4 Off | 00000000:6A:00.0 Off | 0 || N/A 70C P0 71W / 70W | 14586MiB / 15109MiB | 100% Default || | | N/A |+-------------------------------+----------------------+----------------------+| 1 Tesla T4 Off | 00000000:6B:00.0 Off | 0 || N/A 39C P0 27W / 70W | 212MiB / 15109MiB | 0% Default || | | N/A |+-------------------------------+----------------------+----------------------+| 2 Tesla T4 Off | 00000000:6C:00.0 Off | 0 || N/A 41C P0 28W / 70W | 212MiB / 15109MiB | 0% Default || | | N/A |+-------------------------------+----------------------+----------------------+| 3 Tesla T4 Off | 00000000:6D:00.0 Off | 0 || N/A 41C P0 28W / 70W | 212MiB / 15109MiB | 0% Default || | | N/A |+-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+| Processes: || GPU GI CI PID Type Process name GPU Memory || ID ID Usage ||=============================================================================|+-----------------------------------------------------------------------------+update: prebuilt -container:I'm using nvidia-prebuilt container as follow:docker run -ti --rm --gpus all --shm-size=1024m -v /home/hamilton/data:/data nvcr.io/nvidia/tensorflow:20.12-tf2-py3To utilize idle GPU for my other experiments, I tried to add those in my python script:attempt-1import tensorflow as tfdevices = tf.config.experimental.list_physical_devices('GPU')tf.config.experimental.set_memory_growth(devices[0], True)but this attempt gave me following error:raise ValueError("Memory growth cannot differ between GPU devices") ValueError: Memory growth cannot differ between GPU devicesI googled this error but none of them discussed on GitHub is not working for me.attempt-2I also tried this:gpus = tf.config.experimental.list_physical_devices('GPU')for gpu in gpus: tf.config.experimental.set_memory_growth(gpu, True)but this attempt also gave me error like this:Error occurred when finalizing GeneratorDataset iterator: Failedprecondition: Python interpreter state is not initialized. 
The process may be terminated.

People discussed this error on GitHub, but I am still not able to get rid of the error on my side.

latest attempt:

I also tried parallel training with TensorFlow and added this to my python script:

device_type = "GPU"
devices = tf.config.experimental.list_physical_devices(device_type)
devices_names = [d.name.split("e:")[1] for d in devices]
strategy = tf.distribute.MirroredStrategy(devices=devices_names[:3])
with strategy.scope():
    opt = Adam(learning_rate=0.1)
    model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])

but this also gave me an error and the program stopped.

Can anyone help me automatically select idle GPUs for model training in tensorflow? Does anyone know any workable approach? What's wrong with my above attempts? Any possible ideas to utilize the idle GPUs while the program is already running on one of the GPUs? Any thoughts?
Thanks to @HernánAlarcón's suggestion, I tried it like this and it worked like a charm:

docker run -ti --rm --gpus device=1,3 --shm-size=1024m -v /home/hamilton/data:/data nvcr.io/nvidia/tensorflow:20.12-tf2-py3

This may not be an elegant solution, but it worked like a charm. I am open to other possible remedies to fix this sort of problem.
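A hedged alternative sketch, not from the original answer: instead of restricting the GPUs at the docker level, the busy devices can be hidden from TensorFlow itself by setting CUDA_VISIBLE_DEVICES before TensorFlow is imported. The device indices below are only an example.

import os

# expose only the idle GPUs (indices as reported by nvidia-smi)
os.environ["CUDA_VISIBLE_DEVICES"] = "1,2,3"

import tensorflow as tf
print(tf.config.list_physical_devices("GPU"))  # should now list only the idle devices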
Reshape tensors in pytorch? I'm struggling with the result of a matrix multiplication in pytorch and I don't know how to solve it, in particular:I'm multiplying these two matricestensor([[[[209.5000, 222.7500], [276.5000, 289.7500]], [[208.5000, 221.7500], [275.5000, 288.7500]]]], dtype=torch.float64)andtensor([[[[ 0., 1., 2., 5., 6., 7., 10., 11., 12.], [ 2., 3., 4., 7., 8., 9., 12., 13., 14.], [10., 11., 12., 15., 16., 17., 20., 21., 22.], [12., 13., 14., 17., 18., 19., 22., 23., 24.]], [[25., 26., 27., 30., 31., 32., 35., 36., 37.], [27., 28., 29., 32., 33., 34., 37., 38., 39.], [35., 36., 37., 40., 41., 42., 45., 46., 47.], [37., 38., 39., 42., 43., 44., 47., 48., 49.]], [[50., 51., 52., 55., 56., 57., 60., 61., 62.], [52., 53., 54., 57., 58., 59., 62., 63., 64.], [60., 61., 62., 65., 66., 67., 70., 71., 72.], [62., 63., 64., 67., 68., 69., 72., 73., 74.]]]], dtype=torch.float64)with the following line of code A.view(2,-1) @ B, and then I reshape the result with result.view(2, 3, 3, 3).The resulting matrix istensor([[[[ 6687.5000, 7686.0000, 8684.5000], [11680.0000, 12678.5000, 13677.0000], [16672.5000, 17671.0000, 18669.5000]], [[ 6663.5000, 7658.0000, 8652.5000], [11636.0000, 12630.5000, 13625.0000], [16608.5000, 17603.0000, 18597.5000]], [[31650.0000, 32648.5000, 33647.0000], [36642.5000, 37641.0000, 38639.5000], [41635.0000, 42633.5000, 43632.0000]]], [[[31526.0000, 32520.5000, 33515.0000], [36498.5000, 37493.0000, 38487.5000], [41471.0000, 42465.5000, 43460.0000]], [[56612.5000, 57611.0000, 58609.5000], [61605.0000, 62603.5000, 63602.0000], [66597.5000, 67596.0000, 68594.5000]], [[56388.5000, 57383.0000, 58377.5000], [61361.0000, 62355.5000, 63350.0000], [66333.5000, 67328.0000, 68322.5000]]]], dtype=torch.float64)Instead I wanttensor([[[[ 6687.5000, 7686.0000, 8684.5000], [11680.0000, 12678.5000, 13677.0000], [16672.5000, 17671.0000, 18669.5000]], [[31650.0000, 32648.5000, 33647.0000], [36642.5000, 37641.0000, 38639.5000], [41635.0000, 42633.5000, 43632.0000]], [[56612.5000, 57611.0000, 58609.5000], [61605.0000, 62603.5000, 63602.0000], [66597.5000, 67596.0000, 68594.5000]]], [[[ 6663.5000, 7658.0000, 8652.5000], [11636.0000, 12630.5000, 13625.0000], [16608.5000, 17603.0000, 18597.5000]], [[31526.0000, 32520.5000, 33515.0000], [36498.5000, 37493.0000, 38487.5000], [41471.0000, 42465.5000, 43460.0000]], [[56388.5000, 57383.0000, 58377.5000], [61361.0000, 62355.5000, 63350.0000], [66333.5000, 67328.0000, 68322.5000]]]], dtype=torch.float64)Can someone help me? Thanks
This is a common but interesting problem because it involves a combination of torch.reshape and torch.transpose to solve it. More specifically, you will need to:

1. Apply an initial reshape to restructure the tensor and expose the axes you want to swap;
2. Then do so using a transpose operation;
3. Lastly, apply a second reshape to get to the desired format.

In your case, you could do:

>>> result.reshape(3,2,3,3).transpose(0,1).reshape(2,3,3,3)
tensor([[[[ 6687.5000,  7686.0000,  8684.5000],
          [11680.0000, 12678.5000, 13677.0000],
          [16672.5000, 17671.0000, 18669.5000]],

         [[31650.0000, 32648.5000, 33647.0000],
          [36642.5000, 37641.0000, 38639.5000],
          [41635.0000, 42633.5000, 43632.0000]],

         [[56612.5000, 57611.0000, 58609.5000],
          [61605.0000, 62603.5000, 63602.0000],
          [66597.5000, 67596.0000, 68594.5000]]],

        [[[ 6663.5000,  7658.0000,  8652.5000],
          [11636.0000, 12630.5000, 13625.0000],
          [16608.5000, 17603.0000, 18597.5000]],

         [[31526.0000, 32520.5000, 33515.0000],
          [36498.5000, 37493.0000, 38487.5000],
          [41471.0000, 42465.5000, 43460.0000]],

         [[56388.5000, 57383.0000, 58377.5000],
          [61361.0000, 62355.5000, 63350.0000],
          [66333.5000, 67328.0000, 68322.5000]]]], dtype=torch.float64)

I encourage you to look at the intermediate results to get an idea of how the method works, so you can apply it on other use cases in the future.
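To make those intermediate results concrete, here is a small sketch; the tensor below is a hypothetical stand-in with the same shape and dtype as the one in the question, since only the shape matters for the axis bookkeeping.

import torch

# hypothetical stand-in for `result` from the question; only shape/dtype matter here
result = torch.arange(54, dtype=torch.float64).reshape(2, 3, 3, 3)

inter = result.reshape(3, 2, 3, 3)    # expose the axis that ended up interleaved
print(inter.shape)                    # torch.Size([3, 2, 3, 3])

swapped = inter.transpose(0, 1)       # move the batch axis back to the front (non-contiguous view)
print(swapped.shape)                  # torch.Size([2, 3, 3, 3])

final = swapped.reshape(2, 3, 3, 3)   # reshape copies if necessary, giving the desired layout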
How to inherit the Odoo default QWeb reports in .py file? I want to inherit odoo default qweb report "Picking operation" from stock.picking in python file.I know how to inherit default qweb report in xml.please suggest/guide how to inherit a qweb default report in .py file
You can use the following, where ref() is given the external ID of the report action (not a menu):

return self.env.ref('your_module_name.your_report_action_id').report_action(self, data=data)
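A minimal sketch of how this could look on the model itself, assuming you want the standard "Picking Operations" report; the external ID 'stock.action_report_picking' is the usual one in recent Odoo versions, but verify it in your instance before relying on it.

from odoo import models

class StockPicking(models.Model):
    _inherit = 'stock.picking'

    def print_picking_operations(self):
        # returning the report action makes the client render/download the PDF
        return self.env.ref('stock.action_report_picking').report_action(self)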
How to store user uploaded image from flask server in google storage bucket? I am trying to find a way to store an image uploaded in the flask server by a user in a google storage bucket.This is my attempt to upload the image. It [email protected]("/upload-image", methods=["GET", "POST"])def upload_image(): if request.method == "POST": try: if request.files: image = request.files["image"] readImg = image.read() content = bytes(readImg) client = storage.Client().from_service_account_json(os.environ['GOOGLE_APPLICATION_CREDENTIALS']) print('1)') bucket = storage.Bucket(client, "uploaded-usrimg") print('2)') file_blob = bucket.blob(content) print('3)') return render_template('result.html', request=result.payload[0].display_name) # return render_template('homepage.html') except Exception as e: print('error creating image data') print(e)My blob (image) does not upload to my bucket.I get this error:127.0.0.1 - - [13/Jan/2021 18:40:58] "POST /upload-image HTTP/1.1" 500 -1)2)error creating image data'utf-8' codec can''t decode byte 0x89 in position 0: invalid start byte[2021-01-13 18:41:11,663] ERROR in app: Exception on /upload-image [POST]Traceback (most recent call last): File "/Users/me/.pyenv/versions/3.7.3/lib/python3.7/site-packages/flask/app.py", line 2447, in wsgi_app response = self.full_dispatch_request() File "/Users/me/.pyenv/versions/3.7.3/lib/python3.7/site-packages/flask/app.py", line 1953, in full_dispatch_request return self.finalize_request(rv) File "/Users/me/.pyenv/versions/3.7.3/lib/python3.7/site-packages/flask/app.py", line 1968, in finalize_request response = self.make_response(rv) File "/Users/me/.pyenv/versions/3.7.3/lib/python3.7/site-packages/flask/app.py", line 2098, in make_response "The view function did not return a valid response. The"TypeError: The view function did not return a valid response. The function either returned None or ended without a return statement.127.0.0.1 - - [13/Jan/2021 18:41:11] "POST /upload-image HTTP/1.1" 500 -Any idea how to solve this error? Or another method in uploading to google bucket? Thanks so much.
I believe this error message is due to the way you are handling the image. In your code readImg = image.read(), you are decoding the image according to UTF-8 rules and encountering a byte sequence that is not allowed in UTF-8 encoding.

You need to open the image with b in the open() mode so that the file is read as binary and the contents remain as bytes.

with open(path, 'rb') as f:
    contents = f.read()

If you were using different file types, "byte XXXX in position 0" could also mean that the file is encoded incorrectly, so, for example, you could try this or something similar:

with open(path, encoding='utf-16') as f:
    contents = f.read()
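To actually store the uploaded bytes in the bucket, a minimal sketch with the google-cloud-storage client might look like this. The bucket name is taken from the question; treat the rest as a sketch rather than a drop-in fix.

from google.cloud import storage

image = request.files["image"]          # werkzeug FileStorage object
content = image.read()                  # raw bytes, no text decoding involved

client = storage.Client()               # picks up GOOGLE_APPLICATION_CREDENTIALS
bucket = client.bucket("uploaded-usrimg")
blob = bucket.blob(image.filename)      # the blob *name* must be a string, not bytes
blob.upload_from_string(content, content_type=image.content_type)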
Find URLs in text and replace them with their domain name I am working on an NLP project and I want to replace all the URLs in a text with their domain name to simplify my corpora. An example of this could be:Input: Ask questions here https://stackoverflow.com/questions/askOutput: Ask questions here stackoverflow.comAt this moment I am finding the urls with the following RE:urls = re.findall('https?://(?:[-\w.]|(?:%[\da-fA-F]{2}))+', text)And then I iterate over them to get the domain name:doms = [re.findall(r'^(?:https?:)?(?:\/\/)?(?:[^@\n]+@)?(?:www\.)?([^:\/\n]+)',url) for url in urls]And then I simply replace each URL with its dom.This is not an optimal approach and I am wondering if someone has a better solution for this problem!
You can use re.sub:import res = 'Ask questions here https://stackoverflow.com/questions/ask, new stuff here https://stackoverflow.com/questions/, Final ask https://stackoverflow.com/questions/50565514/find-urls-in-text-and-replace-them-with-their-domain-name mail server here mail.inbox.com/whatever'new_s = re.sub('https*://[\w\.]+\.com[\w/\-]+|https*://[\w\.]+\.com|[\w\.]+\.com/[\w/\-]+', lambda x:re.findall('(?<=\://)[\w\.]+\.com|[\w\.]+\.com', x.group())[0], s)Output:'Ask questions here stackoverflow.com, new stuff here stackoverflow.com, Final ask stackoverflow.com mail server here mail.inbox.com'
How to print line of a file after match an exact string pattern with python? I have a list list = ['plutino?','res 2:11','Uranus L4','res 9:19','damocloid','cubewano?','plutino']I want to search every element from the list in a file with the next format and print the line after match 1995QY9 | 1995_QY9 | plutino | 32929 | | 39.445 | 0.260 | 29.193 | 49.696 | 4.8 | 66 | # 0.400 | 1.21 BR-U | ?1997CU29 | 1997_CU29 | cubewano | 33001 | | 43.534 | 0.039 | 41.815 | 45.253 | 1.5 | 243 | | 1.82 RR | 1998BU48 | 1998_BU48 | Centaur | 33128 | | 33.363 | 0.381 | 20.647 | 46.078 | 14.2 | 213 | # 0.052 | 1.59 RR | ?1998VG44 | 1998_VG44 | plutino | 33340 | | 39.170 | 0.250 | 29.367 | 48.974 | 3.0 | 398 | # 0.028 | 1.51 IR | 1998SN165 | 1998_SN165 | inner classic | 35671 | | 37.742 | 0.041 | 36.189 | 39.295 | 4.6 | 393 | # 0.060 | 1.13 BB | 2000VU2 | 2000_VU2 | unusual | 37117 | Narcissus | 6.878 | 0.554 | 3.071 | 10.685 | 13.8 | 11 | # 0.088 | | 1999HX11 | 1999_HX11 | plutino? | 38083 | Rhadamanthus | 39.220 | 0.151 | 33.295 | 45.144 | 12.7 | 168 | | 1.18 BR | 1999HB12 | 1999_HB12 | res 2:5 | 38084 | | 56.376 | 0.422 | 32.566 | 80.187 | 13.1 | 176 | | 1.39 BR-IR | I am using the next code to do thatfor i in list:with open("tnolist.txt") as f: for line in f: if re.search(i, line): print(line)The code works fine for all element, except for plutino. When the variable i is plutino the code prints lines for plutino and for plutino?.
This happens because plutino is a substring of plutino?, so the regex parser matches the first part of plutino? and returns a non-falsey answer. Without a whole lot of additional work, you should be able to fix the problem with re.search(re.escape(i) + r'\s', line), which says that you need to have a whitespace character after the phrase you're searching (re.escape is needed because some of your search terms contain ?, which is a regex metacharacter). As the file gets longer and more complicated, you might have more such exceptions to make the regex behave as desired.

Update: I also like visual regex editors for reasons like this. They make it easy to see what matches and what doesn't.

Another option would be something like i == line.split('|')[2].strip(), which extracts the portion of your file you seem to care about. The .strip() method can become inefficient on long lines, but this might fit your use case.
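A short sketch of that second option, assuming the file layout shown in the question and that search_terms holds the strings to look for:

search_terms = ['plutino?', 'res 2:11', 'Uranus L4', 'res 9:19', 'damocloid', 'cubewano?', 'plutino']

with open("tnolist.txt") as f:
    for line in f:
        # the third |-separated column holds the classification
        fields = [field.strip() for field in line.split('|')]
        if len(fields) > 2 and fields[2] in search_terms:
            print(line)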
Cancel a task execution in a thread and remove a task from a queue A Python application I'm developing watches for a folder and uploads certain new or changed files to a server. As a task queue I'm using Queue module and a pool of worker threads.Sometimes during an upload a file changes and the upload needs to be canceled and started all over.I know how to stop thread execution with threading.Event, but how do I remove or move around a task in Queue?
The easiest way to do it would be to mark the instance you've loaded into the Queue as cancelled:class Task(object): def __init__(self, data): self.cancelled = False self.data = data def cancel(self): self.cancelled = Trueq = Queue.Queue()t = Task("some data to put in the queue")q.put(t)# Latert.cancel()Then in your consuming thread:task = q.get()if task.cancelled: # Skip itelse: # handle it.It is also possible to directly interact with the deque that the Queue uses internally, provided you acquire the internal mutex used to synchronized access to the Queue:>>> import Queue>>> q = Queue.Queue()>>> q.put("a")>>> q.put("b")>>> q.put("c")>>> q.put("d")>>> q.queue[2]'c'>>> q.queue[3]'d'>>> with q.mutex: # Always acquire the lock first in actual usage... q.queue[3]... 'd'While this should work, messing with the internals of the Queue isn't recommended, and could break across Python versions if the implementation of Queue changes. Also, keep in mind that operations other than append/appendleft and pop/popleft on deque objects do not perform as well as they do on list instances; even something as simple as __getitem__ is O(n).
Remove random text from filename based on list So I have a list of files from glob that are formated in the following wayfilename xx xxx moretxt.txtwhat I'am trying to do is rename them as followsfilename.txtthe first two xx is one of these:[1B, 2B, 3B, 4B, 5B, 6B, 7B, 8B, 9B, 10B, 11B, 12B, 1A, 2A, 3A, 4A, 5A, 6A, 7A, 8A, 9A, 10A, 11A, 12A]so how do I remove the "xx xxx moretxt" from the file name and keep the extension?import glob, osos.chdir("C:\\somepath")for file in glob.glob("**/*.txt", recursive = True): print(file)
Using str.split

Ex:

filename = "filename xx xxx moretxt.txt"
val = filename.split()
filename = "{}.{}".format(val[0], val[-1].split(".")[-1])
print(filename)

or using re.match

Ex:

import re

filename = "filename xx xxx moretxt.txt"
filename = re.match(r"(?P<filename>\w+).*\.(?P<ext>.+)", filename)
filename = "{}.{}".format(filename.group('filename'), filename.group('ext'))
print(filename)

Output:

filename.txt
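If the goal is to actually rename the files found by glob, a sketch tying the two pieces together (this assumes every matched name follows the "filename xx xxx moretxt.txt" pattern and that os.rename is acceptable for the renaming step):

import glob, os

os.chdir("C:\\somepath")
for path in glob.glob("**/*.txt", recursive=True):
    folder, name = os.path.split(path)
    # keep the first word and the original extension
    new_name = "{}.{}".format(name.split()[0], name.rsplit(".", 1)[-1])
    os.rename(path, os.path.join(folder, new_name))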
How to check a time is between two times in python For example if I have test_case1: (9:00 - 16:00) and test_case2: (21:30 - 4:30)that is, it works whether the first or second number is bigger than the other.
You can use pure lexicographical string comparison if you zero-fill your times - then all you need is to determine if the second time is 'smaller' than the first time and for that special case check both days, e.g.:def is_between(time, time_range): if time_range[1] < time_range[0]: return time >= time_range[0] or time <= time_range[1] return time_range[0] <= time <= time_range[1]print(is_between("11:00", ("09:00", "16:00"))) # Trueprint(is_between("17:00", ("09:00", "16:00"))) # Falseprint(is_between("01:15", ("21:30", "04:30"))) # TrueThis will also work with time tuples (e.g. (9, 0)) instead of strings if that's how you represent your time. It will even work with most time objects.
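The same helper also works unchanged with datetime.time objects, since they compare chronologically — for example:

from datetime import time

print(is_between(time(23, 0), (time(21, 30), time(4, 30))))   # True
print(is_between(time(12, 0), (time(21, 30), time(4, 30))))   # False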
find cosine similarity between words Is it possible to find similarity between two words? For example:cos_lib = cosine_similarity('kamra', 'cameras')This gives me an errorValueError: could not convert string to float: 'kamra'because I haven't converted the words into numerical vectors. How can I do so? I tried this but it wouldn't work either:('kamra').toarray()My aim is to check the similarity with both value(lists) of my dictionary and return the key with the highest similarity. Is that possible?features = {"CAMERA": ['camera', 'kamras'], "BATTERY": ['batterie', 'battery']}I also tried this but I am not satisfied with the results:print(damerau.distance('dual camera', 'cameras'))print(damerau.distance('kamra', 'battery'))since the results are 6 and 5. But the similar between the first two strings is more so the distance should be less. That's what I am trying to achieve.
I'd recommend using a pre-trained model from Gensim. You can download a pre-trained model and then get the cosine similarity between two of its vectors.

import gensim.downloader as api

# overview of all models in gensim: https://github.com/RaRe-Technologies/gensim-data
model_glove = api.load("glove-wiki-gigaword-100")

model_glove.relative_cosine_similarity("politics", "vote")
# output: 0.07345439049627836

model_glove.relative_cosine_similarity("film", "camera")
# output: 0.06281138757741007

model_glove.relative_cosine_similarity("economy", "fart")
# output: -0.01170896437873441

Pretrained models will have a hard time recognising typos though, because they were probably not in the training data. Figuring these out is a separate task from cosine similarity.

model_glove.relative_cosine_similarity("kamra", "cameras")
# output: -0.040658474068872255

The following function might be useful though, if you have several words and you want to pick the most similar one from the list:

model_glove.most_similar_to_given("camera", ["kamra", "movie", "politics", "umbrella", "beach"])
# output: 'movie'
Python - Optimal way to re-assign global variables from function in other module I have a module which I called entities.py - there are 2 classes within it and 2 global variables as in below pattern:FIRST_VAR = ...SECOND_VAR = ...class FirstClass: [...]class SecondClass: [...]I also have another module (let's call it main.py for now) where I import both classes and constants as like here:from entities import FirstClass, SecondClass, FIRST_VAR, SECOND_VARIn the same "main.py" module I have another constant: THIRD_VAR = ..., and another class, in which all of imported names are being used.Now, I have a function, which is being called only if a certain condition is met (passing config file path as CLI argument in my case). As my best bet, I've written it as following:def update_consts_from_config(config: ConfigParser): global FIRST_VAR global SECOND_VAR global THIRD_VAR FIRST_VAR = ... SECOND_VAR = ... THIRD_VAR = ...This works perfectly fine, although PyCharm indicates two issues, which at least I don't consider accurate.from entities import FirstClass, SecondClass, FIRST_VAR, SECOND_VAR - here it warns me that FIRST_VAR and SECOND_VAR are unused imports, but from my understanding and testing they are used and not re-declared elsewhere unless function update_consts_from_config is invoked.Also, under update_consts_from_config function:global FIRST_VAR - at this and next line, it saysGlobal variable FIRST_VAR is undefined at the module levelMy question is, should I really care about those warnings and (as I think the code is correct and clear), or am I missing something important and should come up with something different here?I know I can do something as:import entitiesfrom entities import FirstClass, SecondClassFIRST_VAR = entities.FIRST_VARSECOND_VAR = entities.SECOND_VARand work from there, but this look like an overkill for me, entities module has only what I have to import in main.py which also strictly depends on it, therefore I would rather stick to importing those names explicitly than referencing them by entities. just for that reasonWhat do you think would be a best practice here? I would like my code to clear, unambiguous and somehow optimal.
Import only entities, then refer to variables in its namespace to access/modify them.Note: this pattern, modifying constants in other modules (which then, to purists, aren't so much constants as globals) can be justified. I have tons of cases where I use constants, rather than magic variables, as module level configuration. However, for example for testing, I might reach in and modify these constants. Say to switch a cache expiry from 2 days to 0.1 seconds to test caching. Or like you propose, to override configuration. Tread carefully, but it can be useful.main.py:import entitiesdef update_consts_from_config(FIRST_VAR): entities.FIRST_VAR = FIRST_VARfirstclass = entities.FirstClass()print(f"{entities.FIRST_VAR=} before override")firstclass.debug()entities.debug()update_consts_from_config("override")print(f"{entities.FIRST_VAR=} after override")firstclass.debug()entities.debug()entities.py:FIRST_VAR = "ori"class FirstClass: def debug(self): print(f"entities.py:{FIRST_VAR=}")def debug(): print(f"making sure no closure/locality effects after object instantation {FIRST_VAR=}")$ python main.pyentities.FIRST_VAR='ori' before overrideentities.py:FIRST_VAR='ori'making sure no closure/locality effects after object instantation FIRST_VAR='ori'entities.FIRST_VAR='override' after overrideentities.py:FIRST_VAR='override'making sure no closure/locality effects after object instantation FIRST_VAR='override'Now, if FIRST_VAR wasn't a string, int or another type of immutable, you should I think be able to import it separately and mutate it. Like SECOND_VAR.append("config override") in main.py. But assigning to a global in main.py will only affect affect the main.py binding, so if you want to share actual state between main.py and entities and other modules, everyone, not just main.py needs to import entities then access entities.FIRST_VAR.Oh, and if you had:class SecondClass: def __init__(self): self.FIRST_VAR = FIRST_VARthen its instance-level value of that immutable string/int would not be affected by any overrides done after an instance creation. Mutables like lists or dictionaries would be affected because they're all different bindings pointing to the same variable.Last, wrt to those "tricky" namespaces. global in your original code means: "dont consider FIRST_VAR as a variable to assign in update_consts_from_config s local namespace , instead assign it to main.py global, script-level namespace".It does not mean "assign it to some global state magically shared between entities.py and main.py". __builtins__ might be that beast but modifying it is considered extremely bad form in Python.
Convolutional Neural Network seems to be randomly guessing So I am currently trying to build a race recognition program using a convolution neural network. I'm inputting 200px by 200px versions of the UTKFaceRegonition dataset (put my dataset on a google drive if you want to take a look). Im using 8 different classes (4 races * 2 genders) using keras and tensorflow, each having about 700 images but I have done it with 1000. The problem is when I run the network it gets at best 13.5% accuracy and about 11-12.5% validation accuracy, with a loss around 2.079-2.081, even after 50 epochs or so it won't improve at all. My current hypothesis is that it is randomly guessing stuff/not learning because 8/100=12.5%, which is about what it is getting and on other models I have made with 3 classes it was getting about 33%I noticed the validation accuracy is different on the first and sometimes second epoch, but after that it ends up staying constant. I've increased the pixel resolution, changed amount of layers, types of layer and neurons per layer, I've tried optimizers (sgd at the normal lr and at very large and small (.1 and 10^-6) and I've tried different loss functions like KLDivergence but nothing seems to have any effect on it except KLDivergence which on one run did pretty well (about 16%) but then it flopped again. Some ideas I had are maybe theres too much noise in the dataset or maybe it has to do with the amount of dense layers, but honestly I dont know why it is not learning.Heres the code to make the tensorsimport numpy as npimport matplotlibimport matplotlib.pyplot as pltimport osimport cv2import randomimport pickleWIDTH_SIZE = 200HEIGHT_SIZE = 200CATEGORIES = []for CATEGORY in os.listdir('./TRAINING'): CATEGORIES.append(CATEGORY)DATADIR = "./TRAINING"training_data = []def create_training_data(): for category in CATEGORIES: path = os.path.join(DATADIR, category) class_num = CATEGORIES.index(category) for img in os.listdir(path)[:700]: try: img_array = cv2.imread(os.path.join(path,img), cv2.IMREAD_COLOR) new_array = cv2.resize(img_array,(WIDTH_SIZE,HEIGHT_SIZE)) training_data.append([new_array,class_num]) except Exception as error: print(error)create_training_data()random.shuffle(training_data)X = []y = []for features, label in training_data: X.append(features) y.append(label)X = np.array(X).reshape(-1, WIDTH_SIZE, HEIGHT_SIZE, 3)y = np.array(y)pickle_out = open("X.pickle", "wb")pickle.dump(X, pickle_out)pickle_out = open("y.pickle", "wb")pickle.dump(y, pickle_out)Heres my built modelfrom tensorflow.keras.models import Sequentialfrom tensorflow.keras.layers import Dense, Dropout, Activation, Flatten, Conv2D, MaxPooling2Dimport picklepickle_in = open("X.pickle","rb")X = pickle.load(pickle_in)pickle_in = open("y.pickle","rb")y = pickle.load(pickle_in)X = X/255.0model = Sequential()model.add(Conv2D(256, (2,2), activation = 'relu', input_shape = X.shape[1:]))model.add(MaxPooling2D(pool_size=(2,2)))model.add(Conv2D(256, (2,2), activation = 'relu'))model.add(Conv2D(256, (2,2), activation = 'relu'))model.add(Conv2D(256, (2,2), activation = 'relu'))model.add(MaxPooling2D(pool_size=(2,2)))model.add(Conv2D(256, (2,2), activation = 'relu'))model.add(Conv2D(256, (2,2), activation = 'relu'))model.add(Dropout(0.4))model.add(MaxPooling2D(pool_size=(2,2)))model.add(Conv2D(256, (2,2), activation = 'relu'))model.add(Conv2D(256, (2,2), activation = 'relu'))model.add(Dropout(0.4))model.add(MaxPooling2D(pool_size=(2,2)))model.add(Conv2D(256, (2,2), activation = 'relu'))model.add(Conv2D(256, (2,2), activation 
= 'relu'))model.add(Dropout(0.4))model.add(MaxPooling2D(pool_size=(2,2)))model.add(Flatten())model.add(Dense(8, activation="softmax"))model.compile(optimizer='adam',loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),metrics=['accuracy'])model.fit(X, y, batch_size=16,epochs=100,validation_split=.1)Heres a log of 10 epochs I ran.5040/5040 [==============================] - 55s 11ms/sample - loss: 2.0803 - accuracy: 0.1226 - val_loss: 2.0796 - val_accuracy: 0.1250Epoch 2/1005040/5040 [==============================] - 53s 10ms/sample - loss: 2.0797 - accuracy: 0.1147 - val_loss: 2.0798 - val_accuracy: 0.1161Epoch 3/1005040/5040 [==============================] - 53s 10ms/sample - loss: 2.0797 - accuracy: 0.1190 - val_loss: 2.0800 - val_accuracy: 0.1161Epoch 4/1005040/5040 [==============================] - 53s 11ms/sample - loss: 2.0797 - accuracy: 0.1173 - val_loss: 2.0799 - val_accuracy: 0.1107Epoch 5/1005040/5040 [==============================] - 52s 10ms/sample - loss: 2.0797 - accuracy: 0.1183 - val_loss: 2.0802 - val_accuracy: 0.1107Epoch 6/1005040/5040 [==============================] - 52s 10ms/sample - loss: 2.0797 - accuracy: 0.1226 - val_loss: 2.0801 - val_accuracy: 0.1107Epoch 7/1005040/5040 [==============================] - 52s 10ms/sample - loss: 2.0797 - accuracy: 0.1238 - val_loss: 2.0803 - val_accuracy: 0.1107Epoch 8/1005040/5040 [==============================] - 54s 11ms/sample - loss: 2.0797 - accuracy: 0.1169 - val_loss: 2.0802 - val_accuracy: 0.1107Epoch 9/1005040/5040 [==============================] - 52s 10ms/sample - loss: 2.0797 - accuracy: 0.1212 - val_loss: 2.0803 - val_accuracy: 0.1107Epoch 10/1005040/5040 [==============================] - 53s 11ms/sample - loss: 2.0797 - accuracy: 0.1177 - val_loss: 2.0802 - val_accuracy: 0.1107So yeah, any help on why my network seems to be just guessing? Thank you!
The problem lies in the design of you network. Typically you'd want in the first layers to learn high-level features and use larger kernel with odd size. Currently you're essentially interpolating neighbouring pixels. Why odd size? Read e.g. here.Number of filters typically increases from small (e.g. 16, 32) number to larger values when going deeper into the network. In your network all layers learn the same number of filters. The reasoning is that the deeper you go, the more fine-grained features you'd like to learn - hence increase in number of filters.In your ANN each layer also cuts out valuable information from the image (by default you are using valid padding).Here's a very basic network that gets me after 40 seconds and 10 epochs over 95% training accuracy:import pickleimport tensorflow as tffrom tensorflow.keras.models import Sequentialfrom tensorflow.keras.layers import Dense, Flatten, Conv2D, MaxPooling2Dpickle_in = open("X.pickle","rb")X = pickle.load(pickle_in)pickle_in = open("y.pickle","rb")y = pickle.load(pickle_in)X = X/255.0model = Sequential()model.add(Conv2D(16, (5,5), activation = 'relu', input_shape = X.shape[1:], padding='same'))model.add(MaxPooling2D(pool_size=(2,2)))model.add(Conv2D(32, (3,3), activation = 'relu', padding='same'))model.add(MaxPooling2D(pool_size=(2,2)))model.add(Conv2D(64, (3,3), activation = 'relu', padding='same'))model.add(MaxPooling2D(pool_size=(2,2)))model.add(Flatten())model.add(Dense(512))model.add(Dense(8, activation='softmax'))model.compile(optimizer='adam',loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),metrics=['accuracy'])Architecture:Model: "sequential_4"_________________________________________________________________Layer (type) Output Shape Param # =================================================================conv2d_19 (Conv2D) (None, 200, 200, 16) 1216 _________________________________________________________________max_pooling2d_14 (MaxPooling (None, 100, 100, 16) 0 _________________________________________________________________conv2d_20 (Conv2D) (None, 100, 100, 32) 4640 _________________________________________________________________max_pooling2d_15 (MaxPooling (None, 50, 50, 32) 0 _________________________________________________________________conv2d_21 (Conv2D) (None, 50, 50, 64) 18496 _________________________________________________________________max_pooling2d_16 (MaxPooling (None, 25, 25, 64) 0 _________________________________________________________________flatten_4 (Flatten) (None, 40000) 0 _________________________________________________________________dense_7 (Dense) (None, 512) 20480512 _________________________________________________________________dense_8 (Dense) (None, 8) 4104 =================================================================Total params: 20,508,968Trainable params: 20,508,968Non-trainable params: 0Training:Train on 5040 samples, validate on 560 samplesEpoch 1/105040/5040 [==============================] - 7s 1ms/sample - loss: 2.2725 - accuracy: 0.1897 - val_loss: 1.8939 - val_accuracy: 0.2946Epoch 2/105040/5040 [==============================] - 6s 1ms/sample - loss: 1.7831 - accuracy: 0.3375 - val_loss: 1.8658 - val_accuracy: 0.3179Epoch 3/105040/5040 [==============================] - 6s 1ms/sample - loss: 1.4857 - accuracy: 0.4623 - val_loss: 1.9507 - val_accuracy: 0.3357Epoch 4/105040/5040 [==============================] - 6s 1ms/sample - loss: 1.1294 - accuracy: 0.6028 - val_loss: 2.1745 - val_accuracy: 0.3250Epoch 5/105040/5040 [==============================] 
- 6s 1ms/sample - loss: 0.8060 - accuracy: 0.7179 - val_loss: 3.1622 - val_accuracy: 0.3000Epoch 6/105040/5040 [==============================] - 6s 1ms/sample - loss: 0.5574 - accuracy: 0.8169 - val_loss: 3.7494 - val_accuracy: 0.2839Epoch 7/105040/5040 [==============================] - 6s 1ms/sample - loss: 0.3756 - accuracy: 0.8813 - val_loss: 4.9125 - val_accuracy: 0.2643Epoch 8/105040/5040 [==============================] - 6s 1ms/sample - loss: 0.3001 - accuracy: 0.9036 - val_loss: 5.6300 - val_accuracy: 0.2821Epoch 9/105040/5040 [==============================] - 6s 1ms/sample - loss: 0.2345 - accuracy: 0.9337 - val_loss: 5.7263 - val_accuracy: 0.2679Epoch 10/105040/5040 [==============================] - 6s 1ms/sample - loss: 0.1549 - accuracy: 0.9581 - val_loss: 7.3682 - val_accuracy: 0.2732As you can see, validation score is terrible, but the point was to demonstrate that poor architecture can prevent training altogether.
How do I choose multiple minimum values in a dictionary? I'm trying to inspect and choose values in a dictionary if they're the same minimum values from the whole dictionary.As you can see, my code is choosing duplicate values although they're not the minimum values. How do i correct this error? For example, my code shouldn't't delete values (including duplicates) unless there are multiple '7.14' def tieBreaker (preDictionary): while True: minValues = min(preDictionary.values()) minKeys = [k for k in preDictionary if preDictionary[k] == minValues] print(minKeys) for i in range(0, len(minKeys)): for j in range(0, len(minKeys)): if minKeys[i] > minKeys[j]: del preDictionary[minKeys[i]] i += 1 j += 1 if len(minKeys) < 2: return preDictionary breakCurrent output is {'candidate1': '35.71', 'candidate2': '28.57', 'candidate4': '14.29', 'candidate3': '7.14'}While the input is {'candidate1': '35.71', 'candidate2': '28.57', 'candidate5': '14.29', 'candidate4': '14.29', 'candidate3': '7.14'}candidate 5 should not be deleted as although it's a duplicate value, not a minimum..Also, minKeys currently is ['candidate5', 'candidate4'] where it should be ['candidate3']
You do not need a while loop. You can run through each key value pair and construct a new dict without the keys with minimum value.d = {'candidate1': 35.71, 'candidate2': 28.57, 'candidate4': 14.29, 'candidate3': 7.14}min_value = min(d.values())d_without_min_value = {k: v for k, v in d.items() if v != min_value}# output# {'candidate1': 35.71, 'candidate2': 28.57, 'candidate4': 14.29}EDITSeems like you are passing values as string instead of float. Calling min() on a list of str will result in minimum value in lexicographical order. Remove the quotes around the values or convert the values into float before you process the dictd = {k: float(v) for k, v in d.items()}
How to create .ts files for Qt Linguist with PySide6? I have a python project written with PySide2 and now I want to migrate to PySide6. I used Qt Linguist to translate UI and created .ts files with help of this command:pylupdate5 utility from PyQt5 package (but it worked fine for my project with PySide2). Now I plan to get rid of PySide2 and PyQt5 packages.So, I need to replace pylupdate5 with something from PySide6 package. I assumed lupdate should to the job but it seems it works only with C++ code. It gives me errors like Unterminated C++ character or Unbalanced opening parenthesis in C++ code. An with lupdate -help I don't see how I may switch it to python mode (similar to uic -g python).Does anyone know how to create .ts files for Qt Linguist from python source files?
lupdate of Qt 6.1 does not support Python, but in Qt 6.2 that problem no longer exists, so you have 2 options:

1. Install Qt 6.2 and use its lupdate.
2. Wait for PySide6 6.2.0 to be released (at this moment it is only available for Windows) and use its lupdate.
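As a rough sketch of what the PySide6 route could look like once 6.2+ is installed (the source and .ts file names below are placeholders; check the tool's help output for the exact options of your version):

pyside6-lupdate main.py mywidget.py -ts translations/app_de.ts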
asyncio: can a task only start when previous task reach a pre-defined stage? I am starting with asyncio that I wish to apply to following problem:Data is split in chunks.A chunk is 1st compressed.Then the compressed chunk is written in the file.A single file is used for all chunks, so I need to process them one by one.with open('my_file', 'w+b') as f: for chunk in chunks: compress_chunk(ch) f.write(ch)From this context, to run this process faster, as soon as the write step of current iteration starts, could the compress step of next iteration be triggered as well?Can I do that with asyncio, keeping a similar for loop structure? If yes, could you share some pointers about this?I am guessing another way to run this in parallel is by using ProcessPoolExecutor and splitting fully the compress phase from the write phase. This means compressing 1st all chunks in different executors.Only when all chunks are compressed, then starting the writing step .But I would like to investigate the 1st approach with asyncio 1st, if it makes sense.Thanks in advance for any help.Bests
You can do this with a producer-consumer model. As long as there is one producer and one consumer, you will have the correct order. For your use-case, that's all you'll benefit from. Also, you should use the aiofiles library. Standard file IO will mostly block your main compression/producer thread and you won't see much speedup. Try something like this:

import asyncio
import aiofiles

async def produce(queue, chunks):
    for chunk in chunks:
        compressed_chunk = compress_chunk(chunk)
        await queue.put(compressed_chunk)

async def consume(queue):
    async with aiofiles.open('my_file', 'wb') as f:
        while True:
            compressed_chunk = await queue.get()
            await f.write(compressed_chunk)
            queue.task_done()

async def main():
    queue = asyncio.Queue()
    producer = asyncio.create_task(produce(queue, chunks))
    consumer = asyncio.create_task(consume(queue))
    # wait for the producer to finish
    await producer
    # wait for the consumer to finish processing and cancel it
    await queue.join()
    consumer.cancel()

asyncio.run(main())

https://github.com/Tinche/aiofiles

Using asyncio.Queue for producer-consumer flow
Add custom field to ModelSerializer and fill it in post save signal In my API I have a route to add a resource named Video. I have a post_save signal to this Model where I proccess this video and I generate a string. I want a custom field in my serializer to be able to fill it with this text that was generated. So, in my response I can have this value.class VideoSerializer(serializers.ModelSerializer): class Meta: model = Video fields = ('id', 'owner', 'description', 'file')@receiver(post_save, sender=Video)def encode_video(sender, instance=None, created=False, **kwargs): string_generated = do_stuff()Right now what I am getting in my response is:{ "id": 17, "owner": "b424bc3c-5792-470f-bac4-bab92e906b92", "description": "", "file": "https://z.s3.amazonaws.com/videos/sample.mkv"}I expect a new key "string" with the value generated by the signal.
In order to include string_generated in your response you need to be able to access that field from your serializer. There are 2 convenient ways to do that:

1. Add string_generated as a field in your model and expose it in VideoSerializer as a SerializerMethodField, so that string_generated is a read-only value. This means it will only appear in the response. And finally delete your post_save signal and override the save() method instead:

class VideoSerializer(serializers.ModelSerializer):
    string_generated = serializers.SerializerMethodField()

    class Meta:
        model = Video
        fields = ('id', 'owner', 'description', 'file', 'string_generated')

    def get_string_generated(self, obj):
        return obj.string_generated

# models.py
class Video(models.Model):
    # your other fields...
    string_generated = models.TextField(blank=True, default='')

    def save(self, *args, **kwargs):
        self.string_generated = do_stuff()
        super().save(*args, **kwargs)

2. If possible, delete your post-save signal. Then, add do_stuff as a SerializerMethodField in your VideoSerializer:

class VideoSerializer(serializers.ModelSerializer):
    string_generated = serializers.SerializerMethodField()

    class Meta:
        model = Video
        fields = ('id', 'owner', 'description', 'file', 'string_generated')

    def get_string_generated(self, obj):
        return do_stuff()
Unable to send an email by python Im trying to send a basic email by this script s = smtplib.SMTP(email_user, 587)s.starttls()s.login(email_user, pasword)message = 'Hi There, sending this email from python's.sendmail(email_user, email_user, message)s.quit()and getting the following errorfor res in _socket.getaddrinfo(host, port, family, type, proto, flags):socket.gaierror: [Errno 11003] getaddrinfo failed
On your first line:

s = smtplib.SMTP(email_user, 587)

'email_user' should instead be the email server, for example 'smtp.gmail.com'.
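A minimal corrected sketch, assuming a Gmail account (the address and password below are placeholders; other providers just need their own SMTP host):

import smtplib

email_user = 'you@example.com'   # placeholder address
password = 'your-password'       # placeholder credential

s = smtplib.SMTP('smtp.gmail.com', 587)   # SMTP host name, not your own address
s.starttls()
s.login(email_user, password)
s.sendmail(email_user, email_user, 'Hi There, sending this email from python')
s.quit()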
How to calculate average values of an array for each class? I was wondering if there is an efficient way to calculate the average values for each class.For example:scores = [1, 2, 3, 4, 5]classes = [0, 0, 1, 1, 1]Expected output isoutput = [[0, 1.5], [1, 4.0]]where output is [[class_indx, avg_value], ...]I can achieve it using the dictionary. But it means I need to convert the array (list in this example) into dict first and then convert back to array when the job is done. It seems like a workaround in this case and I would prefer to operate directly on arrays.I guess someone has invented the wheel but just I haven't dug it out from my search. Are there any approaches to do that efficiently?Thanks.
With the itertools.groupby function:

from itertools import groupby

scores = [1, 2, 3, 4, 5]
classes = [0, 0, 1, 1, 1]

res = []
for k, g in groupby(zip(scores, classes), key=lambda x: x[1]):
    group = list(g)
    res.append([k, sum(i[0] for i in group) / len(group)])

print(res)  # [[0, 1.5], [1, 4.0]]

Or with a collections.defaultdict object:

from collections import defaultdict

scores = [1, 2, 3, 4, 5]
classes = [0, 0, 1, 1, 1]

d = defaultdict(list)
for sc, cl in zip(scores, classes):
    d[cl].append(sc)

res = [[cl, sum(lst) / len(lst)] for cl, lst in d.items()]
print(res)  # [[0, 1.5], [1, 4.0]]
Display graphs in the toplevel window What I am trying to do :I am trying to display three displays in a loop in the toplevel window rather than the main window. I am getting the error which is mentioned below. So, I haven't been able to run it.Error I am getting :Exception in Tkinter callbackTraceback (most recent call last): File "C:\Users\sel\Anaconda3\lib\tkinter\__init__.py", line 1705, in __call__ return self.func(*args) File "<ipython-input-19-df56c3798d6a>", line 91, in on_window show_figure(selected_figure) File "<ipython-input-19-df56c3798d6a>", line 53, in show_figure one_figure = all_figures[number]IndexError: list index out of rangeBelow is my code :import tkinter as tkimport matplotlib.pyplot as pltfrom matplotlib.backends.backend_tkagg import FigureCanvasTkAggall_figures = []selected_figure = 0 class MyClass(): def __init__(self): self.sheets = [[1,2,3], [3,1,2], [1,5,1]] self.W = 2 self.L = 5 def plot_sheet(self, data): """plot single figure""" fig, ax = plt.subplots(1) ax.set_xlim([0, self.W]) ax.set_ylim([0, self.L]) ax.plot(data) return fig def generate_all_figures(self): """create all figures and keep them on list""" global all_figures for data in self.sheets: fig = self.plot_sheet(data) all_figures.append(fig)dataPlot = None def on_window(): def show_figure(number): global dataPlot # remove old canvas if dataPlot is not None: # at start there is no canvas to destroy dataPlot.get_tk_widget().destroy() # get figure from list one_figure = all_figures[number] # display canvas with figure dataPlot = FigureCanvasTkAgg(one_figure, master=window) dataPlot.draw() dataPlot.get_tk_widget().grid(row=0, column=0) def on_prev(): global selected_figure # get number of previous figure selected_figure -= 1 if selected_figure < 0: selected_figure = len(all_figures)-1 show_figure(selected_figure) def on_next(): global selected_figure # get number of next figure selected_figure += 1 if selected_figure > len(all_figures)-1: selected_figure = 0 show_figure(selected_figure) top = tk.Toplevel() top.wm_geometry("794x370") top.title('Optimized Map') selected_figure = 0 dataPlot = None # default value for `show_figure` show_figure(selected_figure) frame = tk.Frame(top) frame.grid(row=1, column=0) b1 = tk.Button(frame, text="<<", command=on_prev) b1.grid(row=0, column=0) b2 = tk.Button(frame, text=">>", command=on_next) b2.grid(row=0, column=1)window = tk.Tk()b1 = tk.Button(window, text="Next", command=on_window)b1.grid(row=0, column=0)window.mainloop()
You have created a class with a generate_all_figures method, but you haven't created a MyClass object and run generate_all_figures(), therefore your all_figures list is empty. This is why you get an IndexError.

You need to create a MyClass object and run generate_all_figures() to populate your all_figures list before executing on_window():

window = tk.Tk()

mc = MyClass()
mc.generate_all_figures()

b1 = tk.Button(window, text="Next", command=on_window)
b1.grid(row=0, column=0)

window.mainloop()

By the way, you don't need global all_figures in generate_all_figures (see Defining lists as global variables in Python).
How to enable timing magics for every cell in Jupyter notebook? The %%time and %%timeit magics enable timing of a single cell in a Jupyter or iPython notebook.Is there similar functionality to turn timing on and off for every cell in a Jupyter notebook?This question is related but does not have an answer to the more general question posed of enabling a given magic automatically in every cell.
A hacky way to do this is via a custom.js file (usually placed in ~/.jupyter/custom/custom.js)The example of how to create buttons for the toolbar is located here and it's what I based this answer off of. It merely adds the string form of the magics you want to all cells when pressing the enable button, and the disable button uses str.replace to "turn" it off. define([ 'base/js/namespace', 'base/js/events'], function(Jupyter, events) { events.on('app_initialized.NotebookApp', function(){ Jupyter.toolbar.add_buttons_group([ { 'label' : 'enable timing for all cells', 'icon' : 'fa-clock-o', // select your icon from http://fortawesome.github.io/Font-Awesome/icons 'callback': function () { var cells = Jupyter.notebook.get_cells(); cells.forEach(function(cell) { var prev_text = cell.get_text(); if(prev_text.indexOf('%%time\n%%timeit\n') === -1) { var text = '%%time\n%%timeit\n' + prev_text; cell.set_text(text); } }); } }, { 'label' : 'disable timing for all cells', 'icon' : 'fa-stop-circle-o', // select your icon from http://fortawesome.github.io/Font-Awesome/icons 'callback': function () { var cells = Jupyter.notebook.get_cells(); cells.forEach(function(cell) { var prev_text = cell.get_text(); var text = prev_text.replace('%%time\n%%timeit\n',''); cell.set_text(text); }); } } // add more button here if needed. ]); });});
How to call a function with pymunk's collision handler? I am trying to implement an AI to solve a simple task: move from A to B, while avoiding obstacles. So far I used pymunk and pygame to build the enviroment and this works quite fine. But now I am facing the next step: to get rewards for my reinforcement learning algorithm I need to detect the collision between the player and, for example, a wall. Or simply to restart the enviroment when a wall/obstacle gets hit. Setting the c_handler.begin function equals the Game.restart fuctions helped me to print out that the player actually hit something. But except from print() I can't access any other function concerning the player position and I don't really know what to do next. So how can i use the pymunk collision to restart the environment? Or are there other ways for resetting or even other libraries to build a proper enviroment?def restart(self, arbiter, data): car.body.position = 50, 50 return True def main(self):[...]c_handler = space.add_collision_handler(1,2)c_handler.begin = Game.restart[...]
In general it seems like it would be useful for you to read up a bit on how classes work in python, particularly how class instance variables work. Anyway, if you already know you want to manipulate the car variable, you can store it in the class itself. Then, since you have self available in the restart method, you can just do whatever there. Or, the other option is to find out the body that you want to change from the arbiter that is passed into the callback. Note that pymunk calls the handler with the arguments in the order (arbiter, space, data).

option 1:

class MyClass:
    def restart(self, arbiter, space, data):
        self.car.body.position = 50, 50
        return True

    def main(self):
        [...]
        self.car = car
        c_handler = space.add_collision_handler(1, 2)
        c_handler.begin = self.restart
        [...]

option 2:

def restart(arbiter, space, data):
    arbiter.shapes[0].body.position = 50, 50
    # or maybe it's the other shape, in that case you should do this instead
    # arbiter.shapes[1].body.position = 50, 50
    return True
The application of self-attention layer raised index error So I am doing a classification machine learning with the input of (batch, step, features).In order to improve the accuracy of this model, I intended to apply a self-attention layer to it.I am unfamiliar with how to use it for my case since most examples online are concerned with embedding NLP models.def opt_select(optimizer): if optimizer == 'Adam': adamopt = tf.keras.optimizers.Adam(lr=learning_rate, beta_1=0.9, beta_2=0.999, epsilon=1e-8) return adamopt elif optimizer == 'RMS': RMSopt = tf.keras.optimizers.RMSprop(lr=learning_rate, rho=0.9, epsilon=1e-6) return RMSopt else: print('undefined optimizer')def LSTM_attention_model(X_train, y_train, X_test, y_test, num_classes, loss,batch_size=68, units=128, learning_rate=0.005,epochs=20, dropout=0.2, recurrent_dropout=0.2,optimizer='Adam'): class myCallback(tf.keras.callbacks.Callback): def on_epoch_end(self, epoch, logs={}): if (logs.get('acc') > 0.90): print("\nReached 90% accuracy so cancelling training!") self.model.stop_training = True callbacks = myCallback() model = tf.keras.models.Sequential() model.add(Masking(mask_value=0.0, input_shape=(X_train.shape[1], X_train.shape[2]))) model.add(Bidirectional(LSTM(units, dropout=dropout, recurrent_dropout=recurrent_dropout))) model.add(SeqSelfAttention(attention_activation='sigmoid')) model.add(Dense(num_classes, activation='softmax')) opt = opt_select(optimizer) model.compile(loss=loss, optimizer=opt, metrics=['accuracy']) history = model.fit(X_train, y_train, batch_size=batch_size, epochs=epochs, validation_data=(X_test, y_test), verbose=1, callbacks=[callbacks]) score, acc = model.evaluate(X_test, y_test, batch_size=batch_size) yhat = model.predict(X_test) return history, thatThis led to IndexError: list index out of rangeWhat is the correct way to apply this layer to my model?As requested, one may use the following codes to simulate a set of the dataset.import tensorflow as tffrom tensorflow.keras.layers import Dense, Dropout,Bidirectional,Masking,LSTMfrom keras_self_attention import SeqSelfAttentionX_train = np.random.rand(700, 50,34)y_train = np.random.choice([0, 1], 700)X_test = np.random.rand(100, 50, 34)y_test = np.random.choice([0, 1], 100)batch_size= 217epochs = 600dropout = 0.6Rdropout = 0.7learning_rate = 0.00001optimizer = 'RMS'loss = 'categorical_crossentropy'num_classes = y_train.shape[1]LSTM_attention_his,yhat = LSTM_attention_model(X_train,y_train,X_test,y_test,loss =loss,num_classes=num_classes,batch_size=batch_size,units=32,learning_rate=learning_rate,epochs=epochs,dropout = 0.5,recurrent_dropout=Rdropout,optimizer=optimizer)
Here is how I would rewrite the code -import tensorflow as tffrom tensorflow.keras.layers import Dense, Dropout, Bidirectional, Masking, LSTM, Reshapefrom keras_self_attention import SeqSelfAttentionimport numpy as npdef opt_select(optimizer): if optimizer == 'Adam': adamopt = tf.keras.optimizers.Adam(lr=learning_rate, beta_1=0.9, beta_2=0.999, epsilon=1e-8) return adamopt elif optimizer == 'RMS': RMSopt = tf.keras.optimizers.RMSprop(lr=learning_rate, rho=0.9, epsilon=1e-6) return RMSopt else: print('undefined optimizer')def LSTM_attention_model(X_train, y_train, X_test, y_test, num_classes, loss, batch_size=68, units=128, learning_rate=0.005, epochs=20, dropout=0.2, recurrent_dropout=0.2, optimizer='Adam'): class myCallback(tf.keras.callbacks.Callback): def on_epoch_end(self, epoch, logs={}): if (logs.get('accuracy') > 0.90): print("\nReached 90% accuracy so cancelling training!") self.model.stop_training = True callbacks = myCallback() model = tf.keras.models.Sequential() model.add(Masking(mask_value=0.0, input_shape=(X_train.shape[1], X_train.shape[2]))) model.add(Bidirectional(LSTM(units, dropout=dropout, recurrent_dropout=recurrent_dropout, return_sequences=True))) model.add(SeqSelfAttention(attention_activation='sigmoid')) model.add(Reshape((-1, model.output.shape[1]*model.output.shape[2]))) model.add(Dense(num_classes, activation='softmax')) opt = opt_select(optimizer) model.compile(loss=loss, optimizer=opt, metrics=['accuracy']) history = model.fit(X_train, y_train, batch_size=batch_size, epochs=epochs, validation_data=(X_test, y_test), verbose=1, callbacks=[callbacks]) score, acc = model.evaluate(X_test, y_test, batch_size=batch_size) yhat = model.predict(X_test) return history, thatX_train = np.random.rand(700, 50,34)y_train = np.random.choice([0, 1], (700, 1))X_test = np.random.rand(100, 50, 34)y_test = np.random.choice([0, 1], (100, 1))batch_size= 217epochs = 600dropout = 0.6Rdropout = 0.7learning_rate = 0.00001optimizer = 'RMS'loss = 'categorical_crossentropy'num_classes = y_train.shape[1]LSTM_attention_his,yhat = LSTM_attention_model(X_train,y_train,X_test,y_test, loss =loss,num_classes=num_classes,batch_size=batch_size,units=32, learning_rate=learning_rate,epochs=epochs,dropout = 0.5,recurrent_dropout=Rdropout,optimizer=optimizer)These are the changes I had to make to get this to start training -The original issue was caused by the LSTM layer outputting the wrong dimensions. The SeqSelfAttention layer needs a 3D input (one dimension corresponding to the sequence of the data) which was missing from the output of the LSTM layer. As mentioned by @today, in the comments, this can be solved by adding return_sequences=True to the LSTM layer.But even with that modification,the code still gives an error at when trying to compute the cost function.The issue is that, the output of the self-attention layer is (None, 50, 64) when this is directly passed into the Dense layer, the final output of the network becomes (None, 50, 1). This doesn't make sense for what we are trying to do, because the final output should just contain a single label for each datapoint (it should have the shape (None, 1)). The issue is the output from the self-attention layer which is 3 dimensional (each data point has a (50, 64) feature vector). This needs to be reshaped into a single dimensional feature vector for the computation to make sense. 
So I added a reshape layer model.add(Reshape((-1, ))) between the attention layer and the Dense layer.In addition, the myCallback class is testing if logs.get('acc') is > 0.9 but I think it should be (logs.get('accuracy').To comment on OP's question in the comment on what kind of column should be added, in this case, it was just a matter of extracting the full sequential data from the LSTM layer. Without the return_sequence flag, the output from the LSTM layer is (None, 64) This is simply the final features of the LSTM without the intermediate sequential data.
Python numpy efficiently combining arrays My question might sound biology heavy, but I am confident anyone could answer this without any knowledge of biology and I could really use some help.Suppose you have a function, create_offspring(mutations, genome1, genome2), that takes a list of mutations, which are in the form of a numpy 2d arrays with 5 rows and 10 columns as such ( each set of 5 vals is a mutation): [ [4, 3, 6 , 7, 8], [5, 2, 6 , 7, 8] ...]The function also takes two genomes which are in the form of numpy 2d arrays with 5 rows and 10 columns. The value at each position in the genomes is either 5 zeros at places where a mutation hasn't occurred, or filled with the values corresponding to the mutation list for spots where a mutation has occurred. The follow is an example of a genome that has yet to have a mutation at pos 0 and has a mutation at position 1 already. [ [0, 0, 0 , 0, 0], [5, 2, 5 , 7, 8] ...]What I am trying to accomplish is to efficiently ( I have a current way that works but it is WAY to slow) generate a child genome from my two genomes that is a numpy array and a random combination of the two parent genomes(AKA the numpy arrays). By random combination, I mean that each position in the child array has a 50% chance of either being the 5 values at position X from parent 1 genome or parent 2. For example if parent 1 is[0,0,0,0,0], [5, 2, 6 , 7, 8] ...]and parent 2 is[ [4, 3, 6 , 7, 8], [0, 0, 0 , 0, 0] ...]the child genome should have a 50% chance of getting all zeros at position 1 and a 50% chance of getting [4, 3, 6 , 7, 8] etc..Additionally, there needs to be a .01% chance that the child genome gets whatever the corresponding mutation is from the mutation list passed in at the beginning.I have a current method for solving this, but it takes far too long: def create_offspring(mutations, genome_1, genome_2 ): ##creates an empty genome child_genome = numpy.array([[0]*5] * 10, dtype=np.float) for val in range(10): random = rand() if random < mutation_rate: child_genome[val] = mutation_list[val] elif random > .5: child_genome[val] = genome1[val] else: child_genome[val] = genome2[val] return child_genome
Thanks for the clarification in the comments. Things work differently with 10000 than with 10 :)

First, there's a faster way to make an empty (or full) array:

np.zeros(shape=(rows, cols), dtype=np.float)

Then, try generating all the random numbers at once, checking them simultaneously, and then working from there.

randoms = np.random.rand(len(genome))
half = (randoms < .5)
for val, (rand, use_first) in enumerate(zip(randoms, half)):
    your_code

This will at least speed up the random number generation. I'm still thinking on the rest.
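For the fully vectorized version that hint points towards, here is a sketch of the whole function. It assumes the genomes and mutation list are (10, 5) arrays (matching the loop in the question) and that the 50/50 parent choice can be done per row with np.where; it is not the author's exact code.

import numpy as np

def create_offspring(mutation_list, genome1, genome2, mutation_rate=0.0001):
    # one random draw per position, reused for both the parent choice and the mutation check
    randoms = np.random.rand(len(genome1))
    # 50/50 per-row choice between the two parents
    child = np.where((randoms < 0.5)[:, None], genome1, genome2)
    # rare per-row override with the corresponding mutation
    mutate = randoms < mutation_rate
    child[mutate] = np.asarray(mutation_list)[mutate]
    return child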
Seemingly nonsensical runtime increases when switching from pure C to C with Numpy objects IntroductionI am trying to realise some number crunching on a one-dimensional array in C (herafter: standalone) and as a Numpy module written in C (herafter: module) simultaneously. Since all I need to do with the array is to compare selected elements, I could use an abstraction layer for the array access and thus I can use the same code for the standalone or the module.Now, I expect the module to be somewhat slower, since comparing elements of a Numpy array of unknown type using descr->f->compare requires extra function calls and similar and thus is more costly than the analogous operation for a C array of known type. However, when looking at the output of a profiler (Valgrind), I found runtime increases in the module for lines which have no obvious connection to the Python methods. I want to understand and avoid this, if possible.Minimal exampleUnfortunately, my minimal example is quite lengthy. Note that the Python variant is no real module anymore due to example reduction.# include <stdlib.h># include <stdio.h># ifdef PYTHON # include <Python.h> # include <numpy/arrayobject.h> // Define Array creation and access routines for Python. typedef PyArrayObject * Array; static inline char diff_sign (Array T, int i, int j) { return T->descr->f->compare ( PyArray_GETPTR1(T,i), PyArray_GETPTR1(T,j), T ); } Array create_array (int n) { npy_intp dims[1] = {n}; Array T = (Array) PyArray_SimpleNew (1, dims, NPY_DOUBLE); for (int i=0; i<n; i++) {* (double *) PyArray_GETPTR1(T,i) = i;} // Line A return T; }#endif# ifdef STANDALONE // Define Array creation and access routines for standalone C. typedef double * Array; static inline char diff_sign (Array T, int i, int j) { return (T[i]>T[j]) - (T[i]<T[j]); } Array create_array (int n) { Array T = malloc (n*sizeof(double)); for (int i=0; i<n; i++) {T[i] = i;} // Line B return T; }# endifint main(){ # ifdef PYTHON Py_Initialize(); import_array(); # endif // avoids that the compiler knows the values of certain variables at runtime. int volatile blur = 0; int n = 1000; Array T = create_array (n); # ifdef PYTHON for (int i=0; i<n; i++) {* (double *) PyArray_GETPTR1(T,i) = i;} // Line C # endif # ifdef STANDALONE for (int i=0; i<n; i++) {T[i] = i;} // Line D #endif int a = 333 + blur; int b = 444 + blur; int c = 555 + blur; int e = 666 + blur; int f = 777 + blur; int g = 1 + blur; int h = n + blur; // Line E standa. module for (int i=h; i>0; i--) // 4000 8998 { int d = c; do c = (c+a)%b; // 4000 5000 while (c>n-1); // 2000 2000 if (c==e) f*=2; // 3000 3000 if ( diff_sign(T,c,d)==g ) f++; // 5000 5000 } printf("%i\n", f);}I compiled this with the following two commands:gcc source.c -o standalone -O3 -g -std=c11 -D STANDALONEgcc source.c -o module -O3 -g -std=c11 -D PYTHON -lpython2.7 -I/usr/include/python2.7Changing to -O2 does not change the following; changing the compiler to Clang does change the minimal example but not the phenomenon with my actual code.Profiling resultsThe interesting things happen after Line E and I gave the total runtime spent in those lines as reported by the profiler as comments in the source code: Despite having no direct relation to whether I compile as standalone or module, the runtimes for these lines strongly differ. 
In particular, in my actual application, the additional time spent in those lines in the module makes up one fourth of the module’s total runtime. What’s even more weird is that if I remove line C (and D) – which is redundant in the example, as the array’s values are already set (to the same values) in line A (and B) – the runtime spent in the loop header is reduced from 8998 to 6002 (the other reported runtimes do not change). The same thing happens if I change int n = 1000; to int n = 1000 + blur;, i.e., if I make n unknown at compile time. This does not make much sense to me, and since it has a relevant impact on the runtime, I would like to avoid it.
Questions
Where do these runtime increases come from? I am aware that compilers are not perfect and sometimes work in seemingly mysterious ways, but I would like to understand.
How can I avoid these runtime increases?
You have to be very careful when interpreting callgrind profiles. Callgrind gives you the instruction fetch count, i.e. the number of instructions. This is not directly connected to actual performance on modern CPUs, as instructions can have different latencies and throughputs and can be reordered by suitably capable CPUs.
Also, you are matching the instruction fetches to the lines the debug symbols associate them with. Those do not correspond exactly; e.g. the module code associates a register copy and a nop instruction (which are essentially free in terms of runtime compared to the following division) with the loop line in the source code, while the standalone variant associates them with the line above. You can see that in the machine code tab when using --dump-instr=yes in kcachegrind. This will have something to do with different registers being available for the two variants due to the different number of function calls, which implies spilling values onto the stack.
Let's look at the modulo loops to see if there is a significant runtime difference:
module:
  400b58: 42 8d 04 3b        lea    (%rbx,%r15,1),%eax
  400b5c: 99                 cltd
  400b5d: 41 f7 fe           idiv   %r14d
  400b60: 81 fa e7 03 00 00  cmp    $0x3e7,%edx
  400b66: 89 d3              mov    %edx,%ebx
  400b68: 7f ee              jg     400b58 <main+0x1b8>
standalone:
  4005f0: 8d 04 32           lea    (%rdx,%rsi,1),%eax
  4005f3: 99                 cltd
  4005f4: f7 f9              idiv   %ecx
  4005f6: 81 fa e7 03 00 00  cmp    $0x3e7,%edx
  4005fc: 7f f2              jg     4005f0 <main+0x140>
The difference is one register-to-register copy, mov %edx,%ebx (likely again caused by different register pressure due to earlier function calls). This is one of the cheapest operations available in a CPU, probably around 1-2 cycles with good throughput, so it should have no measurable effect on the actual wall time. The idiv instruction is the expensive part; it should be around 20 cycles with poor throughput. So the instruction fetch count here is grossly misleading.
A better tool for such detailed profiling is a sampling profiler like perf record/report. When you run long enough you will be able to single out the instructions that are costing a lot of time, though the high sample counts will not match up directly with the slow instructions either, as the CPU may execute later independent instructions in parallel with the slow ones.
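For reference, a minimal perf session on the standalone binary could look like this (flags and output vary across perf versions; the binary name matches the build commands above):
perf record -g ./standalone
perf report            # per-function / per-line sample counts
perf annotate main     # per-instruction annotation of main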
error while using FREAK I'm trying to create Descriptor extractor using FREAK. but at the following line:freakExtractor = cv2.DescriptorExtractor_create('FREAK')I get an error saying:freakExtractor = cv2.DescriptorExtractor_create('FREAK')AttributeError: 'module' object has no attribute 'DescriptorExtractor_create'can someone tell me what is the exact problem and why i'm getting this error? I'm using ubuntu 12.10 with opencv 2.4.3 and python 2.7.
I think cv2.DescriptorExtractor_create('FREAK') is not part of the Python interface in that build. Either use a newer OpenCV build where it is exposed to Python, or write that part in C++, where the extractor is available in the version you have.
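For example, with a newer OpenCV that ships the contrib modules (the opencv-contrib-python package), FREAK is exposed roughly like this — a sketch only; the image path is a placeholder and the exact module layout can differ between versions:
import cv2

img = cv2.imread('image.png', cv2.IMREAD_GRAYSCALE)

# detect keypoints with any detector, e.g. FAST
fast = cv2.FastFeatureDetector_create()
keypoints = fast.detect(img, None)

# compute FREAK descriptors (lives in the xfeatures2d contrib module)
freak = cv2.xfeatures2d.FREAK_create()
keypoints, descriptors = freak.compute(img, keypoints)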
Django - User creation with custom user model results in internal error Ok, I know this is a silly question but I am blocked and I can't figure out what to do.I have searched on google and stackoverflow but did not found any answer :I tried this :Adding custom fields to users in djangoDjango - Create user profile on user creationhttps://docs.djangoproject.com/en/dev/topics/auth/#storing-additional-information-about-usersMy model is the following :class UserProfile(models.Model): user = models.OneToOneField(User) quota = models.IntegerField(null = True)def create_user_profile(sender, instance, created, **kwargs): if created: UserProfile.objects.create(user=instance)post_save.connect(create_user_profile, sender=User)And my view for user registration is the following : def register(request): if request.method == 'POST': # If the form has been submitted... form = RegistrationForm(request.POST) # A form bound to the POST data if form.is_valid(): # All validation rules pass # Process the data in form.cleaned_data cd = form.cleaned_data #Then we create the user user = User.objects.create_user(cd['username'],cd["email"],cd["password1"]) user.get_profil().quota = 20 user.save() return HttpResponseRedirect('') else: form = RegistrationForm() # An unbound form return render(request, 'registration_form.html', {'form': form,})The line that launches an InternalError is :user = User.objects.create_user(cd['username'],cd["email"],cd["password1"])And the error is :InternalError at /register/current transaction is aborted, commands ignored until end of transaction blockThank you for your help
user = User.objects.create_user(username=form.cleaned_data['username'],
                                password=form.cleaned_data['password'],
                                email=form.cleaned_data['email'])
user.is_active = True
user.save()
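A sketch of how this could slot into the registration view from the question (assuming the post_save signal from the question's models.py has already created the related UserProfile; the password1 field name is taken from the question's form):
def register(request):
    if request.method == 'POST':
        form = RegistrationForm(request.POST)
        if form.is_valid():
            cd = form.cleaned_data
            user = User.objects.create_user(username=cd['username'],
                                            email=cd['email'],
                                            password=cd['password1'])
            user.is_active = True
            user.save()
            # the post_save signal created the profile, so fetch and update it
            profile = UserProfile.objects.get(user=user)
            profile.quota = 20
            profile.save()
            return HttpResponseRedirect('')
    else:
        form = RegistrationForm()
    return render(request, 'registration_form.html', {'form': form})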
`OSError: [Errno 9] Bad file descriptor` with socket wrapper on Windows I was writing a wrapper class for sockets so I could use it as a file-like object for piping into the stdin and stdout of a process created with subprocess.Popen().def do_task(): global s #The socket class sockIO(): def __init__(self, s):self.s=s def write(self, m): self.s.send(m) def read(self, n=None): return self.s.read() if n is None else self.s.read(n) def fileno(self): return self.s.fileno() #stdio=s.makefile('rw') stdio=sockIO(s) cmd = subprocess.Popen('cmd', shell=True, stdout=stdio, stderr=stdio, stdin=stdio)I didn't use socket.makefile() as it gives a io.UnsupportedOperation: fileno error, but with my present code I'm getting the following error on Windows (works fine on Linux):Traceback (most recent call last): File "C:\Users\admin\Desktop\Projects\Python3\client.py", line 65, in <module> main() File "C:\Users\admin\Desktop\Projects\Python3\client.py", line 62, in main receive_commands2() File "C:\Users\admin\Desktop\Projects\Python3\client.py", line 57, in receive_commands2 stdin=stdio) File "C:\Python3\lib\subprocess.py", line 914, in __init__ errread, errwrite) = self._get_handles(stdin, stdout, stderr) File "C:\Python3\lib\subprocess.py", line 1127, in _get_handles p2cread = msvcrt.get_osfhandle(stdin.fileno())OSError: [Errno 9] Bad file descriptor
According to the Python documentation about socket.fileno(), it is stated that this won't work in Windows. Quoting from Python Documentation: socket.fileno() Return the socket’s file descriptor (a small integer). This is useful with select.select(). Under Windows the small integer returned by this method cannot be used where a file descriptor can be used (such as os.fdopen()). Unix does not have this limitation.Note:The above code will work in Linux and other *nix systems.
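As a rough, portable workaround you could let subprocess create real pipes and shuttle the bytes between the socket and the process yourself — a sketch only, with error handling omitted; the helper name is just for illustration:
import subprocess
import threading

def attach_shell_to_socket(sock, cmd='cmd'):
    proc = subprocess.Popen(cmd, shell=True,
                            stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT)

    def socket_to_stdin():
        while True:
            data = sock.recv(4096)
            if not data:
                break
            proc.stdin.write(data)
            proc.stdin.flush()

    def stdout_to_socket():
        while True:
            data = proc.stdout.read(1)  # forward output as it arrives
            if not data:
                break
            sock.sendall(data)

    threading.Thread(target=socket_to_stdin, daemon=True).start()
    threading.Thread(target=stdout_to_socket, daemon=True).start()
    return proc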
Problems with IMAP in Gmail with Python I have a problem with IMAP in Python 2.7For testing purposes, I have created [email protected] with the password testing123testingI am following this tutorial and typed this into my Python Iteractive Shell: Python 2.7.2 (default, Jun 12 2011, 15:08:59) [MSC v.1500 32 bit (Intel)] on win32Type "copyright", "credits" or "license()" for more information.>>> import imaplibmail = imaplib.IMAP4_SSL('imap.gmail.com')mail.login('[email protected]', 'testing123testing')mail.list()# Out: list of "folders" aka labels in gmail.mail.select("inbox") # connect to inbox.>>> Nothing happens, not even error messages.Note: I have enabled IMAP in GmailThanks, -timUpdate: In response to this comment: Did you do the next section after the code you quoted above? – Amber I tried this: Python 2.7.2 (default, Jun 12 2011, 15:08:59) [MSC v.1500 32 bit (Intel)] on win32Type "copyright", "credits" or "license()" for more information.>>> import imaplibmail = imaplib.IMAP4_SSL('imap.gmail.com')mail.login('[email protected]', 'mypassword')mail.list()# Out: list of "folders" aka labels in gmail.mail.select("inbox") # connect to inbox.result, data = mail.search(None, "ALL")ids = data[0] # data is a list.id_list = ids.split() # ids is a space separated stringlatest_email_id = id_list[-1] # get the latestresult, data = mail.fetch(latest_email_id, "(RFC822)") # fetch the email body (RFC822) for the given IDraw_email = data[0] # here's the body, which is raw text of the whole email# including headers and alternate payloads>>>and it still did nothing
It seems to work for me; I created a sarnoldwashere folder via the python API:>>> mail.create("sarnoldwashere")('OK', ['Success'])>>> mail.list()('OK', ['(\\HasNoChildren) "/" "INBOX"','(\\HasNoChildren) "/" "Personal"','(\\HasNoChildren) "/" "Receipts"','(\\HasNoChildren) "/" "Travel"','(\\HasNoChildren) "/" "Work"','(\\Noselect \\HasChildren) "/" "[Gmail]"','(\\HasNoChildren) "/" "[Gmail]/All Mail"','(\\HasNoChildren) "/" "[Gmail]/Drafts"','(\\HasNoChildren) "/" "[Gmail]/Sent Mail"','(\\HasNoChildren) "/" "[Gmail]/Spam"','(\\HasNoChildren) "/" "[Gmail]/Starred"','(\\HasChildren \\HasNoChildren) "/" "[Gmail]/Trash"','(\\HasNoChildren) "/" "sarnoldwashere"'])>>> mail.logout()('BYE', ['LOGOUT Requested'])It ought to still be there in the web interface. (Unless someone else deletes it in the meantime.)Edit to include the full contents of the session, even including the boring bits where I re-learn The Way of Python:>>> import imaplib>>> mail = imaplib.IMAP4_SSL('imap.gmail.com')>>> mail.login('[email protected]', 'testing123testing')('OK', ['[email protected] .. .. authenticated (Success)'])>>> mail.list()('OK', ['(\\HasNoChildren) "/" "INBOX"', '(\\HasNoChildren) "/" "Personal"', '(\\HasNoChildren) "/" "Receipts"', '(\\HasNoChildren) "/" "Travel"', '(\\HasNoChildren) "/" "Work"', '(\\Noselect \\HasChildren) "/" "[Gmail]"', '(\\HasNoChildren) "/" "[Gmail]/All Mail"', '(\\HasNoChildren) "/" "[Gmail]/Drafts"', '(\\HasNoChildren) "/" "[Gmail]/Sent Mail"', '(\\HasNoChildren) "/" "[Gmail]/Spam"', '(\\HasNoChildren) "/" "[Gmail]/Starred"', '(\\HasChildren \\HasNoChildren) "/" "[Gmail]/Trash"'])>>> # Out: list of "folders" aka labels in gmail.... mail.select("inbox") # connect to inbox.('OK', ['3'])>>> mail.dir()Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/python2.6/imaplib.py", line 214, in __getattr__ raise AttributeError("Unknown IMAP4 command: '%s'" % attr)AttributeError: Unknown IMAP4 command: 'dir'>>> dir(mail)['PROTOCOL_VERSION', '_CRAM_MD5_AUTH', '__doc__', '__getattr__', '__init__', '__module__', '_append_untagged', '_check_bye', '_checkquote', '_cmd_log', '_cmd_log_idx', '_cmd_log_len', '_command', '_command_complete', '_dump_ur', '_get_line', '_get_response', '_get_tagged_response', '_log', '_match', '_mesg', '_new_tag', '_quote', '_simple_command', '_untagged_response', 'abort', 'append', 'authenticate', 'capabilities', 'capability', 'certfile', 'check', 'close', 'continuation_response', 'copy', 'create', 'debug', 'delete', 'deleteacl', 'error', 'expunge', 'fetch', 'getacl', 'getannotation', 'getquota', 'getquotaroot', 'host', 'is_readonly', 'keyfile', 'list', 'literal', 'login', 'login_cram_md5', 'logout', 'lsub', 'mo', 'mustquote', 'myrights', 'namespace', 'noop', 'open', 'partial', 'port', 'print_log', 'proxyauth', 'read', 'readline', 'readonly', 'recent', 'rename', 'response', 'search', 'select', 'send', 'setacl', 'setannotation', 'setquota', 'shutdown', 'sock', 'socket', 'sort', 'ssl', 'sslobj', 'state', 'status', 'store', 'subscribe', 'tagged_commands', 'tagnum', 'tagpre', 'tagre', 'thread', 'uid', 'unsubscribe', 'untagged_responses', 'welcome', 'xatom']>>> dir(mail).sort()>>> d=dir(mail)>>> d.sort()>>> d['PROTOCOL_VERSION', '_CRAM_MD5_AUTH', '__doc__', '__getattr__', '__init__', '__module__', '_append_untagged', '_check_bye', '_checkquote', '_cmd_log', '_cmd_log_idx', '_cmd_log_len', '_command', '_command_complete', '_dump_ur', '_get_line', '_get_response', '_get_tagged_response', '_log', '_match', '_mesg', '_new_tag', '_quote', 
'_simple_command', '_untagged_response', 'abort', 'append', 'authenticate', 'capabilities', 'capability', 'certfile', 'check', 'close', 'continuation_response', 'copy', 'create', 'debug', 'delete', 'deleteacl', 'error', 'expunge', 'fetch', 'getacl', 'getannotation', 'getquota', 'getquotaroot', 'host', 'is_readonly', 'keyfile', 'list', 'literal', 'login', 'login_cram_md5', 'logout', 'lsub', 'mo', 'mustquote', 'myrights', 'namespace', 'noop', 'open', 'partial', 'port', 'print_log', 'proxyauth', 'read', 'readline', 'readonly', 'recent', 'rename', 'response', 'search', 'select', 'send', 'setacl', 'setannotation', 'setquota', 'shutdown', 'sock', 'socket', 'sort', 'ssl', 'sslobj', 'state', 'status', 'store', 'subscribe', 'tagged_commands', 'tagnum', 'tagpre', 'tagre', 'thread', 'uid', 'unsubscribe', 'untagged_responses', 'welcome', 'xatom']>>> mail.list()('OK', ['(\\HasNoChildren) "/" "INBOX"', '(\\HasNoChildren) "/" "Personal"', '(\\HasNoChildren) "/" "Receipts"', '(\\HasNoChildren) "/" "Travel"', '(\\HasNoChildren) "/" "Work"', '(\\Noselect \\HasChildren) "/" "[Gmail]"', '(\\HasNoChildren) "/" "[Gmail]/All Mail"', '(\\HasNoChildren) "/" "[Gmail]/Drafts"', '(\\HasNoChildren) "/" "[Gmail]/Sent Mail"', '(\\HasNoChildren) "/" "[Gmail]/Spam"', '(\\HasNoChildren) "/" "[Gmail]/Starred"', '(\\HasChildren \\HasNoChildren) "/" "[Gmail]/Trash"'])>>> mail.select("INBOX") # connect to inbox.('OK', ['3'])>>> mail.list()('OK', ['(\\HasNoChildren) "/" "INBOX"', '(\\HasNoChildren) "/" "Personal"', '(\\HasNoChildren) "/" "Receipts"', '(\\HasNoChildren) "/" "Travel"', '(\\HasNoChildren) "/" "Work"', '(\\Noselect \\HasChildren) "/" "[Gmail]"', '(\\HasNoChildren) "/" "[Gmail]/All Mail"', '(\\HasNoChildren) "/" "[Gmail]/Drafts"', '(\\HasNoChildren) "/" "[Gmail]/Sent Mail"', '(\\HasNoChildren) "/" "[Gmail]/Spam"', '(\\HasNoChildren) "/" "[Gmail]/Starred"', '(\\HasChildren \\HasNoChildren) "/" "[Gmail]/Trash"'])>>> mail.list("INBOX")('OK', ['(\\HasNoChildren) "/" "INBOX"'])>>> mail.open("INBOX")Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/python2.6/imaplib.py", line 1149, in open self.sock = socket.create_connection((host, port)) File "/usr/lib/python2.6/socket.py", line 547, in create_connection for res in getaddrinfo(host, port, 0, SOCK_STREAM):socket.gaierror: [Errno -2] Name or service not known>>> mail.recent()('OK', ['0'])>>> mail.create("sarnoldwashere")('OK', ['Success'])>>> mail.list()('OK', ['(\\HasNoChildren) "/" "INBOX"', '(\\HasNoChildren) "/" "Personal"', '(\\HasNoChildren) "/" "Receipts"', '(\\HasNoChildren) "/" "Travel"', '(\\HasNoChildren) "/" "Work"', '(\\Noselect \\HasChildren) "/" "[Gmail]"', '(\\HasNoChildren) "/" "[Gmail]/All Mail"', '(\\HasNoChildren) "/" "[Gmail]/Drafts"', '(\\HasNoChildren) "/" "[Gmail]/Sent Mail"', '(\\HasNoChildren) "/" "[Gmail]/Spam"', '(\\HasNoChildren) "/" "[Gmail]/Starred"', '(\\HasChildren \\HasNoChildren) "/" "[Gmail]/Trash"', '(\\HasNoChildren) "/" "sarnoldwashere"'])>>> mail.logout()('BYE', ['LOGOUT Requested'])>>>
How to get the raw JSON response of a HTTP request from `driver.page_source` in Selenium webdriver Firefox If I browse to https://httpbin.org/headers I expect to get the following JSON response:{ "headers": { "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8", "Accept-Encoding": "gzip, deflate, br", "Accept-Language": "en-US,en;q=0.5", "Connection": "close", "Host": "httpbin.org", "Upgrade-Insecure-Requests": "1", "User-Agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:64.0) Gecko/20100101 Firefox/64.0" }}However, if I use Seleniumfrom selenium import webdriverfrom selenium.webdriver.firefox.options import Optionsoptions = Options()options.headless = Truedriver = webdriver.Firefox(options=options)url = 'https://httpbin.org/headers'driver.get(url)print(driver.page_source)driver.close()I get<html platform="linux" class="theme-light" dir="ltr"><head><meta http-equiv="Content-Security-Policy" content="default-src 'none' ; script-src resource:; "><link rel="stylesheet" type="text/css" href="resource://devtools-client-jsonview/css/main.css"><script type="text/javascript" charset="utf-8" async="" data-requirecontext="_" data-requiremodule="viewer-config" src="resource://devtools-client-jsonview/viewer-config.js"></script><script type="text/javascript" charset="utf-8" async="" data-requirecontext="_" data-requiremodule="json-viewer" src="resource://devtools-client-jsonview/json-viewer.js"></script></head><body><div id="content"><div id="json">{ "headers": { "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8", "Accept-Encoding": "gzip, deflate, br", "Accept-Language": "en-US,en;q=0.5", "Connection": "close", "Host": "httpbin.org", "Upgrade-Insecure-Requests": "1", "User-Agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:64.0) Gecko/20100101 Firefox/64.0" }}</div></div><script src="resource://devtools-client-jsonview/lib/require.js" data-main="resource://devtools-client-jsonview/viewer-config.js"></script></body></html>Where do the HTML tags come from? How do I get the raw JSON response of a HTTP request from driver.page_source?
use the "view-source:" parameter in your urlSimple Mode:example:url = 'view-source:https://httpbin.org/headers'driver.get(url)content = driver.page_sourceprint(content)output:'<html><head><meta name="viewport" content="width=device-width"><title>https://httpbin.org/headers</title><link rel="stylesheet" type="text/css" href="resource://content-accessible/viewsource.css"></head><body id="viewsource" class="highlight" style="-moz-tab-size: 4"><pre>{\n "headers": {\n "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8", \n "Accept-Encoding": "gzip, deflate, br", \n "Accept-Language": "en-US,en;q=0.5", \n "Host": "httpbin.org", \n "Upgrade-Insecure-Requests": "1", \n "User-Agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:67.0) Gecko/20100101 Firefox/67.0"\n }\n}\n</pre></body></html>'Best Mode: (for JSON)example:url = 'view-source:https://httpbin.org/headers'driver.get(url)content = driver.page_sourcecontent = driver.find_element_by_tag_name('pre').textparsed_json = json.loads(content)print(parsed_json)output:{'headers': {'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8', 'Accept-Encoding': 'gzip, deflate, br', 'Accept-Language': 'en-US,en;q=0.5', 'Host': 'httpbin.org', 'Upgrade-Insecure-Requests': '1', 'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:67.0) Gecko/20100101 Firefox/67.0'}}
How to programmatically fill in RGB noise in the transparent region of an image using Python? I need to process a lot of images using Python. All these images have some transparent region (alpha channel) of different sizes.I need to programmatically fill in RGB noise in the transparent region of those images, but keep the non-transparent region unchanged. This is an example of changing the images.How to do this programmatically in Python?
In my opinion you need to:
Create an image (a NumPy array / OpenCV Mat) that contains Gaussian noise (or whatever kind of noise you need to add to the images).
For each image, copy that noise into another array based on the alpha channel (used as a mask).
Combine the original image with the masked noise, so the noise only fills the transparent region and the rest stays unchanged.
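In Python that idea might look roughly like this, using OpenCV and NumPy — a sketch only; the file names are placeholders and uniform random noise is used instead of Gaussian for brevity:
import cv2
import numpy as np

# load the image including its alpha channel, shape (h, w, 4)
img = cv2.imread('input.png', cv2.IMREAD_UNCHANGED)
bgr, alpha = img[:, :, :3], img[:, :, 3]

# generate random RGB noise of the same size
noise = np.random.randint(0, 256, bgr.shape, dtype=np.uint8)

# keep original pixels where alpha > 0, use noise where fully transparent
mask = (alpha == 0)[:, :, np.newaxis]
result = np.where(mask, noise, bgr)

# make the result fully opaque and save it
out = np.dstack([result, np.full_like(alpha, 255)])
cv2.imwrite('output.png', out)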
ValueError: unknown url type: h I wrote an application in python to download the file at a specified hour but I received ValueError: unknown url type: h Errorthis is my codeimport time,os,urllib2coun=input("Enter count of the movies:")x=0namelist=[]addresslist=[]os.chdir('D:\\')while(coun > x): name=raw_input("Enter the name of movie:") namelist.append(name) address=raw_input("enter the address of %s:"%(name)) addresslist.append(address) x=x+1ti= time.localtime().tm_hourprint('it\'s wating...')while(ti!=11): ti= time.localtime().tm_hour timi=time.localtime().tm_min tisec=time.localtime().tm_sec if (ti==3 & timi==59 & tisec==59): print('it\'s 3')print('it\'s your time.let start downloating')x=0while(coun > x): data=urllib2.urlopen(address[x]) file=open(namelist[x],'wb') file.write(data) file.close() x=x+1And when I run it and answer the questions that return to me this Error:Traceback (most recent call last): File "tidopy.py", line 24, in <module> data=urllib2.urlopen(address[x]) File "C:\Python27\lib\urllib2.py", line 154, in urlopen return opener.open(url, data, timeout) File "C:\Python27\lib\urllib2.py", line 421, in open protocol = req.get_type() File "C:\Python27\lib\urllib2.py", line 283, in get_type raise ValueError, "unknown url type: %s" % self.__originalValueError: unknown url type: hHow can I fix it? please help
This line:data=urllib2.urlopen(address[x])Should most likely be this:data=urllib2.urlopen(addresslist[x])You want the element of the list addresslist, not the first character of the string address.
Using Pycharm I don't get any output once code is run, just Process finished with exit code 0 So pretty much, I have a directory that contains many of 2 types of text files, a bunch of files that begin with "summary_" and a bunch of files that begin with "log". All these text are in the same directory. For now I don't care about the "log" text files, I only care about the files that start with "summary".Each "summary" file contains either 7 lines of text or 14 lines of text.At the end of each line it will say either PASS or FAIL depending on the test result. For the test result to be passing all 7 or 14 lines have to say "PASS" at the end. If one of those lines have just one "FAIL" in it, The test has failed. I want to count the number of passes and failures.import osimport globdef pass_or_fail_counter(): pass_count = 0 fail_count = 0 os.chdir("/home/dario/Documents/Log_Test") print("Working directory is ") + os.getcwd() data = open('*.txt').read() count = data.count('PASS') if count == 7 or 14: pass_count = pass_count + 1 else: fail_count = fail_count + 1 print(pass_count) print(fail_count) f.close()pass_or_fail_counter()
I don't know about within Pycharm, but the following seems to work without it:import osimport globdef pass_or_fail_counter(logdir): pass_count, fail_count = 0, 0 for filename in glob.iglob(os.path.join(logdir, '*.txt')): with open(filename, 'rt') as file: lines = file.read().splitlines() if len(lines) in {7, 14}: # right length? if "PASS" in lines[-1]: # check last line for result pass_count += 1 else: fail_count += 1 print(pass_count, fail_count)pass_or_fail_counter("/home/dario/Documents/Log_Test")
Why or-combined != sometimes does not behave as expected So I was trying to create a tic tac toe game and I ran into a problem with one of my method where I could not figure out why it was going on an infinite loop. My code is:def player_input(): marker = '' while marker != 'X' or marker != 'O': marker = input('Do you want to be X or O: ').upper() print(marker) if marker == 'X': return ['X','O'] return ['O','X']What it is currently doing is that it keeps asking the question even when the user inputs X or O. The code works when I use the condition:while not (marker == 'X' or marker == 'O'):
The problem is your logic in checking marker != 'X' or marker != 'O'.Let's pretend marker == 'X'. So our expression evaluates to False or True which evaluates to True. Same goes with marker == 'O'. Our expression here evaluates to True or False which evaluates to True.You should be using and, not or.Your second expression, not (marker == 'X' or marker == 'O') is equivalent to (not marker == 'X') and (not marker == 'O'), so it works. (De Morgan's laws)def player_input(): marker = '' while marker != 'X' and marker != 'O': # change from 'or' to 'and' marker = input('Do you want to be X or O: ').upper() print(marker) if marker == 'X': return ['X','O'] return ['O','X']
Assign variable names to different files in python I am trying to open various text files simulataneously in python. I want to assign a unique name to each file. I have tried the follwoing but it is not working:for a in [1,2,11]: "DOS%s"%a=open("DOS%s"%a,"r")Instead I get this error:SyntaxError: can't assign to operatorWhat is the correct way to do this?
You can't assign to the result of an expression like "DOS%s" % a — the left-hand side of an assignment has to be a plain name (or an attribute/subscript), which is why you get the SyntaxError. Instead, store the file objects in a dictionary keyed by the number:
files = {f: open("DOS%s" % f) for f in [1, 2, 11]}
then access files like:
files[1].read()
In Pandas How to sort one level of a multi-index based on the values of a column, while maintaining the grouping of the other level I'm taking a Data Mining course at university right now, but I'm a wee bit stuck on a multi-index sorting problem. The actual data involves about 1 million reviews of movies, and I'm trying to analyze that based on American zip codes, but to test out how to do what I want, I've been using a much smaller data set of 250 randomly generated ratings for 10 movies and instead of zip codes, I'm using age groups.So this is what I have right now, it's a multiindexed DataFrame in Pandas with two levels, 'group' and 'title' ratinggroup title Alien 4.000000 Argo 2.166667Adults Ben-Hur 3.666667 Gandhi 3.200000 ... ... Alien 3.000000 Argo 3.750000Coeds Ben-Hur 3.000000 Gandhi 2.833333 ... ... Alien 2.500000 Argo 2.750000Kids Ben-Hur 3.000000 Gandhi 3.200000 ... ...What I'm aiming for is to sort the titles based on their rating within the group (and only show the most popular 5 or so titles within each group) So something like this (but I'm only going to show two titles in each group): ratinggroup title Alien 4.000000Adults Ben-Hur 3.666667 Argo 3.750000Coeds Alien 3.000000 Gandhi 3.200000Kids Ben-Hur 3.000000Anyone know how to do this? I've tried sort_order, sort_index, etc and swapping the levels, but they mix up the groups too. So it then looks like: ratinggroup title Adults Alien 4.000000Coeds Argo 3.750000Adults Ben-Hur 3.666667Kids Gandhi 3.666667Coeds Alien 3.000000Kids Ben-Hur 3.000000I'm kind of looking for something like this: Multi-Index Sorting in Pandas, but instead of sorting based on another level, I want to sort based on the values. Kind of like if that person wanted to sort based on his sales column.Thanks!
You're looking for sort:In [11]: s = pd.Series([3, 1, 2], [[1, 1, 2], [1, 3, 1]])In [12]: s.sort()In [13]: sOut[13]: 1 3 12 1 21 1 3dtype: int64Note; this works inplace (i.e. modifies s), to return a copy use order:In [14]: s.order()Out[14]: 1 3 12 1 21 1 3dtype: int64Update: I realised what you were actually asking, and I think this ought to be an option in sortlevels, but for now I think you have to reset_index, groupby and apply:In [21]: s.reset_index(name='s').groupby('level_0').apply(lambda s: s.sort('s')).set_index(['level_0', 'level_1'])['s']Out[21]: level_0 level_11 3 1 1 32 1 2Name: 0, dtype: int64Note: you can set the level names to [None, None] afterwards.
Calculate numpy.std of each pandas.DataFrame's column? I want to get the numpy.std of each column of my pandas.DataFrame.Here is my code:import pandas as pdimport numpy as npprices = pd.DataFrame([[-0.33333333, -0.25343423, -0.1666666667], [+0.23432323, +0.14285714, -0.0769230769], [+0.42857143, +0.07692308, +0.1818181818]])print(pd.DataFrame(prices.std(axis=0)))Here is my code's output:pd.DataFrame([[ 0.39590933], [ 0.21234018], [ 0.1809432 ]])And here is the right output (if calculate with np.std)pd.DataFrame([[ 0.32325862], [ 0.17337503], [ 0.1477395 ]])Why am I having such difference?How can I fix that?NOTE: I have tried to do this way:print(np.std(prices, axis=0))But I had the following error:Traceback (most recent call last): File "C:\Users\*****\Documents\******\******\****.py", line 10, in <module> print(np.std(prices, axis=0)) File "C:\Python33\lib\site-packages\numpy\core\fromnumeric.py", line 2812, in std return std(axis=axis, dtype=dtype, out=out, ddof=ddof)TypeError: std() got an unexpected keyword argument 'dtype'Thank you!
They're both right: they just differ on what the default delta degrees of freedom is. np.std uses 0, and DataFrame.std uses 1:>>> prices.std(axis=0, ddof=0)0 0.3232591 0.1733752 0.147740dtype: float64>>> prices.std(axis=0, ddof=1)0 0.3959091 0.2123402 0.180943dtype: float64>>> np.std(prices.values, axis=0, ddof=0)array([ 0.32325862, 0.17337503, 0.1477395 ])>>> np.std(prices.values, axis=0, ddof=1)array([ 0.39590933, 0.21234018, 0.1809432 ])
python convert sorted Double linked list to BST The idea is quite similar to most of others, do a inorder in place traversal treat left as prevNode and right as nextNodefor Some reason it just cannot work.. seems not running recursion?I tested my DoubleLinked list is contructed correctly by printing preNode and nextNodebut is still say 'NoneType' object has no attribute 'val'The problem is on buildTreeAny one plz helpclass LinkNode(object): def __init__(self, val): self.val = val self.nextNode = None self.prevNode = Nonedef buildLinkList(arr): dummy = head = LinkNode(None) dummy.nextNode = head for val in arr: new_node = LinkNode(val) new_node.prevNode = head head.nextNode = new_node head = head.nextNode return dummy.nextNodedef printLink(head): while head: print head.val if not head.nextNode: #print head.val return head head = head.nextNodedef buildTree(head, n): if n <= 0: return None left = buildTree(head, n / 2) print head.val root = head root.prevNode = left head = head.nextNode root.nextNode = buildTree(head, n - n / 2 - 1) return rootdef inorder(root): if root: inorder(root.prevNode) print root.val inorder(root.nextNode) arr = [1, 2, 3, 4, 5, 6, 7]head = buildLinkList(arr)#print head.valroot = buildTree(head, 7)
head is not updated across the recursive calls, so here head should be a global variable (similar to how a double pointer would be used in C). Update your buildTree function like this:
def buildTree(n):
    global head
    if n <= 0:
        return None
    left = buildTree(n / 2)
    root = head
    root.prevNode = left
    head = head.nextNode
    root.nextNode = buildTree(n - n / 2 - 1)
    return root
passing a variable to urlopen() and reading it again in python using bs4 I am planning to open a bunch of links where the only thing changing is the year at the end of the links. I am using the code below but it is returning a bunch of errors. My aim is to open that link and filter some things on the page but first I need to open all the pages so I have the test code. Code below:from xlwt import *from urllib.request import urlopenfrom bs4 import BeautifulSoup, SoupStrainerfrom xlwt.Style import *j=2014for j in range(2015): conv=str(j) content = urlopen("http://en.wikipedia.org/wiki/List_of_Telugu_films_of_%s").read() %conv j+=1print(content)Errors:Traceback (most recent call last): File "F:\urltest.py", line 11, in <module> content = urlopen("http://en.wikipedia.org/wiki/List_of_Telugu_films_of_%s").read() %conv File "C:\Python34\lib\urllib\request.py", line 161, in urlopen return opener.open(url, data, timeout) File "C:\Python34\lib\urllib\request.py", line 469, in open response = meth(req, response) File "C:\Python34\lib\urllib\request.py", line 579, in http_response 'http', request, response, code, msg, hdrs) File "C:\Python34\lib\urllib\request.py", line 507, in error return self._call_chain(*args) File "C:\Python34\lib\urllib\request.py", line 441, in _call_chain result = func(*args) File "C:\Python34\lib\urllib\request.py", line 587, in http_error_default raise HTTPError(req.full_url, code, msg, hdrs, fp)urllib.error.HTTPError: HTTP Error 400: Bad RequestA little guidance required. If there is any other way to pass the variables[2014, 2015 etc] also it would be great.
That may be because you are declaring j and then modifying it at the end of your loop. range() already does this for you so you don't have to increment it. Also, your string interpolation syntax looks wrong. Be sure to include the variable immediately after the string. print("Hi %s!" % name).Try:for j in range(2015): conv=str(j) content = urlopen("http://en.wikipedia.org/wiki/List_of_Telugu_films_of_%s" % conv).read()Also, I am assuming you don't want to query from years 0 to 2015. You can call range(start_year, end_year) to iterate from [start_year, end_year).
Pandas UDF in pyspark I am trying to fill a series of observation on a spark dataframe. Basically I have a list of days and I should create the missing one for each group.In pandas there is the reindex function, which is not available in pyspark.I tried to implement a pandas UDF:@pandas_udf(schema, functionType=PandasUDFType.GROUPED_MAP)def reindex_by_date(df): df = df.set_index('dates') dates = pd.date_range(df.index.min(), df.index.max()) return df.reindex(dates, fill_value=0).ffill()This looks like should do what I need, however it fails with this messageAttributeError: Can only use .dt accessor with datetimelike values. What am I doing wrong here?Here the full code:data = spark.createDataFrame( [(1, "2020-01-01", 0), (1, "2020-01-03", 42), (2, "2020-01-01", -1), (2, "2020-01-03", -2)], ('id', 'dates', 'value'))data = data.withColumn('dates', col('dates').cast("date"))schema = StructType([ StructField('id', IntegerType()), StructField('dates', DateType()), StructField('value', DoubleType())])@pandas_udf(schema, functionType=PandasUDFType.GROUPED_MAP)def reindex_by_date(df): df = df.set_index('dates') dates = pd.date_range(df.index.min(), df.index.max()) return df.reindex(dates, fill_value=0).ffill()data = data.groupby('id').apply(reindex_by_date)Ideally I would like something like this:+---+----------+-----+ | id| dates|value|+---+----------+-----+| 1|2020-01-01| 0|| 1|2020-01-02| 0|| 1|2020-01-03| 42|| 2|2020-01-01| -1|| 2|2020-01-02| 0|| 2|2020-01-03| -2|+---+----------+-----+
Case 1: Each ID has an individual date range.I would try to reduce the content of the udf as much as possible. In this case I would only calculate the date range per ID in the udf. For the other parts I would use Spark native functions.from pyspark.sql import types as Tfrom pyspark.sql import functions as F# Get min and max date per IDdate_ranges = data.groupby('id').agg(F.min('dates').alias('date_min'), F.max('dates').alias('date_max'))# Calculate the date range for each [email protected](returnType=T.ArrayType(T.DateType()))def get_date_range(date_min, date_max): return [t.date() for t in list(pd.date_range(date_min, date_max))]# To get one row per potential date, we need to explode the UDF outputdate_ranges = date_ranges.withColumn( 'dates', F.explode(get_date_range(F.col('date_min'), F.col('date_max'))))date_ranges = date_ranges.drop('date_min', 'date_max')# Add the value for existing entries and add 0 for othersresult = date_ranges.join( data, ['id', 'dates'], 'left')result = result.fillna({'value': 0})Case 2: All ids have the same date rangeI think there is no need to use a UDF here. What you want to can be archived in a different way: First, you get all possible IDs and all necessary dates. Second, you crossJoin them, which will provide you with all possible combinations. Third, left join the original data onto the combinations. Fourth, replace the occurred null values with 0.# Get all unique idsids_df = data.select('id').distinct()# Get the date seriesdate_min, date_max = data.agg(F.min('dates'), F.max('dates')).collect()[0]dates = [[t.date()] for t in list(pd.date_range(date_min, date_max))]dates_df = spark.createDataFrame(data=dates, schema="dates:date")# Calculate all combinationsall_comdinations = ids_df.crossJoin(dates_df)# Add the value columnresult = all_comdinations.join( data, ['id', 'dates'], 'left')# Replace all null values with 0result = result.fillna({'value': 0})Please be aware of the following limitiations with this solution:crossJoins can be quite costly. One potential solution to cope with the issue can be found in this related question.The collect statement and use of Pandas results in a not perfectly parallelised Spark transformation.[EDIT] Split into two cases as I first thought all IDs have the same date range.
Python Selenium window closing no matter what I don't really like to ask questions but I just can't find out what is wrong with my code. I'm new to selenium so please excuse me if it's something obvious.from selenium import webdriverfrom selenium.webdriver.chrome.service import Servicefrom webdriver_manager.chrome import ChromeDriverManagerfrom selenium.webdriver.chrome.options import Optionschrome_options = Options()chrome_options.add_experimental_option("detach", True)s=Service(ChromeDriverManager().install())driver = webdriver.Chrome(options=chrome_options, service=s)driver.maximize_window()driver.get('https://www.youtube.com')This code works, and opens up youtube successfully, however, the window will close shortly after opening. To combat this, I added the 'detach True' option into the code as shown above (Python selenium keep browser open), however, this hasn't worked and the window will close a few seconds after opening. There was also this error showing when I ran the code.[17708:21796:0720/212826.842:ERROR:device_event_log_impl.cc(214)] [21:28:26.841] USB: usb_device_handle_win.cc:1048 Failed to read descriptor from node connection: A device attached to the system is not functioning. (0x1F)I looked at other people on SO who had this issue, but all the resources said to ignore it and that it shouldn't affect the running of the program. To stop the error message from popping up I put this line into my code.chrome_options.add_experimental_option('excludeSwitches', ['enable-logging'])This stopped the error from showing up but didn't stop the window from closing.Any help is appreciated, I'm running the most recent version of VS on windows 10.
After your test case finishes running, it will close the browser no matter what. In your case browser will be closed as soon as you navigate to youtube. You don't have anything else and your test case is finished as soon as you navigate to youtube.But, if you would like to observe more and stay on youtube once you navigate, you can add wait time so it doesn't close as soon as it navigates to youtube.Try adding this line below so that it will wait for 10 seconds.time.sleep(10)
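Remember to import the module first if it isn't already — for example:
import time

time.sleep(10)  # keep the browser window open for 10 seconds before the script exits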
SQLAlchemy db.create_all() doesn't create the tables when my models are defined in a separate file I'm somewhat new to Flask and I'm having a problem.I have an app.py whereby I instantiated my Flask app and SQLAlchemy. (Code shown below):from flask import Flaskfrom flask_sqlalchemy import SQLAlchemyimport osbasedir = os.path.abspath(os.path.dirname(__file__))app = Flask(__name__)app.config['SECRET_KEY'] = 'I5aE2js75KeZHVx88qAm4gPHnDvM7lSD'app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = Falseapp.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///' + os.path.join(basedir, 'data.db')db = SQLAlchemy(app)Then on a separate file (models.py) I imported the db and used it as follows:from app import dbclass User(db.Model): id = db.Column(db.Integer, primary_key=True) username = db.Column(db.String(100), nullable=False) def __repr__(self): return f'<User {self.username}>'The problem is when i use db.create_all(), the tables defined within my models.py file don't get created. I copied the classes over onto my app.py and they got created.I dont know what I am doing wrong. Any help is greatly appreciated!
I like to put my db initialization into a function:models.pyfrom flask_sqlalchemy import SQLAlchemydb = SQLAlchemy()class User(db.Model): id = db.Column(db.Integer, primary_key=True) username = db.Column(db.String(100), nullable=False) def __repr__(self): return f'<User {self.username}>'app.pyfrom flask import Flaskfrom flask_sqlalchemy import SQLAlchemyimport osfrom models import dbdef initialize_db(app): app.app_context().push() db.init_app(app) db.create_all() db.session.commit()basedir = os.path.abspath(os.path.dirname(__file__))app = Flask(__name__)app.config['SECRET_KEY'] = 'I5aE2js75KeZHVx88qAm4gPHnDvM7lSD'app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = Falseapp.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///' + os.path.join(basedir, 'data.db')initialize_db(app)
How Check ManyRelatedManager is None -models.pyclass Role: name = models.CharFiled(max_length=100) comp = models.ForeignKey(Company, models.CASCADE, related_name="comp_roles") users = models.ManyToManyField(User, related_name='user_roles', related_query_name='user_role', through='UserRole')class UserRole(models.Model): user = models.ForeignKey(User, on_delete=models.CASCADE) role = models.ForeignKey(Role, on_delete=models.CASCADE)based on certain conditions I have two queries-in api_views.py:1-Role.objects.filter(comp_id=self.comp_id).prefetch_related('users')2-Role.objects.filter(comp_id=self.comp_id)-RoleSerializer.pyclass RoleSerializer(serializers.ModelSerializer): users = serializers.SerializerMethodField() def get_users(self, obj): #users: ManyRelatedManager = obj.users logger.info(f"this is obj.users: {obj.users}") logger.info(f"this is n obj.users: {obj.users == None}") # hit the db: # if not obj.users.all(): # if obj.users.exists() == None: # no hit the db: # if obj.users.all() == None: # if obj.users == None: if obj.users.all() == None: logger.info(f"obj.users is None!") return None logger.info(f"obj.users is not None!") serializer = UserReadSerializer(obj.users.all(), many=True) return serializer.data Either obj.users == None log or obj.users.all() == None codition are always false!My question is how can I find out obj.users or obj.users.all() (in RoleSerializer/get_users) is None?so I can decide to return whether None or UserReadSerializer data.
obj.users.all() returns a queryset, so comparing it against None will not work. Instead you can use the line below to get the count of entries and base your other operations on it:
obj.users.count()
If no users are present in the database for the model, it will return 0.
Edit: Saw one more answer posted for this question now.
obj.users.exists()
Using exists() is more efficient than getting the count.
Edit: Adding complete code.
Views.py
class RoleModelViewset(ModelViewSet):
    serializer_class = RoleSerializer
    queryset = Role.objects.filter(comp_id=self.comp_id).prefetch_related('users')
Serializer.py
class RoleSerializer(serializers.ModelSerializer):
    users = serializers.SerializerMethodField()

    def get_users(self, obj):
        if not obj.users.exists():
            logger.info(f"obj.users is None!")
            return None
        logger.info(f"obj.users is not None!")
        serializer = UserReadSerializer(obj.users.all(), many=True)
        return serializer.data
Formatting paragraph text in HTML as single line I have tried to extract text from html page using traditional beautiful soup method. I have followed the code from another SO answer.import urllibfrom bs4 import BeautifulSoupurl = "http://orizon-inc.com/about.aspx"html = urllib.urlopen(url).read()soup = BeautifulSoup(html)# kill all script and style elementsfor script in soup(["script", "style"]): script.extract() # rip it out# get texttext = soup.get_text()# break into lines and remove leading and trailing space on eachlines = (line.strip() for line in text.splitlines())# break multi-headlines into a line eachchunks = (phrase.strip() for line in lines for phrase in line.split(" "))# drop blank linestext = '\n'.join(chunk for chunk in chunks if chunk)print(text)I am able to extract text using this correctly for most of the pages. But I there occurs new line between the words in the paragraph for some particular pages like the one I've mentioned.result:\nAt Orizon, we use our extensive consulting, management, technology and\nengineering capabilities to design, develop,\ntest, deploy, and sustain business and mission-critical solutions to government\nclients worldwide.\nBy using proven management and technology deployment\npractices, we enable our clients to respond faster to opportunities,\nachieve more from their operations, and ultimately exceed\ntheir mission requirements.\nWhere\nconverge\nTechnology & Innovation\n© Copyright 2019 Orizon Inc., All Rights Reserved.\n>'In the result there occurs a new line between technology and\nengineering, develop,\ntest,etc.These are all the text inside the same paragraph. If we view it in html source code it is correct:<p> At Orizon, we use our extensive consulting, management, technology and engineering capabilities to design, develop, test, deploy, and sustain business and mission-critical solutions to government clients worldwide. </p> <p> By using proven management and technology deployment practices, we enable our clients to respond faster to opportunities, achieve more from their operations, and ultimately exceed their mission requirements. </p>What is the reason for this? and how can I extract it accurately?
Instead of splitting the text per line, you should be splitting the text per HTML tag, since for each paragraph and title, you want the text inside to be stripped of line breaks.You can do that by iterating over all elements of interest (I included p, h2 and h1 but you can extend the list), and for each element, strip it of any newlines, then append a newline to the end of the element to create a line break before the next element.Here's a working implementation:import urllib.requestfrom bs4 import BeautifulSoupurl = "http://orizon-inc.com/about.aspx"html = urllib.request.urlopen(url).read()soup = BeautifulSoup(html,'html.parser')# kill all script and style elementsfor script in soup(["script", "style"]): script.extract() # rip it out# put text inside paragraphs and titles on a single linefor p in soup(['h1','h2','p']): p.string = " ".join(p.text.split()) + '\n'text = soup.text# remove duplicate newlines in the texttext = '\n\n'.join(x for x in text.splitlines() if x.strip())print(text)Output sample:loginAbout UsAt Orizon, we use our extensive consulting, management, technology and engineering capabilities to design, develop, test, deploy, and sustain business and mission-critical solutions to government clients worldwide.By using proven management and technology deployment practices, we enable our clients to respond faster to opportunities, achieve more from their operations, and ultimately exceed their mission requirements.If you don't want a gap between paragraphs/titles, use:text = '\n'.join(x for x in text.splitlines() if x.strip())
How can a webcam be accessed using opencv with python? This is the code that I currently have:import cv2, timevideo=cv2.VideoCapture(0)check, frame=video.read()print(check)print(frame)cv2.imshow("Capturing", frame)cv2.waitkey(0)video.release()The code is showing a syntax error.The error:FalseNoneTraceback (most recent call last): File "C:/Users/Stagiair/Desktop/aaa.py", line 9, in <module> cv2.imshow("Capturing", frame)cv2.error: OpenCV(4.4.0) C:\Users\appveyor\AppData\Local\Temp\1\pip-req-build-95hbg2jt\opencv\modules\highgui\src\window.cpp:376: error: (-215:Assertion failed) size.width>0 && size.height>0 in function 'cv::imshow'
Your example is showing that video.read() is returning (False, None). This is telling you that the read failed. The error you're seeing is the result of passing None to cv2.imshow(). On one of my laptops, it takes a few attempts for the video to 'warm up' and start returning images. Arranging to ignore some number of (False, None) returns is usually sufficient.
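A minimal sketch of that retry idea, based on the question's code (the retry count of 50 is arbitrary; note also that the method is cv2.waitKey, not cv2.waitkey):
import cv2

video = cv2.VideoCapture(0)

# give the camera several attempts to start delivering frames
frame = None
for _ in range(50):
    check, frame = video.read()
    if check:
        break

if frame is not None:
    cv2.imshow("Capturing", frame)
    cv2.waitKey(0)

video.release()
cv2.destroyAllWindows()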
Draw a turtle path without Turtle module I'm trying to code a program that draws the path of a 'turtle' when given a string. I can't use the turtle module. We suppose the turtle starts at (0,0) and points toward y.Here are the 4 possible caracters:S: Go forward 1 in the current direction;R: Turns right 90 degrees;L: Turns left 90 degrees;T: Disables displacement tracking if it is currently active, otherwise enables it.For example, a path could be: SSSRSSLSTSSTI see two ways to approch this problem. Either the turtle is always moving straight in a plane that rotates. Either the particle can 'recognize' where it is actually pointing, and then move left and right.In both situations, I'm stuck.Here is the 'code' I did:import matplotlib.pyplot as pltpathUser=input('Write a path') #User enter a pathpath=list(pathUser) #Convert string to a matrixx=0y=0for p in path: #Check the letters one-by-one if p == "S": y=y+1 #Moves straight plt.plot(x,y,'k^') elif p == "R": elif p == "L": elif p == "T": plt.show()Is it a good start? What I can do it rotates the point, but not the axis.Can someone could help me to figure out what to put in R and L parts?Thank you in advance for your time and your help.
For turtle graphics, we need to store and update both direction and positions.Here a simple way:initialize position and directionif S: change position acc. to the directionif R: change direction by -90 degreesif L: change direction by +90 degreesHere's sample code:import matplotlib.pyplot as pltdirection = 90track = Truemove_dir = { 0: [ 1, 0], 90: [ 0, 1], 180: [-1, 0], 270: [ 0, -1]}x, y = [0, 0]prev_x, prev_y = x, ypath = input('write a path: \n>>>')for p in path: if p == 'S': prev_x, prev_y = x, y x += move_dir[direction][0] y += move_dir[direction][1] if track: plt.plot([prev_x, x], [prev_y, y], color='red', marker='.', markersize=10) elif p == 'L': direction = (direction + 90) % 360 elif p == 'R': if direction == 0: direction = 270 else: direction = direction - 90 else: track = not trackplt.grid()plt.show()Sample test case:write a path:>>>SSSRSSLSTSSTSoutputs:
What Does Tensorflow.js Use fbjs For? I was looking over the Tensorflow.js dependencies and noticed that fbjs is included in the dependency list. What functionality requires fbjs? I'm not familiar with the package, but I'm aware that it is a Facebook JavaScript package. It just seems a little strange to me, but as I said, I don't know much about fbjs so maybe there's something useful in the context of Tensorflow.js.
The Facebook JavaScript package, fbjs, is used in Tensorflow.js for the tfjs-vis package; fbjs contains several visual tools that tfjs-vis relies on. Since Facebook is best known as a social media platform, the dependency seemed odd to me at the time.
Django - set ForeignKey deferrable foreign key constraint in SQLite3 I seem to be stuck with creating an initialy deferrable foreign key relationship between two models in Django and using SQLite3 as my backend storage. Consider this simple example. This is what models.py looks like:from django.db import modelsclass Investigator(models.Model): name = models.CharField(max_length=250) email = models.CharField(max_length=250)class Project(models.Model): name = models.CharField(max_length=250) investigator = models.ForeignKey(Investigator)And this is what the output from sqlall looks like:BEGIN;CREATE TABLE "moo_investigator" ( "id" integer NOT NULL PRIMARY KEY, "name" varchar(250) NOT NULL, "email" varchar(250) NOT NULL);CREATE TABLE "moo_project" ( "id" integer NOT NULL PRIMARY KEY, "name" varchar(250) NOT NULL, "investigator_id" integer NOT NULL REFERENCES "moo_investigator" ("id"));CREATE INDEX "moo_project_a7e50be7" ON "moo_project" ("investigator_id");COMMIT;"DEFERRABLE INITIALLY DEFERRED" is missing from the *investigator_id* column in the project table. What am I doing wrong?p.s. I am new to Python and Django - using Python version 2.6.1 Django version 1.4 and SQLite version 3.6.12
This behavior is now the default. See https://github.com/django/django/blob/803840abf7dcb6ac190f021a971f1e3dc8f6792a/django/db/backends/sqlite3/schema.py#L16
How do I create a text file in jupyter notebook, from the output from my code How can I create and append the output of my code to a txt file? I have a lot of code.This is just a small example of what I'm trying to do:def output_txt(): def blop(a,b):ans = a + b print(ans) blop(2,3) x = 'Cool Stuff' print(x)def main(x): f = open('demo.txt', 'a+') f.write(str(output_txt)) f.close main(x)f = open('demo.txt', 'r')contents = f.read()print(contents)But the output gives me this:cool bananas<function output_txt at 0x0000017AA66F0F78><function output_txt at 0x0000017AA66F0B88><function output_txt at 0x0000017AA66F0948>
If you just want to save the output of your code to a text file, you can add this to a Jupyter notebook cell (the variable result holds the content you want to write away):with open('result.txt', 'a') as fp: fp.write(result)If you want to convert your whole Jupyter notebook to a txt file (so including the code and the markdown text), you can use nbconvert. For example, to convert to reStructuredText:jupyter nbconvert --to rst notebook.ipynb
Using numpy.exp to calculate object life length I can't find any example anywhere on the internet .I would like to learn using the exponential law to calculate a probability.This my exponential lambda : 0.0035 What is the probability that my object becomes defectuous before 100 hours of work ? P(X < 100) How could I write this with numpy or sci kit ? Thanks !Edit : this is the math :P(X < 100) = 1 - e ** -0.0035 * 100 = 0.3 = 30%Edit 2 : Hey guys, I maybe have found something there, hi hi :http://web.stanford.edu/class/archive/cs/cs109/cs109.1192/handouts/pythonForProbability.htmlEdit 3 :This is my attempt with scipy :from scipy import statsB = stats.expon(0.0035) # Declare B to be a normal random variableprint(B.pdf(1)) # f(1), the probability density at 1print(B.cdf(100)) # F(2) which is also P(B < 100)print(B.rvs()) # Get a random sample from Bbut B.cdf is wrong : it prints 1, while it should print 0.30, please help !B.pdf prints 0.369 : What is this ?Edit 4 : I've done it with the python math lib like this :lambdaCalcul = - 0.0035 * 100MyExponentialProbability = 1 - math.exp(lambdaCalcul)print("My probability is",MyExponentialProbability * 100 , "%");Any other solution with numpy os scipy is appreciated, thank you
The expon(..) function takes the parameters loc and scale. For the exponential distribution the scale is 1/λ (which is also its mean), so we can construct such a distribution with:
B = stats.expon(scale=1/0.0035)
Then the cumulative distribution function gives, for P(X < 100):
>>> print(B.cdf(100))
0.2953119102812866
how to solve '[mov,mp4,m4a,3gp,3g2,mj2 @ 0000021c356d9e00] moov atom not found' in opencv I'm trying to create a video uploader in a kivy app using OpenCV. However, when I try to upload a video, I get the following error [mov,mp4,m4a,3gp,3g2,mj2 @ 0000021c356d9e00] moov atom not found[mov,mp4,m4a,3gp,3g2,mj2 @ 0000021c356d9e00] moov atom not found[mov,mp4,m4a,3gp,3g2,mj2 @ 0000021c356d9e00] moov atom not found[mov,mp4,m4a,3gp,3g2,mj2 @ 0000021c356d9e00] moov atom not found...The screen becomes unresponsive during this. I edited the save() function recently and added an uploadClass() because I was getting another error. main.py...class SaveDialog(Screen): save = ObjectProperty(None) text_input = ObjectProperty(None) cancel = ObjectProperty(None) def save(self, path, filename): for letter in os.path.join(path, filename): print(letter) def find(s, ch): return [i for i, letter in enumerate(s) if letter == ch] os_path_simpl = list(os.path.join(path, filename)) for t in range(len(find(os.path.join(path, filename), '\\'))): os_path_simpl[find(os.path.join(path, filename), '\\')[t]] = '\\' class uploadClass(object): video = ''.join(os_path_simpl) def __init__(self, src=video): self.video_selected = cv2.VideoCapture(src) self.vid_cod = cv2.VideoWriter_fourcc(*'mp4v') self.out = cv2.VideoWriter('media/testOne.mp4', self.vid_cod, 20.0, (640,480)) self.thread = Thread(target=self.update, args=()) self.thread.daemon = True self.thread.start() def update(self): while True: if self.video_selected.isOpened(): (self.status, self.frame) = self.video_selected.read() def show_frame(self): if self.status: cv2.imshow('uploading', self.frame) if cv2.waitKey(10) & 0xFF == ord('q'): self.video_selected.release() self.out.release() cv2.destroyAllWindows() exit(1) def save_frame(self): self.out.write(self.frame) rtsp_stream_link = 'media/testOne.mp4' upload_Class = uploadClass(rtsp_stream_link) while True: try: upload_Class.__init__() upload_Class.show_frame() upload_Class.save_frame() except AttributeError: pass sm.current = "home"...
Moov atom contains various bits of information required to play a video and the errors you are getting are saying that this information is either missing or corrupt.This can happen, for example, when you create a video then attempt move/upload the file whilst the creation process is still running. In your case I think you need to release the cv2.VideoWriter/cv2.VideoCapture objects prior to attempting to upload the file. i.e.self.video_selected.release()self.out.release()Need to be called before the video is uploaded.
how to handle complex json where key is not always present using python? I need support in getting the value of Key issues which is not always present in JSON.my JSON object is as below -{ "records":[ { "previousAttempts": [], "id": "aaaaa-aaaaaa-aaaa-aaa", "parentId": null }, { "previousAttempts": [], "id": "aaaaa-aaaaaa-aaaa-aaa", "parentId": null, "issues":[ { "type": "warning", "category": "General" }, { "type": "warning", "category": "General" } ] } ]}
This should work for youimport jsondata = json.loads(json_data)issues = [r.get('issues', []) for r in data['records']]
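If the goal is then to inspect those issues, a small follow-up (a sketch building on the data variable above) that flattens them and counts categories:

from collections import Counter

all_issues = [issue for r in data['records'] for issue in r.get('issues', [])]
print(Counter(issue['category'] for issue in all_issues))   # e.g. Counter({'General': 2})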
Unable to find fixture "mocker" (pytest-mock) when running from tox I have been using pytest-mock library for mocking with pytest. When I'm trying to run the test using tox command, I am getting the following error:...tests/test_cli.py ....EEEE...file /path/to/test_cli.py, line 63 def test_cli_with_init_cmd_fails_with_db_error(runner, mocker, context):E fixture 'mocker' not found> available fixtures: cache, capfd, capfdbinary, caplog, capsys, capsysbinary, context, cov, doctest_namespace, fs, monkeypatch, no_cover, pytestconfig, record_property, record_testsuite_property, record_xml_attribute, recwarn, requests_mock, runner, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory> use 'pytest --fixtures [testpath]' for help on them.However, when I try to run the test directly using pytest from within my venv, everything works as expected.$ py.test --cov esmigrate --cov-report term-missing...platform linux -- Python 3.8.5, pytest-6.1.1, py-1.9.0, pluggy-0.13.1rootdir: /path/to/project/root, configfile: tox.iniplugins: cov-2.10.1, pyfakefs-4.0.2, mock-3.3.1, requests-mock-1.8.0collected 50 items tests/test_cli.py ........ [ 16%]tests/test_contexts/test_context_config.py ... [ 22%]tests/test_internals/test_db_manager.py .......... [ 42%]tests/test_internals/test_glob_loader.py ..... [ 52%]tests/test_internals/test_http_handler.py ....... [ 66%]tests/test_internals/test_script_parser.py ................. [100%]...Which is strange, because, I have added pytest-mock in my requirements.txt file, which was used to install dependencies within the venv, and I have this file added as a dependency for tox testenv as well. This is the content of my tox.ini file.[tox]envlist=py36, py37, py38, flake8[pytest]filterwarnings = error::DeprecationWarning error::PendingDeprecationWarning[flake8]max-line-length = 120select = B,C,E,F,W,T4,B9,B950ignore = E203,E266,E501,W503,D1[testenv]passenv=USERNAMEcommands=py.test --cov esmigrate {posargs} --cov-report term-missingdeps= -rrequirements.txt[testenv:flake8]basepython = python3.8deps = flake8commands = flake8 esmigrate testsA snapshot of requirements.txt file...pyfakefs==4.0.2pyparsing==2.4.7pyrsistent==0.17.3pytest==6.1.1pytest-cov==2.10.1pytest-mock==3.3.1PyYAML==5.3.1...This doesn't cause any problem when ran from travis-ci either, but I want to know what's the problem here and what I've been doing wrong. Was tox-env unable to install pytest-mock, or did "mocker" fixture got shadowed by something else?
tox currently does not recreate environments when files it does not manage (such as requirements.txt / setup.py) change, though this is planned to be improved in the rewrite that is underway at the time of writing.For a related question, you can see my question and workarounds.The core issue here is that if you're not managing tox environment dependencies directly inline in tox.ini, tox will not notice changes (such as adding / removing dependencies from requirements.txt), so you will need to run tox with the --recreate flag to reflect those changes.Disclaimer: I'm one of the current tox maintainers
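To make that concrete (a sketch based on the tox.ini in the question), either force a rebuild after requirements.txt changes:

tox -r    # short for tox --recreate

or declare the test dependencies inline so tox itself notices when they change:

[testenv]
deps =
    pytest
    pytest-mock
    pytest-cov
    -rrequirements.txt
commands = py.test --cov esmigrate {posargs} --cov-report term-missing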
Add n values from the both sides of a pandas data frame column values I have a data frame like this,dfcol1 col2 A 1 B 3 C 2 D 5 E 6 F 8 G 10I want to add previous and next n values of a particular value of col2 and store it into a new column,So, If n=2, then the data frame should look like, col1 col2 col3 A 1 6 (only below 2 values are there no upper values, so adding 3 numbers) B 3 11 (adding one prev, current and next two) C 2 17(adding all 4 values) D 5 24(same as above) E 6 31(same as above) F 8 29(adding two prev and next one as only one is present) G 10 24(adding with only prev two values)When previous or next 2 values are not found adding whatever values are available.I can do it using a for loop, but the execution time will be huge, looking for some pandas shortcuts do do it most efficiently.
You can use the rolling method.import pandas as pddf = pd.read_json('{"col1":{"0":"A","1":"B","2":"C","3":"D","4":"E","5":"F","6":"G"},"col2":{"0":1,"1":3,"2":2,"3":5,"4":6,"5":8,"6":10}}')df['col3'] = df['col2'].rolling(5, center=True, min_periods=0).sum()col1 col2 col30 A 1 6.01 B 3 11.02 C 2 17.03 D 5 24.04 E 6 31.05 F 8 29.06 G 10 24.0
Python script is failing with ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer')) I've similar issue as many of you but cannot get this resolve. I'm producing a self-executable file that is running fine on my VirtualBox Linux 7.3 and 7.9 but when I'm trying to run it somewhere else (on other Linux servers) I'm getting the below output: Traceback (most recent call last): File "urllib3/connectionpool.py", line 706, in urlopen File "urllib3/connectionpool.py", line 382, in _make_request File "urllib3/connectionpool.py", line 1010, in _validate_conn File "urllib3/connection.py", line 421, in connect File "urllib3/util/ssl_.py", line 429, in ssl_wrap_socket File "urllib3/util/ssl_.py", line 472, in _ssl_wrap_socket_impl File "ssl.py", line 365, in wrap_socket File "ssl.py", line 776, in __init__ File "ssl.py", line 1036, in do_handshake File "ssl.py", line 648, in do_handshakeConnectionResetError: [Errno 104] Connection reset by peerDuring handling of the above exception, another exception occurred:Traceback (most recent call last): File "requests/adapters.py", line 449, in send File "urllib3/connectionpool.py", line 756, in urlopen File "urllib3/util/retry.py", line 532, in increment File "urllib3/packages/six.py", line 734, in reraise File "urllib3/connectionpool.py", line 706, in urlopen File "urllib3/connectionpool.py", line 382, in _make_request File "urllib3/connectionpool.py", line 1010, in _validate_conn File "urllib3/connection.py", line 421, in connect File "urllib3/util/ssl_.py", line 429, in ssl_wrap_socket File "urllib3/util/ssl_.py", line 472, in _ssl_wrap_socket_impl File "ssl.py", line 365, in wrap_socket File "ssl.py", line 776, in __init__ File "ssl.py", line 1036, in do_handshake File "ssl.py", line 648, in do_handshakeurllib3.exceptions.ProtocolError: ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))During handling of the above exception, another exception occurred:Traceback (most recent call last): File "create_incident.py", line 415, in <module> open_incident_ticket() File "create_incident.py", line 368, in open_incident_ticket resp = requests.post(endpoint_uri, headers=headers, data = json.dumps(data)) File "requests/api.py", line 119, in post File "requests/api.py", line 61, in request File "requests/sessions.py", line 542, in request File "requests/sessions.py", line 655, in send File "requests/adapters.py", line 498, in sendrequests.exceptions.ConnectionError: ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))My post constructor looks like:resp = requests.post(endpoint_uri, headers=headers, data = json.dumps(data))Could you pls advise me where exactly I need to look for it? Is there multiple issues that I'm struggling with?Many thanks,Mario
In the end, the issue was caused by firewall settings.
OpenCV Orb Algorithm QR Code Match Issues I'm following a video tutorial about, feature detection and matching by using Python OpenCV. Video uses the ORB (Oriented FAST and Rotated BRIEF) algorithm, as seen in the link below:https://youtu.be/nnH55-zD38ISo i decided to use it the example 2 images i have, with little modification on the code.There are 2 input images, 1 with single QR code (single.jpg), other with a few different QR codes inside (multiple.jpg). The aim is to find the most similar region in the bigger image (multiple.jpg). But getting matches with totally different QR codes.Why it is marking different region, and can we do an improvement on this example?import cv2MULTIPLE_NAME ="...multiple.jpg"SINGLE_NAME = "...single.jpg"multiple = cv2.imread(MULTIPLE_NAME)single = cv2.imread(SINGLE_NAME)orb=cv2.ORB_create()kpsingle,dessingle = orb.detectAndCompute(single,None)kpmultiple,desmultiple = orb.detectAndCompute(multiple,None)bf=cv2.BFMatcher()matches = bf.knnMatch(dessingle, desmultiple, k=2)good=[]for m, n in matches: if m.distance < 2*n.distance: good.append([m])img3 = cv2.drawMatchesKnn(single, kpsingle, multiple, kpmultiple, good, None, flags=2)cv2.imshow("img",multiple)cv2.imshow("crop",single)cv2.imshow("img3",img3)cv2.waitKey()
Method 1: pyzbar QR code (you need to pip or conda install pyzbar)I tried and got thisimport cv2import numpy as npimport pyzbar.pyzbar as pyzbarYELLOW = (0,255,255)RED = (0,0,255)font = cv2.FONT_HERSHEY_SIMPLEXimg = cv2.imread(r"C:\some-path\qr-codes-green.jpg")# Create a qrCodeDetector Objectdec_objs = pyzbar.decode(img)for d in dec_objs: pts = [[p.x,p.y] for p in d.polygon] txt_org = (d.rect.left,d.rect.top) txt = d.data.decode("utf-8") cv2.putText(img,txt,txt_org,font,0.5,YELLOW,1,cv2.LINE_AA) poly = np.int32([pts]) cv2.polylines(img,poly,True,RED,thickness=1,lineType=cv2.LINE_AA)cv2.imshow('QR-scanner',img)cv2.waitKey(0)ResultsMethod 2: wechat QR codeimport cv2import numpy as npimport pyzbar.pyzbar as pyzbarYELLOW = (0,255,255)RED = (0,0,255)font = cv2.FONT_HERSHEY_SIMPLEXimg = cv2.imread(r"C:\some-path\qr-codes-green.jpg")mod_path = r'C:\some-path\model\\'detector = cv2.wechat_qrcode_WeChatQRCode(mod_path+'detect.prototxt', mod_path+'detect.caffemodel', mod_path+'sr.prototxt', mod_path+'sr.caffemodel')res, points = detector.detectAndDecode(img)for i in range(len(res)): poly = points[i].astype(np.int32) txt = res[i] print(poly) txt_org = poly[0] cv2.putText(img,txt,txt_org,font,0.5,YELLOW,1,cv2.LINE_AA) cv2.polylines(img, [poly], True, RED, thickness=1, lineType=cv2.LINE_AA)cv2.imshow('QR-scanner',img)cv2.waitKey(0)Good references herehttps://learnopencv.com/opencv-qr-code-scanner-c-and-python/https://learnopencv.com/wechat-qr-code-scanner-in-opencv/
Read lines containing integers from a file in Python? I have a file format like this:9 8 13 4 1......Now, I want to get each line as three integers.When I usedfor line in f.readlines(): print line.split(" ")The script printed this:['9', '8', '1\r\n']['3', '4', '1\r\n']......How can I get each line as three integers?
Using the code you have and addressing your specific question of how to convert your list to integers:You can iterate through each line and convert the strings to int with the following example using list comprehension:Given:line =['3', '4', '1\r\n']then:int_list = [int(i) for i in line]will yield a list of integers[3, 4, 1]that you can then access via subscripts (0 to 2). e.g. int_list[0] contains 3, int_list[1] contains 4, etc.A more streamlined version for your consideration:with open('data.txt') as f: for line in f: int_list = [int(i) for i in line.split()] print int_listThe advantage of using with is that it will automatically close your file for you when you are done, or if you encounter an exception.UPDATE:Based on your comments below, if you want the numbers in 3 different variables, say a, b and c, you can do the following: for line in f: a, b, c = [int(i) for i in line.split()] print 'a = %d, b = %d, c = %d\n' %(a, b, c)and get this: a = 9, b = 8, c = 1This counts on there being 3 numbers on each line.Aside:Note that in place of "list comprehension" (LC) you can also use a "generator expression" (GE) of this form: a, b, c = (int(i) for i in line.split())for your particular problem with 3 integers this doesn't make much difference, but I show it for completeness. For larger problems, LC requires more memory as it generates a complete list in memory at once, while GE generate a value one by one as needed. This SO question Generator Expressions vs. List Comprehension will give you more information if you are curious.
Include file and line number in python logs so clicking in PyCharm can go to the source line? I'm new to Python but experienced in Java. One of the more useful tricks was to have the slf4j backend include file name and line number of the source line for a log statement so IDE's could navigate to that when clicked.I would like to have the same facility in Python but is inexperienced with the ecosystem.How do I configure the Python logging library to add this?
You can use any of the LogRecord attributes in the formatter of a logger.import logginglogging.basicConfig(format="File: %(filename)s Line: %(lineno)d Message: %(message)s")logging.warning("this is a log")Output: File: lineno.py Line: 4 Message: this is a log
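If you specifically want PyCharm's run console to turn the location into a clickable link, a format that emits a path:line pattern usually works (a sketch; this relies on the console's link detection, so treat it as an assumption to verify):

import logging

logging.basicConfig(format='%(pathname)s:%(lineno)d %(levelname)s: %(message)s')
logging.warning("this is a log")
# Example output: /full/path/to/lineno.py:4 WARNING: this is a log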
Extracting the last items from nested lists in python I have some nested lists. I want to extract the last occurring element within each sublist (eg 'bye' for the first sublist). I then want to add all these last occurring elements ('bye', 'bye', 'hello' and 'ciao') to a new list so that I can count them easily, and find out which is the most frequently occurring one ('bye).The problem is that my code leaves me with an empty list. I've looked at Extracting first and last element from sublists in nested list and How to extract the last item from a list in a list of lists? (Python) and they're not exactly what I'm looking for.Thanks for any help!my_list = [['hello', 'bye', 'bye'], ['hello', 'bye', 'bye'], ['hello', 'hello', 'hello'], ['hello', 'bye', 'ciao']]# Make a new list of all of the last elements in the sublistsnew_list = []for sublist in my_list: for element in sublist: if sublist.index(element) == -1: new_list.append(element)# MY OUTPUTprint(new_list)[] # EXPECTED OUTPUT ['bye', 'bye', 'hello', 'ciao']# I would then use new_list to find out what the most common last element is:most_common = max(set(new_list), key = new_list.count) # Expected final outputprint(most_common)# 'bye'
You are looking for this:for sublist in my_list: new_list.append(sublist[-1])The index -1 does not "really exist", it is a way to tell python to start counting from the end of the list. That is why you will not get a match when looking for -1 like you do it.Additionally, you are walking over all the lists, which is not necessary, as you have "random access" by fetching the last item as you can see in my code.There is even a more pythonic way to do this using list comprehensions:new_list = [sublist[-1] for sublist in my_list]Then you do not need any of those for loops.
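Putting it together with the counting step from the question, collections.Counter is a convenient way to get the most common last element:

from collections import Counter

my_list = [['hello', 'bye', 'bye'], ['hello', 'bye', 'bye'],
           ['hello', 'hello', 'hello'], ['hello', 'bye', 'ciao']]

new_list = [sublist[-1] for sublist in my_list]
most_common = Counter(new_list).most_common(1)[0][0]
print(most_common)   # 'bye'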
glib main loop hangs after Popen I'm trying to build a script that logs the window title when a different window becomes active. This is what I have so far:import glibimport dbusfrom dbus.mainloop.glib import DBusGMainLoopdef notifications(bus, message): if message.get_member() == "event": args = message.get_args_list() if args[0] == "activate": print "Hello world" activewindow = Popen("xdotool getactivewindow getwindowname", stdout=PIPE, stderr=PIPE); print activewindow.communicate()DBusGMainLoop(set_as_default=True)bus = dbus.SessionBus()bus.add_match_string_non_blocking("interface='org.kde.KNotify',eavesdrop='true'")bus.add_message_filter(notifications)mainloop = glib.MainLoop()mainloop.run()However, something is apparently wrong with my Popen call, and glib seems to swallow the error. At least, that is what someone on a IRC channel told me. When I remove the Popen and activewindow.communicate() calls, everything keeps working and I get a message "Hello world!" printed in the shell whenever I switch to a new window. With the Popen and communicate() calls, the script prints a single "Hello world" and hangs after that. Does anyone know:How I can get a proper error message?What I'm doing wrong in my Popen call?Thanks in advance!
I can't just comment... you haven't imported the subprocess module, or its members (Popen, PIPE).
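So the first fix is the import (a sketch; note that passing the whole command as a single string without shell=True usually fails too, so a list of arguments is safer):

from subprocess import Popen, PIPE

activewindow = Popen(["xdotool", "getactivewindow", "getwindowname"],
                     stdout=PIPE, stderr=PIPE)
print(activewindow.communicate())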
Convert a list to a string and back I have a virtual machine which reads instructions from tuples nested within a list like so:[(0,4738),(0,36), (0,6376),(0,0)]When storing this kind of machine code program, a text file is easiest, and has to be written as a string. Which is obviously quite hard to convert back.Is there any module which can read a string into a list/store the list in a readable way?requirements: Must be human readable in stored form (hence "pickle" is not suitable) Must be relatively easy to implement
Use the json module:string = json.dumps(lst)lst = json.loads(string)Demo:>>> import json>>> lst = [(0,4738),(0,36),... (0,6376),(0,0)]>>> string = json.dumps(lst)>>> string'[[0, 4738], [0, 36], [0, 6376], [0, 0]]'>>> lst = json.loads(string)>>> lst[[0, 4738], [0, 36], [0, 6376], [0, 0]]An alternative could be to use repr() and ast.literal_eval(); for just lists, tuples and integers that also allows you to round-trip:>>> from ast import literal_eval>>> string = repr(lst)>>> string'[[0, 4738], [0, 36], [0, 6376], [0, 0]]'>>> lst = literal_eval(string)>>> lst[[0, 4738], [0, 36], [0, 6376], [0, 0]]JSON has the added advantage that it is a standard format, with support from tools outside of Python support serialising, parsing and validation. The json library is also a lot faster than the ast.literal_eval() function.
Negative results in ARIMA model I'm trying to predict daily revenue to end of month by learning previous month. Due to different behavior of the revenue between workdays and weekends I decided to use time series model (ARIMA) in Python.This is the my Python code that I'm using:import itertoolsimport pandas as pdimport numpy as npfrom datetime import datetime, date, timedeltaimport statsmodels.api as smimport matplotlib.pyplot as pltplt.style.use('fivethirtyeight')import calendardata_temp = [['01/03/2020',53921.785],['02/03/2020',97357.9595],['03/03/2020',95353.56893],['04/03/2020',93319.6761999999],['05/03/2020',88835.79958],['06/03/2020',98733.0856000001],['07/03/2020',61501.03036],['08/03/2020',74710.00968],['09/03/2020',156613.20712],['10/03/2020',131533.9006],['11/03/2020',108037.3002],['12/03/2020',106729.43067],['13/03/2020',125724.79704],['14/03/2020',79917.6726599999],['15/03/2020',90889.87192],['16/03/2020',160107.93834],['17/03/2020',144987.72243],['18/03/2020',146793.40641],['19/03/2020',145040.69416],['20/03/2020',140467.50472],['21/03/2020',69490.18814],['22/03/2020',82753.85331],['23/03/2020',142765.14863],['24/03/2020',121446.77825],['25/03/2020',107035.29359],['26/03/2020',98118.19468],['27/03/2020',82054.8721099999],['28/03/2020',61249.91097],['29/03/2020',72435.6711699999],['30/03/2020',127725.50818],['31/03/2020',77973.61724]] panel = pd.DataFrame(data_temp, columns = ['Date', 'revenue'])pred_result=pd.DataFrame(columns=['revenue'])panel['Date']=pd.to_datetime(panel['Date'])panel.set_index('Date', inplace=True)ts = panel['revenue']p = d = q = range(0, 2)pdq = list(itertools.product(p, d, q))seasonal_pdq = [(x[0], x[1], x[2], 7) for x in list(itertools.product(p, d, q))]aic = float('inf')for es in [True,False]: for param in pdq: for param_seasonal in seasonal_pdq: try: mod = sm.tsa.statespace.SARIMAX(ts, order=param, seasonal_order=param_seasonal, enforce_stationarity=es, enforce_invertibility=False) results = mod.fit() if results.aic<aic: param1=param param2=param_seasonal aic=results.aic es1=es #print('ARIMA{}x{} enforce_stationarity={} - AIC:{}'.format(param, param_seasonal,es,results.aic)) except: continueprint('Best model parameters: ARIMA{}x{} - AIC:{} enforce_stationarity={}'.format(param1, param2, aic,es1))mod = sm.tsa.statespace.SARIMAX(ts, order=param1, seasonal_order=param2, enforce_stationarity=es1, enforce_invertibility=False)results = mod.fit()pred_uc = results.get_forecast(steps=calendar.monthrange(datetime.now().year,datetime.now().month)[1]-datetime.now().day+1)pred_ci = pred_uc.conf_int()ax = ts.plot(label='observed', figsize=(12, 5))pred_uc.predicted_mean.plot(ax=ax, label='Forecast')ax.fill_between(pred_ci.index, pred_ci.iloc[:, 0], pred_ci.iloc[:, 1], color='k', alpha=.25)ax.set_xlabel('Date')plt.legend()plt.show()predict=pred_uc.predicted_mean.to_frame()predict.reset_index(inplace=True)predict.rename(columns={'index': 'date',0: 'revenue_forcast'}, inplace=True)display(predict)The output looks like:How you can see the prediction results have negative value as result of negative slope.Since I'm trying to predict income, the result cannot be lower than zero, and the negative slope also looks very strange.What's wrong with my method?How can I improve it?
You can't force an ARIMA model to take only positive values. However, a classic 'trick' when you want to predict something that's always positive is to use a function that maps positive values onto all of R. The log function is a good example of this.panel['log_revenue'] = np.log(panel['revenue'])Now fit and predict on the log_revenue column instead.If those predictions take negative values, that's ok, because your actual revenue prediction is np.exp(predict), which is always positive.
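A tiny standalone illustration of the round trip (the forecast step here is only a stand-in for the SARIMAX prediction from the question):

import numpy as np
import pandas as pd

revenue = pd.Series([53921.8, 97357.9, 95353.6, 93319.7])  # a few values from the question
log_revenue = np.log(revenue)            # fit/forecast the model on this series
forecast_log = log_revenue.mean()        # placeholder for the model's predicted value
forecast_revenue = np.exp(forecast_log)  # back-transform: always strictly positive
print(forecast_revenue)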
Can't access text saved in a Quill form field on Django template In my django template I want to access the bio prop of an instance of my Creator class. This bio is set up as a QuillField in the Creator model class. When I try to access creator.bio, all that renders to the page is the following:<django_quill.fields.FieldQuill object at 0x1084ce518>What I want is the actual paragraph of formatted text (ie. the bio) that I typed into the form and saved. As of now, the QuillField is only accessible through the form in the Django admin page. The problem has nothing to do with the Quill UI, but rather being able to access the text I wrote into that form field and render it to the page in a readable format.From models.py:from django.db import modelsfrom django_quill.fields import QuillFieldclass Creator(models.Model): name = models.CharField(max_length=100) title = models.CharField(max_length=100, default='Creator') bio = QuillField() photo = models.ImageField(upload_to='images/', default='static/assets/icons/user-solid.svg') email = models.EmailField(max_length=100) website = models.URLField(max_length=1000, blank=True) facebook = models.URLField(max_length=1000, blank=True) twitter = models.URLField(max_length=1000, blank=True) instagram = models.URLField(max_length=1000, blank=True) def __str__(self): return self.nameIn views.py:def about(request): context = {"creators" : Creator.objects.all()} return render(request, 'about.html', context)And, in the template: <section id="creator-container"> {% for creator in creators %} <div class="creator-square"> <h4>{{ creator.name }}</h4> <h5>{{ creator.title }}</h5> <img src="../../media/{{ creator.photo }}" alt="{{actor.name}} headshot" id="creator-photo"> <p class="creator-bio">{{ creator.bio }}</p> </div> {% endfor %} </section>If I print the creator.bio object to the console, this is what I get:{"delta":"{\"ops\":[{\"attributes\":{\"background\":\"transparent\",\"color\":\"#000000\",\"bold\":true},\"insert\":\"Sharon Yablon\"},{\"attributes\":{\"background\":\"transparent\",\"color\":\"#000000\"},\"insert\":\" is an award-winning playwright who has been writing and directing her plays in Los Angeles for many years. Her work has appeared in a variety of sites, and on stage with The Echo Theater Company, Padua Playwrights, Zombie Joe's Underground Theater, The Lost Studio, Theater Unleashed, Bootleg, Theater of N.O.T.E., and others. Her short stories, \\\"Perfidia\\\" and \\\"The Caller,\\\" can be found in journals, and her published plays are in \"},{\"attributes\":{\"background\":\"transparent\",\"color\":\"#000000\",\"italic\":true},\"insert\":\"Desert Road's One Acts of Note\"},{\"attributes\":{\"background\":\"transparent\",\"color\":\"#000000\"},\"insert\":\", \"},{\"attributes\":{\"background\":\"transparent\",\"color\":\"#000000\",\"italic\":true},\"insert\":\"Fever Dreams\"},{\"attributes\":{\"background\":\"transparent\",\"color\":\"#000000\"},\"insert\":\", \"},{\"attributes\":{\"background\":\"transparent\",\"color\":\"#000000\",\"italic\":true},\"insert\":\"Los Angeles Under the Influence\"},{\"attributes\":{\"background\":\"transparent\",\"color\":\"#000000\"},\"insert\":\", \"},{\"attributes\":{\"background\":\"transparent\",\"color\":\"#000000\",\"italic\":true},\"insert\":\"LA Writers and Their Works\"},{\"attributes\":{\"background\":\"transparent\",\"color\":\"#000000\"},\"insert\":\", and others. 
She was co-editor of an anthology of plays from the LA underground scene titled \"},{\"attributes\":{\"background\":\"transparent\",\"color\":\"#000000\",\"italic\":true},\"insert\":\"I Might Be The Person You Are Talking To, \"},{\"attributes\":{\"background\":\"transparent\",\"color\":\"#000000\"},\"insert\":\"and most recently, her play \"},{\"attributes\":{\"background\":\"transparent\",\"color\":\"#000000\",\"italic\":true},\"insert\":\"Hello Stranger\"},{\"attributes\":{\"background\":\"transparent\",\"color\":\"#000000\"},\"insert\":\" (Theater of N.O.T.E., 2017) was published by Original Works. She is a frequent writer and sometime co-curator with Susan Hayden's \"},{\"attributes\":{\"background\":\"transparent\",\"color\":\"#000000\",\"italic\":true},\"insert\":\"Library Girl\"},{\"attributes\":{\"background\":\"transparent\",\"color\":\"#000000\"},\"insert\":\", a \\\"Best of the Westside\\\" monthly literary series centered around a music theme. Her one-acts inspired by crimes in LA history have appeared in \"},{\"attributes\":{\"background\":\"transparent\",\"color\":\"#000000\",\"italic\":true},\"insert\":\"LA True Crime’s\"},{\"attributes\":{\"background\":\"transparent\",\"color\":\"#000000\"},\"insert\":\" quarterly evenings since its inception in 2015. \"},{\"insert\":\"\\n\"}]}","html":"<p><strong style=\"background-color: transparent; color: rgb(0, 0, 0);\">Sharon Yablon</strong><span style=\"background-color: transparent; color: rgb(0, 0, 0);\"> is an award-winning playwright who has been writing and directing her plays in Los Angeles for many years. Her work has appeared in a variety of sites, and on stage with The Echo Theater Company, Padua Playwrights, Zombie Joe's Underground Theater, The Lost Studio, Theater Unleashed, Bootleg, Theater of N.O.T.E., and others. Her short stories, \"Perfidia\" and \"The Caller,\" can be found in journals, and her published plays are in </span><em style=\"background-color: transparent; color: rgb(0, 0, 0);\">Desert Road's One Acts of Note</em><span style=\"background-color: transparent; color: rgb(0, 0, 0);\">, </span><em style=\"background-color: transparent; color: rgb(0, 0, 0);\">Fever Dreams</em><span style=\"background-color: transparent; color: rgb(0, 0, 0);\">, </span><em style=\"background-color: transparent; color: rgb(0, 0, 0);\">Los Angeles Under the Influence</em><span style=\"background-color: transparent; color: rgb(0, 0, 0);\">, </span><em style=\"background-color: transparent; color: rgb(0, 0, 0);\">LA Writers and Their Works</em><span style=\"background-color: transparent; color: rgb(0, 0, 0);\">, and others. She was co-editor of an anthology of plays from the LA underground scene titled </span><em style=\"background-color: transparent; color: rgb(0, 0, 0);\">I Might Be The Person You Are Talking To, </em><span style=\"background-color: transparent; color: rgb(0, 0, 0);\">and most recently, her play </span><em style=\"background-color: transparent; color: rgb(0, 0, 0);\">Hello Stranger</em><span style=\"background-color: transparent; color: rgb(0, 0, 0);\"> (Theater of N.O.T.E., 2017) was published by Original Works. She is a frequent writer and sometime co-curator with Susan Hayden's </span><em style=\"background-color: transparent; color: rgb(0, 0, 0);\">Library Girl</em><span style=\"background-color: transparent; color: rgb(0, 0, 0);\">, a \"Best of the Westside\" monthly literary series centered around a music theme. 
Her one-acts inspired by crimes in LA history have appeared in </span><em style=\"background-color: transparent; color: rgb(0, 0, 0);\">LA True Crime’s</em><span style=\"background-color: transparent; color: rgb(0, 0, 0);\"> quarterly evenings since its inception in 2015. </span></p>"}Does anyone know how to access this so that it renders correctly, as HTML text?
Based on https://github.com/LeeHanYeong/django-quill-editor/issues/12 it sounds like you need to use:{{ creator.bio.html|safe }}(though be careful using safe if you aren't certain the HTML is not malicious!)
Binary Search Implementation Using Slicing Regarding the binary search implementation given below:def bin_search(arr, key): n = len(arr) if n < 2: return (0 if (n == 1 and arr[0] == key) else None) m = int(0.5 * n) if arr[m] > key: return bin_search(arr[:m], key) result = bin_search(arr[m:], key) return (result + m if result != None else None) For the above binary search implementation, time complexity will be affected as we are taking slice of an array and space complexity too as list slicing in python creates a new list object. For improving the above implementation, I am thinking of introducing lower and upper bound variables just as in its original implementation. But it will modify the above code implementation completely.Can you please let me know how to modify the above implementation so that the time and space complexity of it is improved and is my understanding regarding its complexity correct?
Here is an iterative solution with time complexity of O(log(n)) and space complexity of O(1). Instead of modifying the array you just modify the positions of the pointers (by pointers I mean left/right).def binary_search(array, target): return binary_search_helper(array, target, 0, len(array) - 1)def binary_search_helper(array, target, left, right): while left <= right: middle = (left + right) // 2 match = array[middle] if target == match: return middle elif target < match: right = middle - 1 else: left = middle + 1 return -1Recursive solution: I don't see a way to improve complexity with slight changes, since you need to work with positions instead of the array itself. That will affect your base case and function calls. Here is my attempt to reduce space complexity from O(n) to O(log(n)).def bin_search(arr, key, left=0, right=len(arr) - 1): # Should change base case since we modify only pointers if left > right: return None # since we are not modifying our array, working with the length will not work m = int(0.5 * (left + right)) if arr[m] == key: return m elif arr[m] > key: return bin_search(arr, key, left, m - 1) else: return bin_search(arr, key, m + 1, right)PS: Need to create arr beforehand or create another caller function, since we define right=len(arr) - 1 in the function definition. I would recommend using a caller function like this:def binary_search_caller(arr, key): return bin_search(arr, key, 0, len(arr) - 1)And change the function definition to:def bin_search(arr, key, left, right): ...
Cholesky implementation in python - Solve Ax=b I'm using Cholesky decomposition for Ax=b to find x , by doing L*LT=A then y=L*b and in the end x=LT*b.When I check though I don't seem to get the same results as doing the classic Ax=b . Here's my code :import numpy as npimport scipy.linalg as slamyL=np.linalg.cholesky(A)#check_x = np.dot(A, b)#check_x = np.dot(A,b)check_x = sla.solve(A, b)#check if the composition was done rightmyLT=myL.T.conj() #transpose matrixAc=np.dot(myL,myLT) #should give the original matrix A#y=np.dot(myL,b)y = sla.solve_triangular(myL, b)#x=np.dot(myL.T.conj(),y)x = sla.solve_triangular(myLT, b)
I was sleepless and tired; I got the last line wrong. It actually is x = np.linalg.solve(myLT, y), solving against y rather than b.
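For reference, a small self-contained version of the whole forward/back substitution with a check at the end (the example matrix is illustrative):

import numpy as np
import scipy.linalg as sla

A = np.array([[4.0, 2.0],
              [2.0, 3.0]])           # symmetric positive definite
b = np.array([1.0, 2.0])

L = np.linalg.cholesky(A)            # A = L @ L.T
y = sla.solve_triangular(L, b, lower=True)            # solve L y = b
x = sla.solve_triangular(L.T.conj(), y, lower=False)  # solve L^T x = y

print(np.allclose(A @ x, b))         # True -> matches sla.solve(A, b)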
how to sort strings in python in a numerical fashion this the output but i want to arrange these file names in numerical order like(0,1,2,...1000) and not based on the first value(0,1,10,100)D:\deep>python merge.py./data/00.jpg./data/01.jpg./data/010.jpg./data/0100.jpg./data/0101.jpg./data/0102.jpg./data/0103.jpg./data/0104.jpg./data/0105.jpg./data/0106.jpg./data/0107.jpg./data/0108.jpg./data/0109.jpg./data/011.jpg./data/0110.jpg./data/0111.jpg./data/0112.jpg./data/0113.jpg./data/0114.jpg./data/0115.jpg./data/0116.jpg./data/0117.jpg./data/0118.jpg./data/0119.jpgthe code i used is the below.i want to sort the filenames in a numerical orderi tried using sort function with key as int but it dint workimport cv2import osimport numpy as npimage_folder = 'd:/deep/data'video_name = 'video.avi'images = [img for img in os.listdir(image_folder) if img.endswith(".jpg")]print(images)frame = cv2.imread(os.path.join(image_folder, images[0]))height, width, layers = frame.shapefourcc = cv2.VideoWriter_fourcc(*'XVID')video = cv2.VideoWriter(video_name, fourcc,15.0, (width,height))for image in images: video.write(cv2.imread(os.path.join(image_folder, image)))cv2.destroyAllWindows()video.release()
Let's say that your paths are in a list paths. It could also be a generator, of course.paths = ["./data/00.jpg", "./data/100.jpg", "./data/01.jpg"]Thensorted(paths, key=lambda p: int(p[7:-4]))returns exactly the desired output:['./data/00.jpg', './data/01.jpg', './data/100.jpg']
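If the directory prefix or extension can vary, a regex-based key (a sketch) avoids hard-coding the slice positions:

import re

def numeric_key(path):
    # grab the digits in the file name, e.g. "./data/0100.jpg" -> 100
    match = re.search(r'(\d+)\.jpg$', path)
    return int(match.group(1)) if match else -1

paths = ["./data/00.jpg", "./data/100.jpg", "./data/01.jpg"]
print(sorted(paths, key=numeric_key))
# ['./data/00.jpg', './data/01.jpg', './data/100.jpg']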
Error Getting Managed Identity Access Token from Azure Function I'm having an issue retrieving an Azure Managed Identity access token from my Function App. The function gets a token then accesses a Mysql database using that token as the password.I am getting this response from the function:9103 (HY000): An error occurred while validating the access token. Please acquire a new token and retry.Code:import loggingimport mysql.connectorimport requestsimport azure.functions as funcdef main(req: func.HttpRequest) -> func.HttpResponse: def get_access_token(): URL = "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fossrdbms-aad.database.windows.net&client_id=<client_id>" headers = {"Metadata":"true"} try: req = requests.get(URL, headers=headers) except Exception as e: print(str(e)) return str(e) else: password = req.json()["access_token"] return password def get_mysql_connection(password): """ Get a Mysql Connection. """ try: con = mysql.connector.connect( host='<host>.mysql.database.azure.com', user='<user>@<db>', password=password, database = 'materials_db', auth_plugin='mysql_clear_password' ) except Exception as e: print(str(e)) return str(e) else: return "Connected to DB!" password = get_access_token() return func.HttpResponse(get_mysql_connection(password))Running a modified version of this code on a VM with my managed identity works. It seems that the Function App is not allowed to get an access token. Any help would be appreciated.Note: I have previously logged in as AzureAD Manager to the DB and created this user with all privileges to this DB.Edit: No longer calling endpoint for VMs.def get_access_token(): identity_endpoint = os.environ["IDENTITY_ENDPOINT"] # Env var provided by Azure. Local to service doing the requesting. identity_header = os.environ["IDENTITY_HEADER"] # Env var provided by Azure. Local to service doing the requesting. api_version = "2019-08-01" # "2018-02-01" #"2019-03-01" #"2019-08-01" CLIENT_ID = "<client_id>" resource_requested = "https%3A%2F%2Fossrdbms-aad.database.windows.net" # resource_requested = "https://ossrdbms-aad.database.windows.net" URL = f"{identity_endpoint}?api-version={api_version}&resource={resource_requested}&client_id={CLIENT_ID}" headers = {"X-IDENTITY-HEADER":identity_header} try: req = requests.get(URL, headers=headers) except Exception as e: print(str(e)) return str(e) else: try: password = req.json()["access_token"] except: password = str(req.text) return passwordBut now I am getting this Error:{"error":{"code":"UnsupportedApiVersion","message":"The HTTP resource that matches the request URI 'http://localhost:8081/msi/token?api-version=2019-08-01&resource=https%3A%2F%2Fossrdbms-aad.database.windows.net&client_id=<client_idxxxxx>' does not support the API version '2019-08-01'.","innerError":null}}Upon inspection this seems to be a general error. This error message is propagated even if it's not the underlying issue. Noted several times in Github.Is my endpoint correct now?
For this problem, it was caused by the wrong endpoint you request for the access token. We can just use the endpoint http://169.254.169.254/metadata/identity..... in azure VM, but if in azure function we can not use it.In azure function, we need to get the IDENTITY_ENDPOINT from the environment.identity_endpoint = os.environ["IDENTITY_ENDPOINT"]The endpoint is like:http://127.0.0.1:xxxxx/MSI/token/You can refer to this tutorial about it, you can also find the python code sample in the tutorial.In my function code, I also add the client id of the managed identity I created in the token_auth_uri but I'm not sure if the client_id is necessary here (In my case, I use user-assigned identity but not system-assigned identity).token_auth_uri = f"{identity_endpoint}?resource={resource_uri}&api-version=2019-08-01&client_id={client_id}"Update:#r "Newtonsoft.Json"using System.Net;using Microsoft.AspNetCore.Mvc;using Microsoft.Extensions.Primitives;using Newtonsoft.Json;public static async Task<IActionResult> Run(HttpRequest req, ILogger log){ string resource="https://ossrdbms-aad.database.windows.net"; string clientId="xxxxxxxx"; log.LogInformation("C# HTTP trigger function processed a request."); HttpWebRequest request = (HttpWebRequest)WebRequest.Create(String.Format("{0}/?resource={1}&api-version=2019-08-01&client_id={2}", Environment.GetEnvironmentVariable("IDENTITY_ENDPOINT"), resource,clientId)); request.Headers["X-IDENTITY-HEADER"] = Environment.GetEnvironmentVariable("IDENTITY_HEADER"); request.Method = "GET"; HttpWebResponse response = (HttpWebResponse)request.GetResponse(); StreamReader streamResponse = new StreamReader(response.GetResponseStream()); string stringResponse = streamResponse.ReadToEnd(); log.LogInformation("test:"+stringResponse); string name = req.Query["name"]; string requestBody = await new StreamReader(req.Body).ReadToEndAsync(); dynamic data = JsonConvert.DeserializeObject(requestBody); name = name ?? data?.name; return name != null ? (ActionResult)new OkObjectResult($"Hello, {name}") : new BadRequestObjectResult("Please pass a name on the query string or in the request body");}
Python/Flask - Comparing user input to database value Im trying some learning-by-doing here with Python and Flask and have run into something I don't quite understand - hoping someone out there might be able to help or explain this behavior.I'm not building anything real, just trying to learn and understand what is going on here.I'm playing with registration / login form in Python/Flask and am struggling with the login part.I have built a registration from which writes name, email and password (unhashed, that comes later) to a simple table 'users' with a | ID | name | email | password | structure, all values being varchar except ID which is INT and auto increments.My imports areimport osfrom flask import Flask, session, requestfrom flask_session import Sessionfrom sqlalchemy import create_enginefrom sqlalchemy.orm import scoped_session, sessionmakerfrom flask import Flask, render_templateapp = Flask(__name__)# Set up databaseengine = create_engine(os.getenv("DATABASE_URL"))db = scoped_session(sessionmaker(bind=engine))I have a html login form that looks as follows<form action="{{ url_for('login') }}" method="post"> <div class="form-group"> <label for="email">E-mail:</label> <input type="email" name="email" class="form-control" placeholder="Enter email"> </div> <div class="form-group"> <label for="pwd">Password:</label> <input type="password" name="pwd" class="form-control" placeholder="Enter password"> </div> <button type="submit" class="btn btn-primary">Login</button></form><p> {{ logintry }}My Flask application route for 'login' is as [email protected]("/login", methods=['GET', 'POST'])def login(): if request.method == 'POST': uname = request.form.get("email") passwd = request.form.get("pwd") pwCheck = db.execute("SELECT password FROM users WHERE email = :uname", {"uname": uname}).fetchone() if pwCheck == passwd: return render_template("authenticated.html") else: return render_template("login.html", logintry="Login Failure") else: return render_template("login.html")In this sample I am trying to log in with [email protected] email and password is 1234The problem I have is that the database value seems to be returned as ('1234',)Whereas the user input is simply presented as1234And therefore they are not equal, and the login fails.Can anyone help guide me here, or maybe explain what is going on ?Thanks in advance.
There are two main things to understand here:1. What the database is returning2. What your form is returningIn order to understand how to get the login to work you must understand how to ensure that the input/results your form and database give you, can be compared.In your question you said that the database is returning ('1234',). This is a tuple in python, and can be indexed. Indexing your tuple, like sopwCheck[0]would return '1234'. So instead of comparing the raw result that your database query is returning, you should instead understand that your database is returning data that needs a little bit more processing before comparing against the form input. You could add an extra line which creates a new variable db_pwd, like sodb_pwd = pwCheck[0]And then check if db_pwd == passwd
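A tiny demo of the difference, and of why indexing fixes it:

row = ('1234',)            # what fetchone() returns from the database
passwd = '1234'            # what request.form.get("pwd") returns

print(row == passwd)       # False -- a tuple is never equal to a string
print(row[0] == passwd)    # True  -- compare the first column's value instead

Also remember that fetchone() returns None when no row matches, so guard the comparison with something like if pwCheck is not None and pwCheck[0] == passwd: before rendering the authenticated page.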
Improve Regex to catch complete emails from Google search? In order to practice and help my sister get emails from doctors for her baby, I have designed this email harvester. It makes a search, cleans the urls given, adds them to a dictionary and parse them for emails in two different ways. The code has been taken from different places, so if you correct me, please explain clearly your improvement, as I am working at the limit of my knowledge already.The question is how to get emails better (and improve code, if possible). I'll post the code and the exact output below:CODE of my program:import requests, re, webbrowser, bs4from selenium import webdriverfrom bs4 import BeautifulSoupimport time, random, webbrowserimport urllib.requestdef google_this(): #Googles and gets the first few links search_terms = ['Fiat','Lambrusco'] added_terms = 'email contact? @' #This searches for certain keywords in Google and parses results with BS for el in search_terms: webpage = 'http://google.com/search?q=' + str(el) + str(added_terms) print('Searching for the terms...', el,added_terms) headers = {'User-agent':'Mozilla/5.0'} res = requests.get(webpage, headers=headers) #res.raise_for_status() statusCode = res.status_code if statusCode == 200: soup = bs4.BeautifulSoup(res.text,'lxml') serp_res_rawlink = soup.select('.r a') dicti = [] #This gets the href links for link in serp_res_rawlink: url = link.get('href') if 'pdf' not in url: dicti.append(url) dicti_url = [] #This cleans the "url?q=" from link for el in dicti: if '/url?q=' in el: result = (el.strip('/url?q=')) dicti_url.append(result) #print(dicti_url) dicti_pretty_links = [] #This cleans the gibberish at end of url for el in dicti_url[0:4]: pretty_url = el.partition('&')[0] dicti_pretty_links.append(pretty_url) print(dicti_pretty_links) for el in dicti_pretty_links: #This converts page in BS soup # browser = webdriver.Firefox() # browser.get(el) # print('I have been in the element below and closed the window') # print(el) # time.sleep(1) # browser.close() webpage = (el) headers = {'User-agent':'Mozilla/5.0'} res = requests.get(webpage, headers=headers) #res.raise_for_status() statusCode = res.status_code if statusCode == 200: soup = bs4.BeautifulSoup(res.text,'lxml') #This is the first way to search for an email in soup emailRegex = re.compile(r'([a-zA-Z0-9_.+]+@+[a-zA-Z0-9_.+])', re.VERBOSE) mo = emailRegex.findall(res.text) #mo = emailRegex.findall(soup.prettify()) print('THIS BELOW IS REGEX') print(mo) #This is the second way to search for an email in soup: mailtos = soup.select('a[href^=mailto]') for el in mailtos: print('THIS BELOW IS MAILTOS') print(el.text) time.sleep(random.uniform(0.5,1))google_this()And here is the OUTPUT when this very same code above. As you can see, some emails seem to be found, but at cut just after the "@" symbol:C:\Users\SK\AppData\Local\Programs\Python\Python35-32\python.exe C:/Users/SK/PycharmProjects/untitled/another_temperase.pySearching for the terms... Fiat email contact? 
@['http://www.fcagroup.com/en-US/footer/Pages/contacts.aspx', 'http://www.fiat.co.uk/header-contacts', 'http://www.fiatusa.com/webselfservice/fiat/', 'https://twitter.com/nic_fincher81/status/672505531689394176']THIS BELOW IS REGEX['investor.relations@f', 'investor.relations@f', 'sustainability@f', 'sustainability@f', 'mediarelations@f', 'mediarelations@f']THIS BELOW IS [email protected] BELOW IS [email protected] BELOW IS [email protected] BELOW IS REGEX[]THIS BELOW IS REGEX[]THIS BELOW IS REGEX['nic_fincher81@y', 'nic_fincher81@y', 'nic_fincher81@y', 'nic_fincher81@y', 'nic_fincher81@y', 'nic_fincher81@y']Searching for the terms... Lambrusco email contact? @['http://www.labattagliola.it/%3Flang%3Den']Process finished with exit code 0
I would recommend a more restrictive version that still catches all of the email:([a-zA-Z0-9_.+]+@[a-zA-Z0-9_.+]+) The reason nothing is caught after the first character following the @ is that the regex is missing a trailing +:([a-zA-Z0-9_.+]+@+[a-zA-Z0-9_.+]+) Originally the part [a-zA-Z0-9_.+] simply said to catch one of any of the following characters: a-z, A-Z, 0-9, ., _, +.I would also be careful about @+, which says to catch 1 or more "@" symbols.So a potentially valid email could look like this: ..................@@@@@@@@@@@@@@@@@@@@@@@@.................
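A quick check of the recommended pattern (the sample text is made up for illustration):

import re

emailRegex = re.compile(r'([a-zA-Z0-9_.+]+@[a-zA-Z0-9_.+]+)')
sample = "Contact investor.relations@fcagroup.com or nic_fincher81@yahoo.com for details."
print(emailRegex.findall(sample))
# ['investor.relations@fcagroup.com', 'nic_fincher81@yahoo.com']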
try to print out a matrix NameError: name 'Qb_matrix' is not defined I tried to print out a matrix using code as follow, however, it showed up nameerror.i wonder where specifically should i define the matrix?can python recognise the abbreviation as Qb to Q_bar?import numpy as npQ11 = 14.583Q12 = 1.4583Q23 = 0Q22 = 3.646Q33 = 4.2theta = 60def Q_bar(Q11, Q12, Q22, Q33, theta):n = np.sin(theta*np.pi/180)m = np.cos(theta*np.pi/180)Qb_11 = Q11*m**4 + 2*(Q12 + 2*Q33)*n**2*m**2 + Q22*n**4Qb_22 = Q11*n**4 + 2*(Q12 + 2*Q33)*n**2*m**2 + Q22*m**4Qb_33 = (Q11 + Q22 - 2*Q12 - 2*Q33)*n**2*m**2 + Q33*(m**4 + n**4)Qb_12 = (Q11 + Q22 - 4*Q33)*n**2*m**2 + Q12*(m**4 + n**4)Qb_13 = (Q11 - Q12 - 2*Q33)*n*m**3 + (Q12 - Q22 + 2*Q33)*n**3*mQb_23 = (Q11 - Q12 - 2*Q33)*n**3*m + (Q12 - Q22 + 2*Q33)*n*m**3Qb_matrix = np.array([[Qb_11, Qb_12, Qb_13],[Qb_12, Qb_22, Qb_23],[Qb_13, Qb_23, Qb_33]])return(Qb_matrix)print(Qb_matrix)
You never call your function, so the code inside it is never executed. Furthermore, even if you did call the function, the variable Qb_matrix which you create in it only exists inside the function scope; when you return it you need to store the returned value. import numpy as npQ11 = 14.583Q12 = 1.4583Q23 = 0Q22 = 3.646Q33 = 4.2theta = 60def Q_bar(Q11, Q12, Q22, Q33, theta): n = np.sin(theta*np.pi/180) m = np.cos(theta*np.pi/180) Qb_11 = Q11*m**4 + 2*(Q12 + 2*Q33)*n**2*m**2 + Q22*n**4 Qb_22 = Q11*n**4 + 2*(Q12 + 2*Q33)*n**2*m**2 + Q22*m**4 Qb_33 = (Q11 + Q22 - 2*Q12 - 2*Q33)*n**2*m**2 + Q33*(m**4 + n**4) Qb_12 = (Q11 + Q22 - 4*Q33)*n**2*m**2 + Q12*(m**4 + n**4) Qb_13 = (Q11 - Q12 - 2*Q33)*n*m**3 + (Q12 - Q22 + 2*Q33)*n**3*m Qb_23 = (Q11 - Q12 - 2*Q33)*n**3*m + (Q12 - Q22 + 2*Q33)*n*m**3 Qb_matrix = np.array([[Qb_11, Qb_12, Qb_13],[Qb_12, Qb_22, Qb_23],[Qb_13, Qb_23, Qb_33]]) return(Qb_matrix)my_qb_matrix = Q_bar(Q11, Q12, Q22, Q33, theta)print(my_qb_matrix)OUTPUT[[ 6.659175 1.179375 2.52896738] [ 1.179375 12.127675 2.20689254] [ 2.52896738 2.20689254 3.921075 ]]
How to run a kivy script in the kivy VM? So I am looking into using Kivy for Android development. Defeating the jedi etc.But I have hit a roadblock! I installed the Kivy VM image in VirtualBox, but when I try to run the test script:# /usr/bin/kivy__version__ = 1.0from kivy.app import Appfrom kivy.uix.button import Buttonclass Hello(App): def build(self): btn = Button(text='Hello World') return btnHello().run()Using:python main.pyI get:Traceback (most recent call last): File "main.py", line 3, in <module> from kivy.app import AppImportError: No module named kivy.app
I tried just plain installing kivy as they say to on their website, and it worked.sudo add-apt-repository ppa:kivy-team/kivyapt-get install python-kivy
Is it possible to unclutter a graph that uses seconds on x-axis in matplotlib I have a dataset with datetimes that sometimes contain differences in seconds.I have the datetimes (on x-axis) displayed vertically in hopes that the text won't overlap each other, but from the looks of it they're stacked practically on top of each other. I think this is because I have data that differ in seconds and the dates can range through different days, so the x-axis is very tight. Another problem with this is that the datapoints on the graph are also overlapping because the distance between them is so tight.Here's an example set (already converted using date2num()). It differs in seconds, but spans over several days:dates = [734949.584699074, 734959.4604050926, 734959.4888773148, 734949.5844791667, 734959.037025463, 734959.0425810185, 734959.0522916666, 734959.4607060185, 734959.4891435185, 734949.5819444444, 734959.0348726852, 734959.0390393519, 734959.0432175926, 734959.0515393518, 734959.4864814815, 734949.5842476852, 734959.0367476852, 734959.038125, 734959.0423032407, 734959.052025463, 734959.4603819444, 734959.4895023148, 734949.5819791667, 734959.0348958333, 734959.0390740741, 734959.0432407408, 734959.0515856481, 734959.4579976852, 734959.487175926]values = [39, 68, 27, 57, 22, 33, 70, 19, 60, 53, 52, 33, 87, 63, 78, 34, 26, 42, 24, 97, 20, 1, 32, 60, 61, 48, 30, 48, 17]dformat = mpl.dates.DateFormatter('%m-%d-%Y %H:%M:%S')figure = plt.figure()graph = figure.add_subplot(111)graph.xaxis.set_major_formatter(dformat)plt.xticks(rotation='vertical')figure.subplots_adjust(bottom=.35)graph.plot_date(dates,values)graph.set_xticks(dates)plt.show()I have two questions:Is there a way to create a spacing on the x-axis so that I can see the text and the datapoints clearly? This would result in a very long horizontal graph, but I will save this to an image file.Relates to first question: to reduce the horizontal length of the graph, is there a way to compress ticks on the x-axis so that areas which have no data will be shortened?For example, if we have three dates with values: March 22 2013 23:11:04 55 March 22 2013 23:11:10 70 April 1 2013 10:43:56 5Is it possible to condense the spaces between March 22 23:11:10 and April 1 2013 1-:43:56?
You are basically asking for something impossible: you cannot both see a range of days and have differences of a few seconds be apparent while keeping the x-axis linear. If you want to try this, you can do something like (doc)fig.set_size_inches(1000, 2, forward=True)which will make your figure 1000 inches wide and 2 inches tall, but doing that is rather ungainly. What I think you should do is apply Dermen's link (Python/Matplotlib - Is there a way to make a discontinuous axis?) with a break anyplace your data has a big break. You will end up with multiple sections that are each a few seconds wide, which will give enough space for the tick labels to be readable.
Set margins of a time series plotted with pandas I have the following code for generating a time series plotimport numpy as npfig = plt.figure()ax = fig.add_subplot(111)series = pd.Series([np.sin(ii*np.pi) for ii in range(30)], index=pd.date_range(start='2019-01-01', end='2019-12-31', periods=30))series.plot(ax=ax)I want to set an automatic limit for x and y, I tried using ax.margins() but it does not seem to work:ax.margins(y=0.1, x=0.05)# even with# ax.margins(y=0.1, x=5)What I am looking for is an automatic method like padding=0.1 (10% of whitespace around the graph)
Pandas and matplotlib seem to be confused rather often while collaborating when axes have dates. For some reason in this case ax.margins doesn't work as expected with the x-axis.Here is a workaround which does seem to do the job, explicitely moving the xlims:xmargins = 0.05ymargins = 0.1ax.margins(y=ymargins)x0, x1 = plt.xlim()plt.xlim(x0-xmargins*(x1-x0), x1+xmargins*(x1-x0))Alternatively, you could work directly with matplotlib's plot, which does work as expected applying the margins to the date axis.ax.plot(series.index, series)ax.margins(y=0.1, x=0.05)PS: This post talks about setting use_sticky_edges to False and calling autoscale_view after setting the margins, but also that doesn't seem to work here.ax.use_sticky_edges = Falseax.autoscale_view(scaley=True, scalex=True)
Customizing and understanding GnuRadio QT GUI Vector Sink I have created a simple GnuRadio flowgraph in GNU Radio Companion 3.8 where I connect a Vector Source block (with vector [1,2,3,4,5]) to a QT GUI Vector Sink. When I run the flowgraph, I see a two lines: one which goes from 1 to 5 (as expected) and one which is perfectly horizontal at zero. If I set the reference level in the sink to something other than zero (e.g., 1), that line at zero remains (in addition to a line at the reference). Additionally, the legend in the upper right corner contains Min Hold and Max Hold buttons. An example is shown below:I have a few questions:What is this line at zero? How do I get rid of it?How do I get rid of the Min and Max Hold options in the upper right of the plot?In general, is it true that finer control of the formatting of plots in GNURadio is possible when explicitly writing code (say in a python-based flowgraph) to render the plot instead of using companion?
The vector plot puts markers (horiz lines) at the "LowerIntensityLevel" and "UpperIntensityLevel". It seems like they are both at 0 unless something sets them. There are functions in VectorDisplayPlot to set the levels, but nothing calls them. VectorDisplayPlot is the graphical Qt-based widget that does the actual plot display.These markers default to on. Which seems wrong to me, since nothing sets them and they have no default value, so it seems like you wouldn't want them unless you are going to use them.The line style, color, and if they are enabled or not are style properties of the VectorDisplayPlot. The "dark.qss" theme turns them off, but the default theme has them on.So you can turn them off with a theme.The important parts for the theme are:VectorDisplayPlot { qproperty-marker_lower_intensity_visible: false; qproperty-marker_upper_intensity_visible: false; qproperty-marker_ref_level_visible: false;}It should be possible to make a .qss file with just that in it. Get GRC to use it with the flow graph in the properties of the Options block under "QSS Theme". The "ref_level" line is only needed to make the ref level marker go away.The VectorDisplayPlot is a private member of vector_sink, which is the GNU Radio block that one uses. I see no methods in vector_sink_impl that ever set the upper/lower intensity values, and since only that class has access to the private VectorDisplayPlot, there's no way anything else could set them either. So the feature is totally unusable from any code (Python/C++) using the vector sink, much less from GRC.It looks like these markers are used for some of the other plots, like the spectrum plot. I think someone cut & pasted that code into the vector plot and this behavior is a bug.
Force use of scientific style for basemap colorbar labels String formatting can by used to specify scientific notation for matplotlib.basemap colorbar labels:cb = m.colorbar(cs, ax=ax1, format='%.4e')But then each label is scientifically notated with the base.If numbers are large enough, the colobar automatically reduces them to scientific notation, placing the base (i.e. x10^n) at the top of the color bar, leaving only the coefficient numbers as labels.You can do this with a standard axis with the following:ax.ticklabel_format(style='sci', axis='y', scilimits=(0,0))Is there an equivalent method for matplotlib.basemap colorbars, or perhaps a standard matplotlib colorbar?
There's no one-line method, but you can do this by updating the colorbar's formatter and then calling colorbar.update_ticks(). import numpy as npimport matplotlib.pyplot as pltz = np.random.random((10,10))fig, ax = plt.subplots()im = ax.imshow(z)cb = fig.colorbar(im)cb.formatter.set_powerlimits((0, 0))cb.update_ticks()plt.show()The reason for the slightly odd way of doing things is that a colorbar actually has statically assigned ticks and ticklabels. The colorbar's axes (colorbar.ax) actually always ranges between 0 and 1. (Therefore, altering colorbar.ax.yaxis.formatter doesn't do anything useful.) The tick positions and labels are calculated from colorbar.locator and colorbar.formatter and are assigned when the colorbar is created. Therefore, if you need precise control over a colorbar's ticks/ticklables, you need to explicitly call colorbar.update_ticks() after customizing how the ticks are displayed. The colorbar's convenience functions do this for you behind the scenes, but as far as I know, what you want can't be done through another method.
Python send bash command output by query string I am a beginner to Python so please bear with me. I need my python program to accept incoming data (stdin) from a command (ibeacon scan -b) and send that data by query string to my server. I using Raspbian on a raspberry pi. The ibeacon_scan command output looks like this. iBeacon Scan ...3F234454-CFD-4A0FF-ADF2-F4911BA9FFA6 1 4 -71 -693F234454-CFD-4A0FF-ADF2-F4911BA9FFA6 6 2 -71 -633F234454-CFD-4A0FF-ADF2-F4911BA9FFA6 1 4 -71 -693F234454-CFD-4A0FF-ADF2-F4911BA9FFA6 5 7 -71 -64...keeps updatingI'm pipping the command to the python script.ibeacon scan -b > python.py &Here is the outline of what I think could work. I need help organizing the code correctly.import httplib, urllib, fileinputfor line in fileinput(): params = urllib.urlencode({'@UUID': 12524, '@Major': 1, '@Minor': 2, '@Power': -71, '@RSSI': -66}) headers = {"Content-type": "application/x-www-form-urlencoded","Accept": "text/plain"} conn = httplib.HTTPConnection("www.example.com") conn.request("POST", "", params, headers) response = conn.getresponse() print response.status, response.reason data = response.read() data conn.close()I know there are a lot of problems with this and I could really use any advice on any of this. Thank you for your time!
There are several problems in your code.

1. Wrong bash command: At the moment you are redirecting the output into your python.py file, which is completely wrong. You should use a pipe and execute the python.py script. Your command should look like this:

ibeacon scan -b | ./python.py &

Make sure your python.py script is executable (chmod). As an alternative you could also try this:

ibeacon scan -b | python python.py &

2. Wrong use of fileinput: Your for loop should look like this:

for line in fileinput.input():
    ...
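Putting both fixes together, here is a minimal sketch of what python.py could look like (Python 2, matching the question's httplib/urllib imports). The field order UUID, Major, Minor, Power, RSSI is an assumption based on the sample output, and www.example.com is a placeholder host:

#!/usr/bin/env python
# Minimal sketch -- assumes each data line looks like
# "<UUID> <major> <minor> <power> <rssi>" as in the sample output.
import httplib, urllib, fileinput

for line in fileinput.input():
    fields = line.split()
    if len(fields) != 5:        # skip the "iBeacon Scan ..." header and blank lines
        continue
    uuid, major, minor, power, rssi = fields
    params = urllib.urlencode({'@UUID': uuid, '@Major': major, '@Minor': minor,
                               '@Power': power, '@RSSI': rssi})
    headers = {"Content-type": "application/x-www-form-urlencoded",
               "Accept": "text/plain"}
    conn = httplib.HTTPConnection("www.example.com")   # placeholder host
    conn.request("POST", "", params, headers)
    response = conn.getresponse()
    print response.status, response.reason
    conn.close()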
How to install wexpect? I'm running 32-bit Windows XP and trying to have Matlab communicate with Cgate, a command line program. I'd like to make this happen using wexpect, which is a port of Python's module pexpect to Windows. I'm having trouble installing or importing wexpect though. I've put wexpect in the folder Lib, along with all other modules. I can import those other modules but just not wexpect. Commands I've tried include:

import wexpect
import wexpect.py
python wexpect.py install
python wexpect.py install --home=~
wexpect install

Does anyone have any more ideas?
I have created a Github repo and PyPI project for wexpect. So now wexpect can be installed with:

pip install wexpect
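Once installed, wexpect is used much like pexpect is on Unix. A minimal sketch, assuming the pexpect-style spawn/expect/sendline API; the command and prompt below are placeholders to be replaced with the real Cgate invocation and its prompt:

import wexpect

# Placeholder command and prompt -- substitute the actual Cgate command line.
child = wexpect.spawn('cmd.exe')
child.expect('>')             # wait for the prompt
child.sendline('echo hello')
child.expect('>')
print(child.before)           # output captured before the prompt reappeared
child.sendline('exit')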
How do I disable and then re-enable a warning? I'm writing some unit tests for a Python library and would like certain warnings to be raised as exceptions, which I can easily do with the simplefilter function. However, for one test I'd like to disable the warning, run the test, then re-enable the warning.

I'm using Python 2.6, so I'm supposed to be able to do that with the catch_warnings context manager, but it doesn't seem to work for me. Even failing that, I should also be able to call resetwarnings and then re-set my filter.

Here's a simple example which illustrates the problem:

>>> import warnings
>>> warnings.simplefilter("error", UserWarning)
>>>
>>> def f():
...     warnings.warn("Boo!", UserWarning)
...
>>>
>>> f() # raises UserWarning as an exception
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 2, in f
UserWarning: Boo!
>>>
>>> f() # still raises the exception
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 2, in f
UserWarning: Boo!
>>>
>>> with warnings.catch_warnings():
...     warnings.simplefilter("ignore")
...     f() # no warning is raised or printed
...
>>>
>>> f() # this should raise the warning as an exception, but doesn't
>>>
>>> warnings.resetwarnings()
>>> warnings.simplefilter("error", UserWarning)
>>>
>>> f() # even after resetting, I'm still getting nothing
>>>

Can someone explain how I can accomplish this?

EDIT: Apparently this is a known bug: http://bugs.python.org/issue4180
Reading through the docs a few times and poking around the source and shell, I think I've figured it out. The docs could probably be improved to make the behavior clearer.

The warnings module keeps a registry at __warningregistry__ to keep track of which warnings have been shown. If a warning (message) is not listed in the registry before the 'error' filter is set, any calls to warn() will not result in the message being added to the registry. Also, the warning registry does not appear to be created until the first call to warn:

>>> import warnings
>>> __warningregistry__
------------------------------------------------------------
Traceback (most recent call last):
  File "<ipython console>", line 1, in <module>
NameError: name '__warningregistry__' is not defined

>>> warnings.simplefilter('error')
>>> __warningregistry__
------------------------------------------------------------
Traceback (most recent call last):
  File "<ipython console>", line 1, in <module>
NameError: name '__warningregistry__' is not defined

>>> warnings.warn('asdf')
------------------------------------------------------------
Traceback (most recent call last):
  File "<ipython console>", line 1, in <module>
UserWarning: asdf

>>> __warningregistry__
{}

Now if we ignore warnings, they will get added to the warnings registry:

>>> warnings.simplefilter("ignore")
>>> warnings.warn('asdf')
>>> __warningregistry__
{('asdf', <type 'exceptions.UserWarning'>, 1): True}

>>> warnings.simplefilter("error")
>>> warnings.warn('asdf')
>>> warnings.warn('qwerty')
------------------------------------------------------------
Traceback (most recent call last):
  File "<ipython console>", line 1, in <module>
UserWarning: qwerty

So the error filter will only apply to warnings that aren't already in the warnings registry. To make your code work you'll need to clear the appropriate entries out of the warnings registry when you're done with the context manager (or, in general, any time after you've used the ignore filter and want a previously used message to be picked up by the error filter). Seems a bit unintuitive...
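To illustrate the workaround described above, here is a minimal sketch: after the catch_warnings block, clear the entry that the "ignore" filter left behind in the module's __warningregistry__ so the "error" filter applies to that message again. This assumes the warning is issued from the same module, as in the question's example:

import warnings

warnings.simplefilter("error", UserWarning)

def f():
    warnings.warn("Boo!", UserWarning)

with warnings.catch_warnings():
    warnings.simplefilter("ignore")
    f()                     # silently recorded in __warningregistry__

# Remove the registry entries so the message is no longer treated as "already shown".
globals().get("__warningregistry__", {}).clear()

f()                         # raises UserWarning again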
How to check the existence of a row in SQLite with Python? I have the cursor with the query statement as follows:

cursor.execute("select rowid from components where name = ?", (name,))

I want to check whether a component with that name exists and return the result to a Python variable. How do I do that?
Since the names are unique, I really favor your (the OP's) method of using fetchone or Alex Martelli's method of using SELECT count(*) over my initial suggestion of using fetchall.

fetchall wraps the results (typically multiple rows of data) in a list. Since the names are unique, fetchall returns either a list with just one tuple in it (e.g. [(rowid,),]) or an empty list []. If you desire to know the rowid, then using fetchall requires you to burrow through the list and tuple to get to the rowid. Using fetchone is better in this case since you get just one row, (rowid,) or None. To get at the rowid (provided there is one) you just have to pick off the first element of the tuple.

If you don't care about the particular rowid and you just want to know there is a hit, then you could use Alex Martelli's suggestion, SELECT count(*), which would return either (1,) or (0,).

Here is some example code:

First, some boiler-plate code to set up a toy sqlite table:

import sqlite3

connection = sqlite3.connect(':memory:')
cursor = connection.cursor()
cursor.execute('create table components (rowid int, name varchar(50))')
cursor.execute('insert into components values(?,?)', (1, 'foo',))

Using fetchall:

for name in ('bar', 'foo'):
    cursor.execute("SELECT rowid FROM components WHERE name = ?", (name,))
    data = cursor.fetchall()
    if len(data) == 0:
        print('There is no component named %s' % name)
    else:
        print('Component %s found with rowids %s' % (name, ','.join(map(str, next(zip(*data))))))

yields:

There is no component named bar
Component foo found with rowids 1

Using fetchone:

for name in ('bar', 'foo'):
    cursor.execute("SELECT rowid FROM components WHERE name = ?", (name,))
    data = cursor.fetchone()
    if data is None:
        print('There is no component named %s' % name)
    else:
        print('Component %s found with rowid %s' % (name, data[0]))

yields:

There is no component named bar
Component foo found with rowid 1

Using SELECT count(*):

for name in ('bar', 'foo'):
    cursor.execute("SELECT count(*) FROM components WHERE name = ?", (name,))
    data = cursor.fetchone()[0]
    if data == 0:
        print('There is no component named %s' % name)
    else:
        print('Component %s found in %s row(s)' % (name, data))

yields:

There is no component named bar
Component foo found in 1 row(s)
Seaborn FacetGrid multiple page pdf plotting I'm trying to create a multi-page pdf using FacetGrid from this example (https://seaborn.pydata.org/examples/many_facets.html). There are 20 grid plots and I want to save the first 10 grids on the first page of the pdf and the second 10 grids on the second page. I got the idea of creating a multi-page pdf file from this (Export huge seaborn chart into pdf with multiple pages). That example works with sns.catplot(), but in my case (sns.FacetGrid) the output pdf file has two pages and each page has all 20 grids instead of 10 grids per page.

import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Create a dataset with many short random walks
rs = np.random.RandomState(4)
pos = rs.randint(-1, 2, (20, 5)).cumsum(axis=1)
pos -= pos[:, 0, np.newaxis]
step = np.tile(range(5), 20)
walk = np.repeat(range(20), 5)
df = pd.DataFrame(np.c_[pos.flat, step, walk],
                  columns=["position", "step", "walk"])

# plotting FacetGrid
def grouper(iterable, n, fillvalue=None):
    from itertools import zip_longest
    args = [iter(iterable)] * n
    return zip_longest(*args, fillvalue=fillvalue)

from matplotlib.backends.backend_pdf import PdfPages

with PdfPages("output.pdf") as pdf:
    N_plots_per_page = 10
    for cols in grouper(df["walk"].unique(), N_plots_per_page):
        # Initialize a grid of plots with an Axes for each walk
        grid = sns.FacetGrid(df, col="walk", hue="walk", palette="tab20c",
                             col_wrap=2, height=1.5)

        # Draw a horizontal line to show the starting point
        grid.map(plt.axhline, y=0, ls=":", c=".5")

        # Draw a line plot to show the trajectory of each random walk
        grid.map(plt.plot, "step", "position", marker="o")

        # Adjust the tick positions and labels
        grid.set(xticks=np.arange(5), yticks=[-3, 3],
                 xlim=(-.5, 4.5), ylim=(-3.5, 3.5))

        # Adjust the arrangement of the plots
        grid.fig.tight_layout(w_pad=1)

        pdf.savefig()
You are missing the col_order=cols argument to the grid = sns.FacetGrid(...) call.
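With that argument each page only draws the facets listed in cols. A minimal sketch of the loop with the fix applied; filtering out the None padding that grouper's zip_longest adds to the last group is an extra precaution I've added:

with PdfPages("output.pdf") as pdf:
    N_plots_per_page = 10
    for cols in grouper(df["walk"].unique(), N_plots_per_page):
        cols = [c for c in cols if c is not None]   # drop zip_longest fill values
        grid = sns.FacetGrid(df, col="walk", hue="walk", palette="tab20c",
                             col_wrap=2, height=1.5, col_order=cols)
        grid.map(plt.axhline, y=0, ls=":", c=".5")
        grid.map(plt.plot, "step", "position", marker="o")
        grid.set(xticks=np.arange(5), yticks=[-3, 3],
                 xlim=(-.5, 4.5), ylim=(-3.5, 3.5))
        grid.fig.tight_layout(w_pad=1)
        pdf.savefig(grid.fig)   # save this page's figure explicitly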
How to make a dataframe download through browser using python I have a function which generates a dataframe and exports it as an Excel sheet at the end of the function:

df.to_excel('response.xlsx')

This Excel file is being saved in my working directory. Now I'm hosting this as a Streamlit web app on Heroku, but I want this Excel file to be downloaded to the user's local disk (a normal browser download) once this function is called. Is there a way to do it?
Snehan Kekre, from streamlit, wrote the following solution in this thread.

import streamlit as st
import pandas as pd
import io
import base64
import os
import json
import pickle
import uuid
import re

def download_button(object_to_download, download_filename, button_text, pickle_it=False):
    """
    Generates a link to download the given object_to_download.

    Params:
    ------
    object_to_download:  The object to be downloaded.
    download_filename (str): filename and extension of file. e.g. mydata.csv, some_txt_output.txt
    download_link_text (str): Text to display for download link.
    button_text (str): Text to display on download button (e.g. 'click here to download file')
    pickle_it (bool): If True, pickle file.

    Returns:
    -------
    (str): the anchor tag to download object_to_download

    Examples:
    --------
    download_link(your_df, 'YOUR_DF.csv', 'Click to download data!')
    download_link(your_str, 'YOUR_STRING.txt', 'Click to download text!')
    """
    if pickle_it:
        try:
            object_to_download = pickle.dumps(object_to_download)
        except pickle.PicklingError as e:
            st.write(e)
            return None

    else:
        if isinstance(object_to_download, bytes):
            pass

        elif isinstance(object_to_download, pd.DataFrame):
            # object_to_download = object_to_download.to_csv(index=False)
            towrite = io.BytesIO()
            object_to_download = object_to_download.to_excel(towrite, encoding='utf-8', index=False, header=True)
            towrite.seek(0)

        # Try JSON encode for everything else
        else:
            object_to_download = json.dumps(object_to_download)

    try:
        # some strings <-> bytes conversions necessary here
        b64 = base64.b64encode(object_to_download.encode()).decode()

    except AttributeError as e:
        b64 = base64.b64encode(towrite.read()).decode()

    button_uuid = str(uuid.uuid4()).replace('-', '')
    button_id = re.sub('\d+', '', button_uuid)

    custom_css = f"""
        <style>
            #{button_id} {{
                display: inline-flex;
                align-items: center;
                justify-content: center;
                background-color: rgb(255, 255, 255);
                color: rgb(38, 39, 48);
                padding: .25rem .75rem;
                position: relative;
                text-decoration: none;
                border-radius: 4px;
                border-width: 1px;
                border-style: solid;
                border-color: rgb(230, 234, 241);
                border-image: initial;
            }}
            #{button_id}:hover {{
                border-color: rgb(246, 51, 102);
                color: rgb(246, 51, 102);
            }}
            #{button_id}:active {{
                box-shadow: none;
                background-color: rgb(246, 51, 102);
                color: white;
            }}
        </style>
    """

    dl_link = custom_css + f'<a download="{download_filename}" id="{button_id}" href="data:application/vnd.openxmlformats-officedocument.spreadsheetml.sheet;base64,{b64}">{button_text}</a><br></br>'

    return dl_link


vals = ['A', 'B', 'C']
df = pd.DataFrame(vals, columns=["Title"])

filename = 'my-dataframe.xlsx'
download_button_str = download_button(df, filename, f'Click here to download {filename}', pickle_it=False)
st.markdown(download_button_str, unsafe_allow_html=True)

I'd recommend searching the thread on that discussion forum. There seem to be at least 3-4 alternatives to this code.