Python Iterating 2D Array, Return Array Value I have created a 2D 10x10 Array. using Numpy I want to iterate over the array as efficiently as possible.However I would like to return the array values. essentially iterating over the 10x10 array 10 times and return a 1x10 array each time. import datetime import numpy as np import random start = datetime.datetime.now() a = np.random.uniform(low=-1, high=1, size=(10,10)) print("Time :",datetime.datetime.now() - start) for x in np.nditer(a): print(x)the result is as follows:0.57389947777175370.249884084109107670.83918278316826570.00159758458305692130.544774598405690.14091622639476165-0.36517132895234106-0.06311125453484467-0.6572544506539948...100 timesHowever I would expect the result to be:[0.5738994777717537,0.24988408410910767,0.8391827831682657,0.0015975845830569213,0.54477459840569,0.14091622639476165,-0.36517132895234106,-0.06311125453484467,-0.6572544506539948],[...]...10 timesAny help would be appreciated!
To directly answer your question, this does exactly what you want:import numpy as npa = np.random.uniform(low=-1, high=1, size=(10,10))print(','.join([str(list(x)) for x in a]))This will print[-0.2403881196886386, ... , 0.8518165986395723],[-0.2403881196886386, ... , 0.8518165986395723], ..., [-0.2403881196886386, ... , 0.8518165986395723]The reason you're printing just the elements of the array is due to the way nditer works. nditer iterates over single elements, even at a multidimensional level, whereas you want to iterate over just the first dimension of the array. For that, for x in a: works as intended.EditHere is a good link if you want to read up on how nditer works: https://docs.scipy.org/doc/numpy/reference/arrays.nditer.html#arrays-nditer
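A minimal sketch of that fix, assuming the same 10x10 array a as in the question: iterating over the array itself yields one row (a length-10 1-D array) per step, unlike np.nditer, which yields individual scalars.

```python
import numpy as np

a = np.random.uniform(low=-1, high=1, size=(10, 10))

# Iterating over the first dimension gives one 1-D row of length 10 per step.
for row in a:
    print(list(row))
```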
TensorFlow tf.squared_difference is showing a value error with a dense layer Whenever I multiply two tensors and then feed them as an input to a dense layer, it works perfectly. But when I try to calculate the squared difference between them, it shows me an error.# working wellout= multiply([user, book]) result = Dense(1, activation='sigmoid', kernel_initializer=initializers.lecun_normal(), name='prediction')(out)# errorout= tf.reduce_sum(tf.squared_difference(user, book),1)result = Dense(1, activation='sigmoid', kernel_initializer=initializers.lecun_normal(), name='prediction')(out)Here is the error I get:Input 0 is incompatible with layer prediction: expected min_ndim=2, found ndim=1
You probably need to pass keepdims=True argument to reduce_sum function in order to keep the dimensions with length 1 (otherwise, the shape of out would be (batch_size), whereas the Dense layer expects (batch_size, N)):out= tf.reduce_sum(tf.squared_difference(user, book), axis=1, keepdims=True)Update: The input of Keras layers must be the output of other Keras layers. Therefore, if you want to use TensorFlow operations, you need to wrap them inside a Lambda layer in Keras. For example:from keras.layers import Lambdaout = Lambda(lambda x: tf.reduce_sum(tf.squared_difference(x[0], x[1]), axis=1, keepdims=True))([user, book])
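A rough end-to-end sketch of the Lambda approach, not the asker's exact model: user and book are stand-in Input layers with assumed shapes, and in newer TensorFlow the op is spelled tf.math.squared_difference.

```python
import tensorflow as tf
from keras.layers import Input, Dense, Lambda
from keras.models import Model

user = Input(shape=(32,))  # placeholder embedding inputs; shapes are assumptions
book = Input(shape=(32,))

# Wrap the raw TensorFlow ops in a Lambda layer so Keras receives a layer output
# of shape (batch_size, 1) instead of a bare tensor of shape (batch_size,).
out = Lambda(lambda x: tf.reduce_sum(
    tf.math.squared_difference(x[0], x[1]), axis=1, keepdims=True))([user, book])

result = Dense(1, activation='sigmoid', name='prediction')(out)
model = Model(inputs=[user, book], outputs=result)
```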
How to generate random categorical data in Python according to a probability distribution? I am trying to generate a random column of a categorical variable from an existing column to create some synthesized data. For example, if my column has 3 values 0, 1, 2, with 0 appearing 50% of the time and 1 and 2 appearing 30% and 20% of the time, I want my new random column to have similar (but not the same) proportions as well. There is a similar question on Cross Validated that has been solved using R: https://stats.stackexchange.com/questions/14158/how-to-generate-random-categorical-data. However, I would like a Python solution for this.
Use np.random.choice() and specify a vector of probabilities corresponding to the chosen-from arrray:>>> import numpy as np >>> np.random.seed(444) >>> data = np.random.choice( ... a=[0, 1, 2], ... size=50, ... p=[0.5, 0.3, 0.2] ... ) >>> data array([2, 2, 1, 1, 0, 0, 0, 0, 0, 0, 2, 2, 0, 1, 0, 0, 0, 0, 2, 1, 0, 1, 1, 1, 0, 2, 1, 1, 2, 1, 1, 0, 0, 0, 0, 2, 0, 1, 0, 2, 0, 2, 2, 2, 1, 1, 1, 0, 0, 1])>>> np.bincount(data) / len(data) # Proportions array([0.44, 0.32, 0.24])As your sample size increases, the empirical frequencies should converge towards your targets:>>> a_lot_of_data = np.random.choice( ... a=[0, 1, 2], ... size=500_000, ... p=[0.5, 0.3, 0.2] ... )>>> np.bincount(a_lot_of_data) / len(a_lot_of_data) array([0.499716, 0.299602, 0.200682])As noted by @WarrenWeckesser, if you already have the 1d NumPy array or Pandas Series, you can use that directly as the input without specifying p. The default of np.random.choice() is to sample with replacement (replace=True), so by passing your original data, the resulting distribution should approximate that of the input.
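A short sketch of that last point, with a hypothetical column name col: sampling directly from the observed values (with replacement, the default) reproduces the empirical proportions without ever writing down p.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'col': [0, 0, 0, 0, 0, 1, 1, 1, 2, 2]})  # roughly 50/30/20

# Draw a synthetic column from the observed values; replace=True is the default.
synthetic = np.random.choice(df['col'].values, size=len(df))
print(pd.Series(synthetic).value_counts(normalize=True))
```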
How to sum specific columns in pandas I am trying to find the average specific columns of my csv file which has been read into a Dataframe by pandas. I would like to find the mean for 2018 Jul to 2018 Sep and then display them.Variable | 2018 Jul | 2018 Aug | 2018 Sep | 2018 Oct | 2018 Nov | 2018 Dec | ....GDP | 100 | 200 | 300 | 400 | 500 | 600 | ....I have tried to use this code but end up with 'Nan'vam2['2018 Jul-Sep'] = vam2.iloc[0:1, :2].mean()vam2I believe that '2018 Jul-Sep' should be 200 after finding the mean.Variable | 2018 Jul | 2018 Aug | 2018 Sep | 2018 Oct | 2018 Nov | 2018 Dec | 2018 Jul-Sep | ....GDP | 100 | 200 | 300 | 400 | 500 | 600 | 200 | ....
I think 0:1 should be removed if need mean of all rows and add axis=1 to mean per rows:If Variable is column:#for convert to numericvam2.iloc[:, 1:] = vam2.iloc[:, 1:].apply(pd.to_numeric, errors='coerce')vam2['2018 Jul-Sep'] = vam2.iloc[:, 1:4].mean(axis=1)print (vam2) Variable 2018 Jul 2018 Aug 2018 Sep 2018 Oct 2018 Nov 2018 Dec \0 GDP 100 200 300 400 500 600 2018 Jul-Sep 0 200.0 If Variable is index:vam2 = vam2.apply(pd.to_numeric, errors='coerce')vam2['2018 Jul-Sep'] = vam2.iloc[:, :3].mean(axis=1)print (vam2) 2018 Jul 2018 Aug 2018 Sep 2018 Oct 2018 Nov 2018 Dec \Variable GDP 100 200 300 400 500 600 2018 Jul-Sep Variable GDP 200.0
Probability Density Function using pandas data I would like to model the probability of an event occurring given the existence of the previous event.To give you more context, I plan to group my data by anonymous_id, sort the values of the grouped dataset by timestamp (ts) and calculate the probability of the sequence of sources (utm_source) the person goes through. The person is represented by a unique anonymous_id. So the desired end goal is the probability of someone who came from a Facebook source to then come through from a Google source etcI have been told that a package such as sci.py gaussian_kde would be useful for this. However, from playing around with it, this requires numerical inputs.test_sample = test_sample.groupby('anonymous_id').apply(lambda x: x.sort_values(['ts'])).reset_index(drop=True)and not sure what to try next.I have also tried this, but i don't think that it makes much sense:stats.gaussian_kde(test_two['utm_source'])Here is a sample of my data {'Unnamed: 0': {0: 0, 1: 1, 2: 2, 3: 3, 4: 4, 5: 5, 6: 6, 7: 7, 8: 8, 9: 9}, 'anonymous_id': {0: '0000f8ea-3aa6-4423-9247-1d9580d378e1', 1: '00015d49-2cd8-41b1-bbe7-6aedbefdb098', 2: '0002226e-26a4-4f55-9578-2eff2999de7e', 3: '00022b83-240e-4ef9-aaad-ac84064bb902', 4: '00022b83-240e-4ef9-aaad-ac84064bb902', 5: '00022b83-240e-4ef9-aaad-ac84064bb902', 6: '00022b83-240e-4ef9-aaad-ac84064bb902', 7: '00022b83-240e-4ef9-aaad-ac84064bb902', 8: '00022b83-240e-4ef9-aaad-ac84064bb902', 9: '0002ed69-4aff-434d-a626-fc9b20ef1b02'}, 'ts': {0: '2018-04-11 06:59:20.206000', 1: '2019-05-18 05:59:11.874000', 2: '2018-09-10 18:19:25.260000', 3: '2017-10-11 08:20:18.092000', 4: '2017-10-11 08:20:31.466000', 5: '2017-10-11 08:20:37.345000', 6: '2017-10-11 08:21:01.322000', 7: '2017-10-11 08:21:14.145000', 8: '2017-10-11 08:23:47.526000', 9: '2019-06-12 10:42:50.401000'}, 'utm_source': {0: nan, 1: 'facebook', 2: 'facebook', 3: 'google', 4: nan, 5: 'facebook', 6: 'google', 7: 'adwords', 8: 'youtube', 9: nan}, 'rank': {0: 1, 1: 1, 2: 1, 3: 1, 4: 2, 5: 3, 6: 4, 7: 5, 8: 6, 9: 1}}Note: i converted the dataframe to a dictionary
Here is one way you can do it (if I understand correctly):from itertools import chainfrom collections import Countergroups = (df .sort_values(by='ts') .dropna() .groupby('anonymous_id').utm_source .agg(list) .reset_index())groups['transitions'] = groups.utm_source.apply(lambda x: list(zip(x,x[1:])))all_transitions = Counter(chain(*groups.transitions.tolist()))Which gives you (on your example data):In [42]: all_transitionsOut[42]:Counter({('google', 'facebook'): 1, ('facebook', 'google'): 1, ('google', 'adwords'): 1, ('adwords', 'youtube'): 1})Or are you looking for something different?
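If the goal is probabilities rather than raw counts, one possible follow-up (assuming the all_transitions Counter built above) is to normalise each count by the total number of transitions leaving that source:

```python
from collections import defaultdict

totals = defaultdict(int)
for (src, dst), n in all_transitions.items():
    totals[src] += n

# P(next source is dst | current source is src)
probs = {(src, dst): n / totals[src] for (src, dst), n in all_transitions.items()}
```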
Chinese character insert issue I have the following dataframe in pandasneed to insert all value into a datawarehouse with chinese characters but chinese characters are instered as junk (?????) (百å¨è‹±åšï¼ˆèˆŸå±±ï¼‰å•¤é…’有é™å…¬å¸) like above oneThe insert query is prepared dynamically.I need help on how to handle the following scenerio:Read file as UTF-8 and writte into a datawarehouse using pyodbc connection using character set UTF-8.df=pd.read_csv(filename,dtype='str',encoding='UTF-8')cnxn = database_connect() ##Connect to database##cnxn.setencoding(ctype=pyodbc.SQL_CHAR, encoding='UTF-8')cnxn.autocommit = Truecursor = cnxn.cursor()for y in range(len(df)): inst='insert into '+tablename+' values (' for x in range(len(clm)): if str(df.iloc[y,x])=='nan': df.iloc[y,x]='' if x!=len(clm)-1: inst_val=inst_val+"'"+str(df.iloc[y,x]).strip().replace("'",'')+"'"+"," else: inst_val=inst_val+"'"+str(df.iloc[y,x]).strip().replace("'",'')+"'"+")" inst=inst+inst_val #########prepare insert statment from values inside in-memory data########### inst_val='' print("Inserting value into table") try: cursor.execute(inst) ##########Execute insert statement############## print("1 row inserted") except Exception as e: print (inst) print (e)same like value should inserted into sql datawarehouse
You are using dynamic SQL to construct string literals containing Chinese characters, but you are creating them asinsert into tablename values ('你好')when SQL Server expects Unicode string literals to be of the forminsert into tablename values (N'你好')You would be better off to use a proper parameterized query to avoid such issues:sql = "insert into tablename values (?)"params = ('你好',)cursor.execute(sql, params)
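A hedged sketch of inserting the whole DataFrame with a parameterized query instead of string building; it assumes the df, tablename and pyodbc cursor from the question, a target table whose column order matches the DataFrame, and a pyodbc version recent enough to support fast_executemany (optional).

```python
placeholders = ", ".join("?" * len(df.columns))
sql = "insert into " + tablename + " values (" + placeholders + ")"

params = df.where(df.notna(), None).values.tolist()  # NaN -> NULL

cursor.fast_executemany = True   # optional speed-up in newer pyodbc versions
cursor.executemany(sql, params)
```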
Custom sort for histogram After looking at countless questions and answers on how to do custom sorting of the bars in bar charts (or a histogram in my case) it seemed the answer was to sort the dataframe as desired and then do the plot, only to find that the plot ignores the data and blithely sorts alphabetically. There does not seem to be a simple option to turn sorting off, or just supply a list to the plot to sort by.Here's my sample codefrom matplotlib import pyplot as pltimport pandas as pd%matplotlib inlinediamonds = pd.DataFrame({'carat': [0.23, 0.21, 0.23, 0.24, 0.22], 'cut' : ['Ideal', 'Premium', 'Good', 'Very Good', 'Fair'], 'color': ['E', 'E', 'E', 'J', 'E'], 'clarity': ['SI2', 'SI1', 'VS1', 'VVS2', 'VS2'], 'depth': [61.5, 59.8, 56.9, 62.8, 65.1], 'table': [55, 61, 65, 57, 61], 'price': [326, 326, 327, 336, 337]})diamonds.set_index('cut', inplace=True)cuts_order = ['Fair','Good','Very Good','Premium','Ideal']df = pd.DataFrame(diamonds.loc[cuts_order].carat)df.reset_index(inplace=True)plt.hist(df.cut);This returns the 'cuts' in alphabetical order but not as sorted in the data. I was quite excited to have figured out a clever way of sorting the data, so much bigger the disappointment the plot is ignorant.What is the most straightforward way of doing this?Here's what I get with the above code:
A histogram was not the right plot here. With the following code the bars, sorted as desired, are created:from matplotlib import pyplot as pltimport pandas as pd%matplotlib inlinediamonds = pd.DataFrame({'carat': [0.23, 0.21, 0.23, 0.24, 0.22], 'cut' : ['Ideal', 'Premium', 'Good', 'Very Good', 'Fair'], 'color': ['E', 'E', 'E', 'J', 'E'], 'clarity': ['SI2', 'SI1', 'VS1', 'VVS2', 'VS2'], 'depth': [61.5, 59.8, 56.9, 62.8, 65.1], 'table': [55, 61, 65, 57, 61], 'price': [326, 326, 327, 336, 337]})cuts_order = ['Fair','Good','Very Good','Premium','Ideal']c_classes = pd.api.types.CategoricalDtype(ordered = True, categories = cuts_order)diamonds['cut'] = diamonds['cut'].astype(c_classes)to_plot = diamonds.cut.value_counts(sort=False)plt.bar(to_plot.index, to_plot.values)Side note, matplotlib 2.1.0 behaves differently because plt.bar will blithely ignore the sort order that it is given, I can only confirm this works with 3.0.3 (and hopefully higher).I also tried sorting the data by index but this does not take effect for some reason, looks like value_counts(sort=False) does not return values in the order it is found in the data:from matplotlib import pyplot as pltimport pandas as pd%matplotlib inlinediamonds = pd.DataFrame({'carat': [0.23, 0.21, 0.23, 0.24, 0.22], 'cut' : ['Ideal', 'Premium', 'Good', 'Very Good', 'Fair'], 'color': ['E', 'E', 'E', 'J', 'E'], 'clarity': ['SI2', 'SI1', 'VS1', 'VVS2', 'VS2'], 'depth': [61.5, 59.8, 56.9, 62.8, 65.1], 'table': [55, 61, 65, 57, 61], 'price': [326, 326, 327, 336, 337]})diamonds.set_index('cut', inplace=True)cuts_order = ['Fair','Good','Very Good','Premium','Ideal']diamonds = diamonds.loc[cuts_order]to_plot = diamonds.index.value_counts(sort=False)plt.bar(to_plot.index, to_plot.values)Seaborn is also an option as it potentially removes the dependency on the available matplotlib version:import pandas as pdimport seaborn as sb%matplotlib inlinediamonds = pd.DataFrame({'carat': [0.23, 0.21, 0.23, 0.24, 0.22], 'cut' : ['Ideal', 'Premium', 'Good', 'Very Good', 'Fair'], 'color': ['E', 'E', 'E', 'J', 'E'], 'clarity': ['SI2', 'SI1', 'VS1', 'VVS2', 'VS2'], 'depth': [61.5, 59.8, 56.9, 62.8, 65.1], 'table': [55, 61, 65, 57, 61], 'price': [326, 326, 327, 336, 337]})cuts_order = ['Fair','Good','Very Good','Premium','Ideal']c_classes = pd.api.types.CategoricalDtype(ordered = True, categories = cuts_order)diamonds['cut'] = diamonds['cut'].astype(c_classes)to_plot = diamonds.cut.value_counts(sort=False)ax = sb.barplot(data = diamonds, x = to_plot.index, y = to_plot.values)
Using pandas combine worksheets, iterate through a specific column, add rows to a new list I have an excel workbook with multiple worksheets that all have the same column headers. I want to iterate through one of the columns within each of the worksheets and add the rows to a new list (or column).Background: Each of the worksheets represents a different community of farmers and each column of each worksheet is a piece of demographic data. I have assigned a code to each of the farmers, and I would like to get all of these codes in a list. I know that I can do it manually in excel but am trying to use pandas, pythonAn example of one of the worksheets within the pruning.xlsx file looks like this:import pandas as pdimport numpy as npsheets_pt = pd.read_excel(r"C:\Users\RRF\Desktop\pruning.xlsx",sheetname=None)sheets_pt_read = pd.ExcelFile(r"C:\Users\RRF\Desktop\pruning.xlsx")sheetnames_read = sheets_pt_read.sheet_namescodelist = []for village in sheetnames_read: for code in sheets_pt[village]["Farmer Code"]: codelist.append(code)After running the code. I print the codelist and the Farmer Codes from the first 5 sheets print out. Then this error message below appears...This is the error message I get:KeyError Traceback (most recent call last)~\Anaconda3\lib\site-packages\pandas\core\indexes\base.py in get_loc(self, key, method, tolerance...KeyError: 'Farmer Code'Would be more than happy to share entire error message if anyone is interested.
import pandas as pdimport numpy as np# read excel file into notebook assign to pro2019pro2019 = pd.read_excel(path_to_file, sheet_name=None)# concatenate all of the worksheets within the file removing the index # from individual sheetsdf = pd.concat(pro2019, ignore_index=True)# create empty list to store farmer codespro_codelist = []# iterate through the df column titled "FARMER CODE"# append each code to pro_codelistfor code in df["FARMER CODE"]: pro_codelist.append(code)
Python equivalent of MATLAB statement A(B==1) = C I have three numpy arrays as follows:A = [1, 2, 3, 4, 5]B = [0, 1, 0, 0, 1]C = [30, 40]I would like to replace the elements of A whose corresponding element in B is equal to 1. For the above example I would like to get this:A = [1, 30, 3, 4, 40]In MATLAB, I can do this: A(B==1) = C'Do you know equivalent code in Python (preferably something that works when A and B are multidimensional too)? Thanks in advance.
The syntax is pretty similar:>>> import numpy as np>>> A = np.array([1, 2, 3, 4, 5])>>> B = np.array([0, 1, 0, 0, 1])>>> C = np.array([30, 40])>>> A[B==1] = C>>> Aarray([ 1, 30, 3, 4, 40])
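Continuing that snippet, the same idiom carries over to multidimensional arrays, which the question also asked about; masked elements are filled from C in row-major order.

```python
A2 = np.array([[1, 2],
               [3, 4]])
B2 = np.array([[0, 1],
               [1, 0]])
C2 = np.array([30, 40])

A2[B2 == 1] = C2   # elements where B2 == 1 are replaced in row-major order
print(A2)          # [[ 1 30]
                   #  [40  4]]
```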
reading millisecond data into pandas I have a file with data like this, and want to load it, and use timestamp column (which denotes milliseconds) as a DateTimeIndex. x y timestamp 0 50 90 125 37 87 234 37 87 344 37 87 453 37 87 562 26 78 656 26 78 766 26 78 875 26 78 984 30 77 when I specify timestamp as index, it becomes FloatIndexcur_df = pd.read_csv(cur_file, sep=',', comment='#', index_col = 'timestamp', parse_dates=True)EDIT:I added a function to parse dates, adding a dummy date:def convert_time(a): sec = int(math.floor(a/1000)) millisec = int(((a/1000.0)-int(math.floor(a/1000.0)))*1000) time = '2012-01-01 00:00:%d.%d' % (sec, millisec) return parser.parse(time)cur_df = pd.read_csv(cur_file, sep=',', comment='#', index_col = 'timestamp', parse_dates=True, date_parser=convert_time)now it works ok!i'd be grateful for any suggestions how could I accomplish this better ;)
Something similar, but simpler I think (python datetime.datetime uses microseconds, so therefore the factor 1000):In [12]: import datetimeIn [13]: def convert_time(a): ...: ms = int(a) ...: return datetime.datetime(2012, 1, 1, 0, 0, 0, ms*1000)In [14]: pd.read_csv(cur_file, sep=',', index_col = 'timestamp', parse_dates=True, date_parser=convert_time)Out[14]: x ytimestamp 2012-01-01 00:00:00 50 902012-01-01 00:00:00.125000 37 872012-01-01 00:00:00.234000 37 872012-01-01 00:00:00.344000 37 872012-01-01 00:00:00.453000 37 872012-01-01 00:00:00.562000 26 782012-01-01 00:00:00.656000 26 782012-01-01 00:00:00.766000 26 782012-01-01 00:00:00.875000 26 782012-01-01 00:00:00.984000 30 77
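In more recent pandas versions, a shorter alternative (a sketch, assuming the timestamp column holds integer milliseconds and reusing the same dummy start date) is to parse the column with pd.to_datetime and an explicit unit and origin, avoiding the custom date_parser entirely:

```python
import pandas as pd

df = pd.read_csv(cur_file, sep=',', comment='#')
df.index = pd.to_datetime(df.pop('timestamp'), unit='ms', origin='2012-01-01')
```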
Pandas: calculate mean and append mean to original frame I have a pandas DataFrame where the first column is a country label and the second column contains a number. Most countries are in the list multiple times. I want to do 2 operations:Calculate the mean for every countryAppend the mean of every country as a third column
Perform a groupby by 'Country' and use transform to apply a function to that group which will return an index aligned to the original dfdf.groupby('Country').transform('mean')See the online docs: http://pandas.pydata.org/pandas-docs/stable/groupby.html#transformation
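A small self-contained sketch of what that looks like in practice, with hypothetical column names 'Country' and 'Value':

```python
import pandas as pd

df = pd.DataFrame({'Country': ['DE', 'DE', 'FR'], 'Value': [1.0, 3.0, 5.0]})

# transform returns a result aligned to the original index, so it can be
# appended directly as a new column.
df['CountryMean'] = df.groupby('Country')['Value'].transform('mean')
print(df)
#   Country  Value  CountryMean
# 0      DE    1.0          2.0
# 1      DE    3.0          2.0
# 2      FR    5.0          5.0
```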
Download stocks data from google finance I'm trying to download data from Google Finance from a list of stocks symbols inside a .csv file.This is the class that I'm trying to adapt from this site:import urllib,time,datetimeimport csvclass Quote(object): DATE_FMT = '%Y-%m-%d' TIME_FMT = '%H:%M:%S' def __init__(self): self.symbol = '' self.date,self.time,self.open_,self.high,self.low,self.close,self.volume = ([] for _ in range(7)) def append(self,dt,open_,high,low,close,volume): self.date.append(dt.date()) self.time.append(dt.time()) self.open_.append(float(open_)) self.high.append(float(high)) self.low.append(float(low)) self.close.append(float(close)) self.volume.append(int(volume)) def append_csv(self, filename): with open(filename, 'a') as f: f.write(self.to_csv()) def __repr__(self): return self.to_csv() def get_symbols(self, filename): for line in open(filename,'r'): if line != 'codigo': print line q = GoogleQuote(line,'2014-01-01','2014-06-20') q.append_csv('data.csv')class GoogleQuote(Quote): ''' Daily quotes from Google. Date format='yyyy-mm-dd' ''' def __init__(self,symbol,start_date,end_date=datetime.date.today().isoformat()): super(GoogleQuote,self).__init__() self.symbol = symbol.upper() start = datetime.date(int(start_date[0:4]),int(start_date[5:7]),int(start_date[8:10])) end = datetime.date(int(end_date[0:4]),int(end_date[5:7]),int(end_date[8:10])) url_string = "http://www.google.com/finance/historical?q={0}".format(self.symbol) url_string += "&startdate={0}&enddate={1}&output=csv".format( start.strftime('%b %d, %Y'),end.strftime('%b %d, %Y')) csv = urllib.urlopen(url_string).readlines() csv.reverse()for bar in xrange(0,len(csv)-1): try: #ds,open_,high,low,close,volume = csv[bar].rstrip().split(',') #open_,high,low,close = [float(x) for x in [open_,high,low,close]] #dt = datetime.datetime.strptime(ds,'%d-%b-%y') #self.append(dt,open_,high,low,close,volume) data = csv[bar].rstrip().split(',') dt = datetime.datetime.strftime(data[0],'%d-%b-%y') close = data[4] self.append(dt,close) except: print "error " + str(len(csv)-1) print "error " + csv[bar]if __name__ == '__main__': q = Quote() # create a generic quote object q.get_symbols('list.csv')But, for some quotes, the code doesn't return all data (e.g. BIOM3), some fields return as '-'. How can I handle the split in these cases?For last, at some point of the script, it stops of download the data because the script stops, it doesn't return any message. How can I handle this problem?
It should work, but notice that the ticker should be: BVMF:ABRE11In [250]:import pandas.io.data as webimport datetimestart = datetime.datetime(2010, 1, 1)end = datetime.datetime(2013, 1, 27)df=web.DataReader("BVMF:ABRE11", 'google', start, end)print df.head(10) Open High Low Close Volume?Date 2011-07-26 19.79 19.79 18.30 18.50 18437002011-07-27 18.45 18.60 17.65 17.89 14751002011-07-28 18.00 18.50 18.00 18.30 4417002011-07-29 18.30 18.84 18.20 18.70 3928002011-08-01 18.29 19.50 18.29 18.86 2178002011-08-02 18.86 18.86 18.60 18.80 1546002011-08-03 18.90 18.90 18.00 18.00 1687002011-08-04 17.50 17.85 16.50 16.90 2387002011-08-05 17.00 17.00 15.63 16.00 2530002011-08-08 15.50 15.96 14.35 14.50 224300[10 rows x 5 columns]In [251]:df=web.DataReader("BVMF:BIOM3", 'google', start, end)print df.head(10) Open High Low Close Volume?Date 2010-01-04 2.90 2.90 2.90 2.90 02010-01-05 3.00 3.00 3.00 3.00 02010-01-06 3.01 3.01 3.01 3.01 02010-01-07 3.01 3.09 3.01 3.09 20002010-01-08 3.01 3.01 3.01 3.01 02010-01-11 3.00 3.00 3.00 3.00 02010-01-12 3.00 3.00 3.00 3.00 02010-01-13 3.00 3.10 3.00 3.00 70002010-01-14 3.00 3.00 3.00 3.00 02010-01-15 3.00 3.00 3.00 3.00 1000[10 rows x 5 columns]
TypeError when attempting cross validation in sklearn I really need some help but am new to programming so please forgive my general ignorance. I am trying to perform cross-validation on a data set using ordinary least squares regression from scikit as the estimator.Here is my code:from sklearn import cross_validation, linear_modelimport numpy as npX_digits = xY_digits = list(np.array(y).reshape(-1,))loo = cross_validation.LeaveOneOut(len(Y_digits))# Make sure it worksfor train_indices, test_indices in loo: print('Train: %s | test: %s' % (train_indices, test_indices))regr = linear_model.LinearRegression()[regr.fit(X_digits[train], Y_digits[train]).score(X_digits[test], Y_digits[test]) for train, test in loo]When I run this I get an error: **TypeError: only integer arrays with one element can be converted to an index**This should be referring to my x values which are lists of 0s and 1s - each list represents a categorical variable which has been encoded using OneHotEncoder.With this in mind - is there any advice on how to get around this problem?Fitting a regression estimator to this data seemed to work, although I got a lot of very large / odd looking coefficients. To be honest this whole journey into sklearn to attempt some kind of categorical linear regression has been totally fraught and I welcome any advice at this point.EDIT 2 sorry i tried another method and put that error callback up by mistake: ---------------------------------------------------------------------------TypeError Traceback (most recent call last)<ipython-input-9-be578cbe0327> in <module>() 16 regr = linear_model.LinearRegression() 17 ---> 18 [regr.fit(X_digits[train], Y_digits[train]).score(X_digits[test], Y_digits[test]) for train, test in loo]TypeError: only integer arrays with one element can be converted to an indexEDIT 3 adding an example of my independent variable (x) data:print x[1][ 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0.]EDIT 4 Attempt to convert lists to arrays, met with error:X_digits = np.array(x)Y_digits = np.array(y)---------------------------------------------------------------------------ValueError Traceback (most recent call last)<ipython-input-20-ea8b84f0005f> in <module>() 14 15 ---> 16 [regr.fit(X_digits[train], Y_digits[train]).score(X_digits[test], Y_digits[test]) for train, test in loo]C:\Program Files\Anaconda\lib\site-packages\sklearn\base.py in score(self, X, y) 320 321 from .metrics import r2_score--> 322 return r2_score(y, self.predict(X)) 323 324 C:\Program Files\Anaconda\lib\site-packages\sklearn\metrics\metrics.py in r2_score(y_true, y_pred) 2184 2185 if len(y_true) == 1:-> 2186 raise ValueError("r2_score can only be computed given more than one" 2187 " sample.") 2188 numerator = ((y_true - y_pred) ** 2).sum(dtype=np.float64)ValueError: r2_score can only be computed given more than one sample.
The cross-validation iterators return indices for use in indexing into numpy arrays, but your data are plain Python lists. Python lists don't support the fancy kinds of indexing that numpy arrays do. You're seeing this error because Python is trying to interpret train and test as something that it can use to index into a list, and is unable to do so. You need to use numpy arrays instead of lists for your X_digits and Y_digits. (Alternatively, you could extract the given indices with a list comprehension or the like, but since scikit is going to convert to numpy anyway, you might as well use numpy in the first place.)
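A minimal sketch of that fix, assuming x and y are the plain Python lists from the question:

```python
import numpy as np

X_digits = np.asarray(x)
Y_digits = np.asarray(y).ravel()

# numpy arrays accept the integer index arrays produced by LeaveOneOut,
# e.g. X_digits[train_indices] and Y_digits[test_indices] now work.
```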
Combining data from two dataframe columns into one column I have time series data in two separate DataFrame columns which refer to the same parameter but are of differing lengths. On dates where data only exist in one column, I'd like this value to be placed in my new column. On dates where there are entries for both columns, I'd like to have the mean value. (I'd like to join using the index, which is a datetime value)Could somebody suggest a way that I could combine my two columns? Thanks.Edit2: I written some code which should merge the data from both of my column, but I get a KeyError when I try to set the new values using my index generated from rows where my first df has values but my second df doesn't. Here's the code:def merge_func(df): null_index = df[(df['DOC_mg/L'].isnull() == False) & (df['TOC_mg/L'].isnull() == True)].index df['TOC_mg/L'][null_index] = df[null_index]['DOC_mg/L'] notnull_index = df[(df['DOC_mg/L'].isnull() == True) & (df['TOC_mg/L'].isnull() == False)].index df['DOC_mg/L'][notnull_index] = df[notnull_index]['TOC_mg/L'] df.insert(len(df.columns), 'Mean_mg/L', 0.0) df['Mean_mg/L'] = (df['DOC_mg/L'] + df['TOC_mg/L']) / 2 return dfmerge_func(sve)And here's the error:KeyError: "['2004-01-14T01:00:00.000000000+0100' '2004-03-04T01:00:00.000000000+0100'\n '2004-03-30T02:00:00.000000000+0200' '2004-04-12T02:00:00.000000000+0200'\n '2004-04-15T02:00:00.000000000+0200' '2004-04-17T02:00:00.000000000+0200'\n '2004-04-19T02:00:00.000000000+0200' '2004-04-20T02:00:00.000000000+0200'\n '2004-04-22T02:00:00.000000000+0200' '2004-04-26T02:00:00.000000000+0200'\n '2004-04-28T02:00:00.000000000+0200' '2004-04-30T02:00:00.000000000+0200'\n '2004-05-05T02:00:00.000000000+0200' '2004-05-07T02:00:00.000000000+0200'\n '2004-05-10T02:00:00.000000000+0200' '2004-05-13T02:00:00.000000000+0200'\n '2004-05-17T02:00:00.000000000+0200' '2004-05-20T02:00:00.000000000+0200'\n '2004-05-24T02:00:00.000000000+0200' '2004-05-28T02:00:00.000000000+0200'\n '2004-06-04T02:00:00.000000000+0200' '2004-06-10T02:00:00.000000000+0200'\n '2004-08-27T02:00:00.000000000+0200' '2004-10-06T02:00:00.000000000+0200'\n '2004-11-02T01:00:00.000000000+0100' '2004-12-08T01:00:00.000000000+0100'\n '2011-02-21T01:00:00.000000000+0100' '2011-03-21T01:00:00.000000000+0100'\n '2011-04-04T02:00:00.000000000+0200' '2011-04-11T02:00:00.000000000+0200'\n '2011-04-14T02:00:00.000000000+0200' '2011-04-18T02:00:00.000000000+0200'\n '2011-04-21T02:00:00.000000000+0200' '2011-04-25T02:00:00.000000000+0200'\n '2011-05-02T02:00:00.000000000+0200' '2011-05-09T02:00:00.000000000+0200'\n '2011-05-23T02:00:00.000000000+0200' '2011-06-07T02:00:00.000000000+0200'\n '2011-06-21T02:00:00.000000000+0200' '2011-07-04T02:00:00.000000000+0200'\n '2011-07-18T02:00:00.000000000+0200' '2011-08-31T02:00:00.000000000+0200'\n '2011-09-13T02:00:00.000000000+0200' '2011-09-28T02:00:00.000000000+0200'\n '2011-10-10T02:00:00.000000000+0200' '2011-10-25T02:00:00.000000000+0200'\n '2011-11-08T01:00:00.000000000+0100' '2011-11-28T01:00:00.000000000+0100'\n '2011-12-20T01:00:00.000000000+0100' '2012-01-19T01:00:00.000000000+0100'\n '2012-02-14T01:00:00.000000000+0100' '2012-03-13T01:00:00.000000000+0100'\n '2012-03-27T02:00:00.000000000+0200' '2012-04-02T02:00:00.000000000+0200'\n '2012-04-10T02:00:00.000000000+0200' '2012-04-17T02:00:00.000000000+0200'\n '2012-04-26T02:00:00.000000000+0200' '2012-04-30T02:00:00.000000000+0200'\n '2012-05-03T02:00:00.000000000+0200' '2012-05-07T02:00:00.000000000+0200'\n '2012-05-10T02:00:00.000000000+0200' 
'2012-05-14T02:00:00.000000000+0200'\n '2012-05-22T02:00:00.000000000+0200' '2012-06-05T02:00:00.000000000+0200'\n '2012-06-19T02:00:00.000000000+0200' '2012-07-03T02:00:00.000000000+0200'\n '2012-07-17T02:00:00.000000000+0200' '2012-07-31T02:00:00.000000000+0200'\n '2012-08-14T02:00:00.000000000+0200' '2012-08-28T02:00:00.000000000+0200'\n '2012-09-11T02:00:00.000000000+0200' '2012-09-25T02:00:00.000000000+0200'\n '2012-10-10T02:00:00.000000000+0200' '2012-10-24T02:00:00.000000000+0200'\n '2012-11-21T01:00:00.000000000+0100' '2012-12-18T01:00:00.000000000+0100'] not in index"
You are close, but you actually don't need to iterate over the rows when using the isnull() functions. by defaultdf[(df['DOC_mg/L'].isnull() == False) & (df['TOC_mg/L'].isnull() == True)].indexWill return just the index of the rows where DOC_mg/L is not null and TOC_mg/L is null.Now you can do something like this to set the values for TOC_mg/L:null_index = df[(df['DOC_mg/L'].isnull() == False) & \ (df['TOC_mg/L'].isnull() == True)].indexdf['TOC_mg/L'][null_index] = df['DOC_mg/L'][null_index] # EDIT To switch the index position.This will use the index of the rows where TOC_mg/L is null and DOC_mg/L is not null, and set the values for TOC_mg/L to the those found in DOC_mg/L in the same rows. Note: This is not the accepted way for setting values using an index, but it is how I've been doing it for some time. Just make sure that when setting values, the left side of the equation is df['col_name'][index]. If col_name and index are switched you will set the values to a copy which is never set back to the original.Now to set the mean, you can create a new column, we'll call this Mean_mg/L and set the value = 0.0. Then set this new column to the mean of both columns:# Insert a new col at the end of the dataframe columns name 'Mean_mg/L' # with default value 0.0df.insert(len(df.columns), 'Mean_mg/L', 0.0)# Set this columns value to the average of DOC_mg/L and TOC_mg/Ldf['Mean_mg/L'] = (df['DOC_mg/L'] + df['TOC_mg/L']) / 2In the columns where we filled null values with the corresponding column value, the average will be the same as the values.
Linear fit including all errors with NumPy/SciPy I have a lot of x-y data points with errors on y that I need to fit non-linear functions to. Those functions can be linear in some cases, but are more usually exponential decay, gauss curves and so on. SciPy supports this kind of fitting with scipy.optimize.curve_fit, and I can also specify the weight of each point. This gives me weighted non-linear fitting which is great. From the results, I can extract the parameters and their respective errors.There is just one caveat: The errors are only used as weights, but not included in the error. If I double the errors on all of my data points, I would expect that the uncertainty of the result increases as well. So I built a test case (source code) to test this.Fit with scipy.optimize.curve_fit gives me:Parameters: [ 1.99900756 2.99695535]Errors: [ 0.00424833 0.00943236]Same but with 2 * y_err:Parameters: [ 1.99900756 2.99695535]Errors: [ 0.00424833 0.00943236]Same but with 2 * y_err:So you can see that the values are identical. This tells me that the algorithm does not take those into account, but I think the values should be different.I read about another fit method here as well, so I tried to fit with scipy.odr as well:Beta: [ 2.00538124 2.95000413]Beta Std Error: [ 0.00652719 0.03870884]Same but with 20 * y_err:Beta: [ 2.00517894 2.9489472 ]Beta Std Error: [ 0.00642428 0.03647149]The values are slightly different, but I do think that this accounts for the increase in the error at all. I think that this is just rounding errors or a little different weighting.Is there some package that allows me to fit the data and get the actual errors? I have the formulas here in a book, but I do not want to implement this myself if I do not have to.I have now read about linfit.py in another question. This handles what I have in mind quite well. It supports both modes, and the first one is what I need.Fit with linfit:Parameters: [ 2.02600849 2.91759066]Errors: [ 0.00772283 0.04449971]Same but with 20 * y_err:Parameters: [ 2.02600849 2.91759066]Errors: [ 0.15445662 0.88999413]Fit with linfit(relsigma=True):Parameters: [ 2.02600849 2.91759066]Errors: [ 0.00622595 0.03587451]Same but with 20 * y_err:Parameters: [ 2.02600849 2.91759066]Errors: [ 0.00622595 0.03587451]Should I answer my question or just close/delete it now?
One way that works well and actually gives a better result is the bootstrap method. When data points with errors are given, one uses a parametric bootstrap and let each x and y value describe a Gaussian distribution. Then one will draw a point from each of those distributions and obtains a new bootstrapped sample. Performing a simple unweighted fit gives one value for the parameters.This process is repeated some 300 to a couple thousand times. One will end up with a distribution of the fit parameters where one can take mean and standard deviation to obtain value and error.Another neat thing is that one does not obtain a single fit curve as a result, but lots of them. For each interpolated x value one can again take mean and standard deviation of the many values f(x, param) and obtain an error band:Further steps in the analysis are then performed again hundreds of times with the various fit parameters. This will then also take into account the correlation of the fit parameters as one can see clearly in the plot above: Although a symmetric function was fitted to the data, the error band is asymmetric. This will mean that interpolated values on the left have a larger uncertainty than on the right.
How to join strings in pandas column based on a condition Given a dataframe: text binary1 apple 12 bee 03 cider 14 honey 0I would like to get 2 lists:one = [apple cider], zero = [bee honey]How do I join the strings in the 'text' column based on the group (1 or 0) they belong to in the column 'binary'?I wrote for loops to check for each row if binary is 1 or 0 then proceeded to append the text in the text column to a list but I was wondering if there's a more efficient way given that in pandas, we could join texts in columns by simply calling ' '.join(df.text). But how can we do it base on a condition? --Follow up Question -- binary text1 text2 text30 1 hello this table1 1 cider that chair2 0 bee how mouse3 0 winter bottle fanI would like to do the same thing but with multiple text columns. raw = defaultdict(list)raw['text1'] = ['hello','cider','bee','winter']raw['text2'] = ['this','that','how','bottle']raw['text3'] = ['table','chair','mouse','fan']raw['binary'] = [1,1,0,0]df= pd.DataFrame.from_dict(raw)text1 = df.groupby('binary').text1.apply(list)text2 = df.groupby('binary').text2.apply(list)text3 = df.groupby('binary').text3.apply(list)How can I write something like:for i in ['text1','text2','text3']: df.groupby('binary').i.apply(list)
UPDATE: Follow up Questionone list for each text* column grouped by binary columnIn [56]: df.set_index('binary').stack().groupby(level=[0,1]).apply(list).unstack()Out[56]: text1 text2 text3binary0 [bee, winter] [how, bottle] [mouse, fan]1 [hello, cider] [this, that] [table, chair]one list for all text columns grouped by binary columnIn [54]: df.set_index('binary').stack().groupby(level=0).apply(list)Out[54]:binary0 [bee, how, mouse, winter, bottle, fan]1 [hello, this, table, cider, that, chair]dtype: objectOLD answer:IIUC you can group by binary and apply list to grouped text column: In [8]: df.groupby('binary').text.apply(list)Out[8]:binary0 [bee, honey]1 [apple, cider]Name: text, dtype: objector:In [10]: df.groupby('binary').text.apply(list).reset_index()Out[10]: binary text0 0 [bee, honey]1 1 [apple, cider]
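For the follow-up loop in the question, one small sketch: use bracket indexing rather than attribute access, since df.groupby('binary').i would look for a literal column named 'i'.

```python
result = {col: df.groupby('binary')[col].apply(list)
          for col in ['text1', 'text2', 'text3']}
# result['text1'], result['text2'], result['text3'] hold the per-column lists
```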
Create a combined CSV file I have two CSV files, reviews_positive.csv and reviews_negative.csv. How can I combine them into one CSV file under the following condition: odd rows are filled with reviews from reviews_positive.csv and even rows with reviews from reviews_negative.csv? I am using Pandas. I need this specific order because I want to build a balanced dataset for training neural networks.
Here is a working example:from io import StringIOimport pandas as pdpos = """revabc"""neg = """revefghi"""pos_df = pd.read_csv(StringIO(pos))neg_df = pd.read_csv(StringIO(neg))SolutionUse pd.concat with the keys parameter to label the source dataframes as well as to preserve the desired order of positive first. Then we sort_index with parameter sort_remaining=Falsepd.concat( [pos_df, neg_df], keys=['pos', 'neg']).sort_index(level=1, sort_remaining=False) revpos 0 aneg 0 epos 1 bneg 1 fpos 2 cneg 2 g 3 h 4 iThat said, you don't have to interweave them to take balanced samples. You can use groupby with samplepd.concat( [pos_df, neg_df], keys=['pos', 'neg']).groupby(level=0).apply(pd.DataFrame.sample, n=3) revpos pos 1 b 2 c 0 aneg neg 1 f 4 i 3 h
Select indices in tensorflow that fulfils a certain condition I wish to select elements of a matrix where the coordinates of the elements in the matrix fulfil a certain condition. For example, a condition could be : (y_coordinate-x_coordinate) == -4So, those elements whose coordinates fulfil this condition will be selected. How can I do this efficiently without looping through every element?
Perhaps you need tf.gather_nd:iterSession = tf.InteractiveSession()vals = tf.constant([[1,2,3], [4,5,6], [7,8,9]])arr = tf.constant([[x, y] for x in range(3) for y in range(3) if -1 <= x - y <= 1])arr.eval()# >> array([[0, 0],# >> [0, 1],# >> [1, 0],# >> [1, 1],# >> [1, 2],# >> [2, 1],# >> [2, 2]], dtype=int32)tf.gather_nd(vals, arr).eval()# >> array([1, 2, 4, 5, 6, 8, 9], dtype=int32)Or tf.boolean_mask:iterSession = tf.InteractiveSession()vals = tf.constant([[1,2,3], [4,5,6], [7,8,9]])arr = tf.constant([[-1 <= x - y <= 1 for x in range(3)] for y in range(3)])arr.eval()# array([[ True, True, False],# [ True, True, True],# [False, True, True]], dtype=bool)tf.boolean_mask(vals, arr).eval()# array([ 1., 2., 4., 5., 6., 8., 9.], dtype=int32)
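A variant sketch that builds the coordinate mask with TensorFlow ops instead of a Python comprehension; it assumes a square matrix vals and that the condition is expressed on row index x and column index y.

```python
import tensorflow as tf

n = 3
vals = tf.constant([[1, 2, 3], [4, 5, 6], [7, 8, 9]])

rows, cols = tf.meshgrid(tf.range(n), tf.range(n), indexing='ij')
mask = tf.equal(cols - rows, -1)          # replace with your condition, e.g. y - x == -4
selected = tf.boolean_mask(vals, mask)    # here: elements just below the main diagonal
```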
How to use a TensorFlow LinearClassifier in Java In Python I've trained a TensorFlow LinearClassifier and saved it like:model = tf.contrib.learn.LinearClassifier(feature_columns=columns)model.fit(input_fn=train_input_fn, steps=100)model.export_savedmodel(export_dir, parsing_serving_input_fn)By using the TensorFlow Java API I am able to load this model in Java using:model = SavedModelBundle.load(export_dir, "serve");It seems I should be able to run the graph using something likemodel.session().runner().feed(???, ???).fetch(???, ???).run()but what variable names/data should I feed to/fetch from the graph to provide it features and to fetch the probabilities of the classes? The Java documentation is lacking this information as far as I can see.
The names of the nodes to feed would depend on what parsing_serving_input_fn does, in particular they should be the names of the Tensor objects that are returned by parsing_serving_input_fn. The names of the nodes to fetch would depend on what you're predicting (arguments to model.predict() if using your model from Python).That said, the TensorFlow saved model format does include the "signature" of the model (i.e., the names of all Tensors that can be fed or fetched) as metadata that can provide hints.From Python you can load the saved model and list out its signature using something like:with tf.Session() as sess: md = tf.saved_model.loader.load(sess, ['serve'], export_dir) sig = md.signature_def[tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY] print(sig)Which will print something like:inputs { key: "inputs" value { name: "input_example_tensor:0" dtype: DT_STRING tensor_shape { dim { size: -1 } } }}outputs { key: "scores" value { name: "linear/binary_logistic_head/predictions/probabilities:0" dtype: DT_FLOAT tensor_shape { dim { size: -1 } dim { size: 2 } } }}method_name: "tensorflow/serving/classify"Suggesting that what you want to do in Java is:Tensor t = /* Tensor object to be fed */model.session().runner().feed("input_example_tensor", t).fetch("linear/binary_logistic_head/predictions/probabilities").run()You can also extract this information purely within Java if your program includes the generated Java code for TensorFlow protocol buffers (packaged in the org.tensorflow:proto artifact) using something like this:// Same as tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY// in Python. Perhaps this should be an exported constant in TensorFlow's Java API.final String DEFAULT_SERVING_SIGNATURE_DEF_KEY = "serving_default"; final SignatureDef sig = MetaGraphDef.parseFrom(model.metaGraphDef()) .getSignatureDefOrThrow(DEFAULT_SERVING_SIGNATURE_DEF_KEY);You will have to add:import org.tensorflow.framework.MetaGraphDef;import org.tensorflow.framework.SignatureDef;Since the Java API and the saved-model-format are somewhat new, there is much room for improvement in the documentation.Hope that helps.
How to stop gradient of LSTMStateTuple in tensorflow I am running a basic lstm code for language modeling.But I don't want to do BPTT. I want to do something like tf.stop_gradient(state)with tf.variable_scope("RNN"): for time_step in range(N): if time_step > 0: tf.get_variable_scope().reuse_variables() (cell_output, state) = cell(inputs[:, time_step, :], state)However, state is LSTMStateTuple, so I tried:for lli in range(len(state)): print(state[lli].c, state[lli].h) state[lli].c = tf.stop_gradient(state[lli].c) state[lli].h = tf.stop_gradient(state[lli].h)But I got a AttributeError: can't set attribute error:File "/home/liyu-iri/IRRNNL/word-rnn/ptb/models/decoupling.py", line 182, in __init__state[lli].c = tf.stop_gradient(state[lli].c)AttributeError: can't set attributeI also tried to use tf.assign, but state[lli].c is not a variable.So, I wonder how could I stop gradient of LSTMStateTuple?Or, how could I stop BPTT? I only want to do BP in single frames.Thanks a lot!
I think this is a pure python question: LSTMStateTuple is just a collections.namedtuple and python doesn't allow you to assign elements there (as in other tuples). The solution is to create a fully new one, e.g. like in stopped_state = LSTMStateTuple(tf.stop_gradient(old_tuple.c), tf.stop_gradient(old_tuple.h)) and then use this (or a list of those) as the state. If you insist on replacing the existing tuple, I think namedtuple has an _replace method, see here, as in old_tuple._replace(c=tf.stop_gradient(...)). Hope that helps!
using shift() to compare row elements I have the sample data and code below where I'm trying to loop through the dataDF column with the function and find the first case of increasing values then return the Quarter value corresponding the the 1st increasing value from the dataDF column. I'm planning to use the function with apply, but I don't think I'm using shift() properly. If I just try to return dataDF.shift() I get an error. I'm new to python so any tips on how to compare a row to the next row or what I'm doing wrong with shift() are greatly appreciated. Sample Data: return dataDF.head(20).to_dict() {'Quarter': {246: '2008q3', 247: '2008q4', 248: '2009q1', 249: '2009q2', 250: '2009q3', 251: '2009q4', 252: '2010q1', 253: '2010q2', 254: '2010q3', 255: '2010q4', 256: '2011q1', 257: '2011q2', 258: '2011q3', 259: '2011q4', 260: '2012q1', 261: '2012q2', 262: '2012q3', 263: '2012q4', 264: '2013q1', 265: '2013q2'}, 'dataDF': {246: 14843.0, 247: 14549.9, 248: 14383.9, 249: 14340.4, 250: 14384.1, 251: 14566.5, 252: 14681.1, 253: 14888.6, 254: 15057.700000000001, 255: 15230.200000000001, 256: 15238.4, 257: 15460.9, 258: 15587.1, 259: 15785.299999999999, 260: 15973.9, 261: 16121.9, 262: 16227.9, 263: 16297.299999999999, 264: 16475.400000000001, 265: 16541.400000000001}}Code: def find_end(x): qrts = [] if (dataDF < dataDF.shift()): qrts.append(dataDF.iloc[0,:].shift(1)) return qrts
Trydf.Quarter[df.dataDF > df.dataDF.shift()].iloc[0]Returns'2009q3'
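Spelled out a little more, a sketch assuming the sample dict from the question is stored in a variable called data_dict (a name introduced here):

```python
import pandas as pd

df = pd.DataFrame(data_dict)                      # columns: 'Quarter', 'dataDF'

increasing = df.dataDF > df.dataDF.shift()        # True where the value rose vs. the previous row
first_increase = df.Quarter[increasing].iloc[0]   # '2009q3'
```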
How to get cell location from pandas diff? df1 = pd.read_excel(mxln) # Loads master xlsx for comparisondf2 = pd.read_excel(sfcn) # Loads student xlsx for comparisondifference = df2[df2 != df1] # Scans for differencesWherever there is a difference, I want to store those cell locations in a list. It needs to be in the format 'A1' (not something like [1, 1]) so I can pass it through this:redFill = PatternFill(start_color='FFEE1111', end_color='FFEE1111', fill_type='solid')lsws['A1'].fill = redFilllsfh.save(sfcn) I've looked at solutions like this, but I couldn't get it to work/don't understand it. For example, the following doesn't work:def highlight_cells(): df1 = pd.read_excel(mxln) # Loads master xlsx for comparison df2 = pd.read_excel(sfcn) # Loads student xlsx for comparison difference = df2[df2 != df1] # Scans for differences return ['background-color: yellow']df2.style.apply(highlight_cells)
To get the difference cells from two pandas.DataFrame as Excel coordinates you can do:Code:def diff_cell_indices(dataframe1, dataframe2): from openpyxl.utils import get_column_letter as column_letter x_ofs = dataframe1.columns.nlevels + 1 y_ofs = dataframe1.index.nlevels + 1 return [column_letter(x + x_ofs) + str(y + y_ofs) for y, x in zip(*np.where(dataframe1 != dataframe2))]Test Code: import pandas as pdimport numpy as npdf1 = pd.read_excel('test.xlsx')print(df1)df2 = df1.copy()df2.C['R2'] = 1print(df2)print(diff_cell_indices(df1, df2))Results: B CR2 2 3R3 4 5 B CR2 2 1R3 4 5['C2']
PandaTables and Exif - adding columns as needed So I'm trying to use the incredible Pandastable to display jpeg exif data from a csv file. I'm processing these files with exifread, writing it to a csv and then importing with Pandastable on a tk.button click with the following code:def load_file():fname = askopenfilename(filetypes=(("JPEG/TIFF files", "*.jpg;*.tiff"), ("All files", "*.*")))f = open(fname,'r')fdata.update(exifread.process_file(f, details=False))with open('tempdata.csv', 'a') as f: w = csv.DictWriter(f, fdata.keys(),extrasaction="raise") w.writeheader() w.writerow(fdata)datatable.importCSV('tempdata.csv')My issue is that each file has variable data fields, so img1 might have 50 fields, whereas img2 might have 51 fields. This throws up the following error:CParserError: Error tokenizing data. C error: Expected 50 fields in line 13, saw 51So what I'd like to do is that if img2 has extra data fields, it adds those to the table. I've tried to create a list of all datafields first in my own dictionary, but due to the way that exifread works, this doesn't seem to work well as there are many, many different variations of tags - I'm also hoping to expand this to other file types which would make this hard to maintain. I also don't want to just ignore these columns, as most of the other similar questions have as an answer.Any ideas how I could add these columns on the fly?
Below is a basic example. I'm not sure what your final output is supposed to be. Are you trying to concat the two dataframes into one?import pandas as pdimport numpy as npdf = pd.DataFrame({'A' : [1,1,3,4,5,5,3,1,5,np.NaN], 'B' : [1,np.NaN,3,5,0,0,np.NaN,9,0,5], 'C' : ['AA1233445','AA1233445', 'rmacy','Idaho Rx','Ab123455','TV192837','RX','Ohio Drugs','RX12345','USA Pharma'], 'D' : [123456,123456,1234567,12345678,12345,12345,12345678,123456789,1234567,np.NaN], 'E' : ['Assign','Unassign','Assign','Ugly','Appreciate','Undo','Assign','Unicycle','Assign','Unicorn',]})print(df)df2 = pd.DataFrame({'Z' : [9,8,7,6,5,4,3,2,1,0] })# if the column in df2 is not in df, create a column in df# I'm just setting it to 0 in the example, but you could fill it with whatever for your casefor columns in df2.columns.tolist(): if columns not in df.columns.tolist(): df[str(columns)] = 0EDIT: or you could do df[str(columns)] = df2[str(columns)] or something like that.
Pandas String Replace Error Python I am doing a bit of webscraping and would like to remove parts of a string.PlayerDataHeadings = soup.select(".auflistung th")PlayerDataItems = soup.select(".auflistung td") PlayerData = pd.DataFrame( {'PlayerDataHeadings': PlayerDataHeadings, 'PlayerDataItems': PlayerDataItems })The above code creates a dataframe and works as expected. In the 'PlayerDataHeadings' column there is an unwanted <th> at the start and </th> at the end of each value which I want to remove. The code I am using is: PlayerData['PlayerDataHeadings'].replace( to_replace['<th>', ':</th>'], value='', inplace=True )This returns "NameError: name 'to_replace' is not defined" as an error. Any thoughts on how to fix this or a better alternative would be great
It seems you miss =:to_replace=Or omit keyword and add regex=True:PlayerData['PlayerDataHeadings'].replace(['<th>', ':</th>'], '', inplace=True, regex=True)Sample:PlayerData = pd.DataFrame({'PlayerDataHeadings':['<th>a:</th>','g']})print (PlayerData) PlayerDataHeadings0 <th>a:</th>1 g PlayerDataHeadingsPlayerData['PlayerDataHeadings'].replace(['<th>', ':</th>'], '', inplace=True, regex=True)print (PlayerData) PlayerDataHeadings0 a1 gWith all keywords:PlayerData['PlayerDataHeadings'].replace(to_replace=['<th>', ':</th>'], value='', inplace=True, regex=True)print (PlayerData) PlayerDataHeadings0 a1 g
Crashes python.exe: ucrtbase.DLL When I try to run tensorflow Python crashes with the following message:Problem signature: Problem Event Name: BEX64 Application Name: python.exe Application Version: 3.5.3150.1013 Application Timestamp: 58ae5709 Fault Module Name: ucrtbase.DLL Fault Module Version: 10.0.10240.16384 Fault Module Timestamp: 559f3851 Exception Offset: 0000000000065a4e Exception Code: c0000409 Exception Data: 0000000000000007 OS Version: 6.3.9600.2.0.0.400.8 Locale ID: 1033 Additional Information 1: 83e2 Additional Information 2: 83e2a3a910bd8aa1d2961e6f372a944e Additional Information 3: 7d79 Additional Information 4: 7d7900ee94188f7fcafaf4c671dcabebIt seams to be something related to the vcc runtime.Any help will be welcome.
I was having the same error while trying to run Visual Studio 15 code using Visual Studio 12-specific libraries from OpenCV. So, if you are building a C++ program named python.exe, check the dependent libraries; if you are running an existing python.exe program, check whether the necessary Visual Studio redistributables are installed, as mark jay said in the comment.
Python: Doing Calculations on array elements in a list I have a list of arrays, in which each array represents a cell and the array elements are the coordinates x,y and z, the time point and the cell id. Here a sector of it:cells=[ ..., [ 264.847, 121.056, 30.868, 42. , 375. ], [ 259.24 , 116.875, 29.973, 43. , 375. ], [ 260.757, 118.574, 32.772, 44. , 375. ]]), array([[ 263.967, 154.089, 55.5 , 38. , 376. ], [ 260.744, 152.924, 55.5 , 39. , 376. ], [ 258.456, 151.373, 55.5 , 40. , 376. ], ..., [ 259.086, 159.564, 48.521, 53. , 376. ], [ 258.933, 159.796, 48.425, 54. , 376. ], [ 259.621, 158.719, 51.606, 55. , 376. ]]), array([[ 291.647, 57.582, 28.178, 38. , 377. ], [ 284.625, 59.221, 30.028, 39. , 377. ], [ 282.915, 59.37 , 30.402, 40. , 377. ], ..., [ 271.224, 58.534, 23.166, 42. , 377. ], [ 270.048, 58.738, 21.749, 43. , 377. ], [ 268.38 , 58.138, 20.606, 44. , 377. ]]), array([[ 87.83 , 222.144, 26.258, 39. , 378. ], [ 99.779, 223.631, 24.98 , 40. , 378. ], [ 104.107, 224.177, 23.728, 41. , 378. ], ..., [ 127.778, 222.205, 23.123, 63. , 378. ], [ 126.815, 222.347, 23.934, 64. , 378. ], [ 127.824, 221.048, 25.508, 65. , 378. ]]),...]minimumCellCoors = cellsmaximumCellCoors = cellscentoEdge = radius+fcr_sizeNow i want to change the coordinates x, y and z, so the 0.,1. and 2. element of the arrays in the list to get them in a specific grid. The user gives the spacing for x,y and z and then the operation could look like: x_Coo=round(x_element/x)*x y_Coo=round(y_element/y)*y z_Coo=round(z_element/z)*zSo the real question here is, how could i do a operation on all of the elements in the array ( or in this case the first three elements in the array in the list)?EDITIf i use list comprehension to the list like:[np.round((cellID[:,0]-(centoEdge+1))/x)*x for cellID in minimumCellCoors][np.round((cellID[:,1]-(centoEdge+1))/y)*y for cellID in minimumCellCoors][np.round((cellID[:,2]-(centoEdge+1))/z)*z for cellID in minimumCellCoors][np.round((cellID[:,0]+(centoEdge+1))/x)*x for cellID in maximumCellCoors][np.round((cellID[:,1]+(centoEdge+1))/x)*y for cellID in maximumCellCoors][np.round((cellID[:,2]+(centoEdge+1))/x)*z for cellID in maximumCellCoors]How could i fusion the single lists of arrays to one array again?Best regards!
First off you need to convert your list to a numpy array. It's more proper to create a numpy array instead of a list at first place. Then you can take advantage of numpy's vectorized operation support:Here is an example:In [45]: arr = np.arange(100).reshape(4, 5, 5)In [46]: arrOut[46]: array([[[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14], [15, 16, 17, 18, 19], [20, 21, 22, 23, 24]], [[25, 26, 27, 28, 29], [30, 31, 32, 33, 34], [35, 36, 37, 38, 39], [40, 41, 42, 43, 44], [45, 46, 47, 48, 49]], [[50, 51, 52, 53, 54], [55, 56, 57, 58, 59], [60, 61, 62, 63, 64], [65, 66, 67, 68, 69], [70, 71, 72, 73, 74]], [[75, 76, 77, 78, 79], [80, 81, 82, 83, 84], [85, 86, 87, 88, 89], [90, 91, 92, 93, 94], [95, 96, 97, 98, 99]]])In [51]: arr[:,:,:3] = np.round(arr[:,:,:3]/5)*5 In [52]: arrOut[52]: array([[[ 0, 0, 0, 3, 4], [ 5, 5, 5, 8, 9], [10, 10, 10, 13, 14], [15, 15, 15, 18, 19], [20, 20, 20, 23, 24]], [[25, 25, 25, 28, 29], [30, 30, 30, 33, 34], [35, 35, 35, 38, 39], [40, 40, 40, 43, 44], [45, 45, 45, 48, 49]], [[50, 50, 50, 53, 54], [55, 55, 55, 58, 59], [60, 60, 60, 63, 64], [65, 65, 65, 68, 69], [70, 70, 70, 73, 74]], [[75, 75, 75, 78, 79], [80, 80, 80, 83, 84], [85, 85, 85, 88, 89], [90, 90, 90, 93, 94], [95, 95, 95, 98, 99]]])Note that you can also perform the operations with same length arrays as well as scalars:For instance you could also do the following:In [53]: arr[:,:,:3] = np.round(arr[:,:,:3]/5)*[4, 5, 6]
Summing numpy array elements together I'm trying to make a polynomial calculator into which I can insert the largest coefficient. The problem is that the xizes variable, which would be the image of the function, is coming out as multiple arrays, so the function's graph (using matplotlib) looks like this (this is a third-degree polynomial (x³+x²+x¹+x^0)): http://imgur.com/a/uRr15 Is there a way to sum up the elements of each array? That would solve the problem. Here's the code:expoente = int(input("insira o grau do polinomio (numero inteiro): "))expoente = expoente+1intervalo_1 = float(input("insira o intervalo desejado \n(ponto inicial): "))intervalo_2 = float(input("(ponto final): " ))expoentes = range(0, expoente)expoentes = [item*1 for item in expoentes]quantidade = (intervalo_2 - intervalo_1)*500x = np.linspace(intervalo_1,intervalo_2,num=quantidade,endpoint=False)xizes = [item**expoentes for item in x]plt.plot(x,xizes, label="Grafico do polinomio")plt.xlim([intervalo_1,intervalo_2])plt.show()
Never mind, I already figured it out. In case it helps somebody later, I added the following line to sum it: xizes = np.sum(xizes, axis=1)
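For context, a minimal sketch of why the axis=1 sum gives one polynomial value per x (the interval and degree here are made up):

```python
import numpy as np

x = np.linspace(0.0, 2.0, 5)
expoentes = [0, 1, 2, 3]                              # exponents for x^0 + x^1 + x^2 + x^3

xizes = np.array([item ** expoentes for item in x])   # shape (5, 4): one row of terms per x
y = np.sum(xizes, axis=1)                             # sum the four terms of each row

print(xizes.shape, y.shape)                           # (5, 4) (5,)
print(y)
```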
What is the most efficient method for accessing and manipulating a pandas df I am working on an agent based modelling project and have a 800x800 grid that represents a landscape. Each cell in this grid is assigned certain variables. One of these variables is 'vegetation' (i.e. what functional_types this cell posses). I have a data fame that looks like follows:Each cell is assigned a landscape_type before I access this data frame. I then loop through each cell in the 800x800 grid and assign more variables, so, for example, if cell 1 is landscape_type 4, I need to access the above data frame, generate a random number for each functional_type between the min and max_species_percent, and then assign all the variables (i.e. pollen_loading, succession_time etc etc) for that landscape_type to that cell, however, if the cumsum of the random numbers is <100 I grab function_types from the next landscape_type (so in this example, I would move down to landscape_type 3), this continues until I reach a cumsum closer to 100.I have this process working as desired, however it is incredibly slow - as you can imagine, there are hundreds of thousands of assignments! So far I do this (self.model.veg_data is the above df): def create_vegetation(self, landscape_type): if landscape_type == 4: veg_this_patch = self.model.veg_data[self.model.veg_data['landscape_type'] <= landscape_type].copy() else: veg_this_patch = self.model.veg_data[self.model.veg_data['landscape_type'] >= landscape_type].copy() veg_this_patch['veg_total'] = veg_this_patch.apply(lambda x: randint(x["min_species_percent"], x["max_species_percent"]), axis=1) veg_this_patch['cum_sum_veg'] = veg_this_patch.veg_total.cumsum() veg_this_patch = veg_this_patch[veg_this_patch['cum_sum_veg'] <= 100] self.vegetation = veg_this_patchI am certain there is a more efficient way to do this. The process will be repeated constantly, and as the model progresses, landscape_types will change, i.e. 3 become 4. So its essential this become as fast as possible! Thank you.As per the comment: EDIT.The loop that creates the landscape objects is given below: for agent, x, y in self.grid.coord_iter(): # check that patch is land if self.landscape.elevation[x,y] != -9999.0: elevation_xy = int(self.landscape.elevation[x, y]) # calculate burn probabilities based on soil and temp burn_s_m_p = round(2-(1/(1 + (math.exp(- (self.landscape.soil_moisture[x, y] * 3)))) * 2),4) burn_s_t_p = round(1/(1 + (math.exp(-(self.landscape.soil_temp[x, y] * 1))) * 3), 4) # calculate succession probabilities based on soil and temp succ_s_m_p = round(2 - (1 / (1 + (math.exp(- (self.landscape.soil_moisture[x, y] * 0.5)))) * 2), 4) succ_s_t_p = round(1 / (1 + (math.exp(-(self.landscape.soil_temp[x, y] * 1))) * 0.5), 4) vegetation_typ_xy = self.landscape.vegetation[x, y] time_colonised_xy = self.landscape.time_colonised[x, y] is_patch_colonised_xy = self.landscape.colonised[x, y] # populate landscape patch with values patch = Landscape((x, y), self, elevation_xy, burn_s_m_p, burn_s_t_p, vegetation_typ_xy, False, time_colonised_xy, is_patch_colonised_xy, succ_s_m_p, succ_s_t_p) self.grid.place_agent(patch, (x, y)) self.schedule.add(patch)Then, in the object itself I call the create_vegetation function to add the functional_types from the above df. Everything else in this loop comes from a different dataset so isn't relevant.
You need to extract as many calculations as you can into a vectorized preprocessing step. For example in your 800x800 loop you have:burn_s_m_p = round(2-(1/(1 + (math.exp(- (self.landscape.soil_moisture[x, y] * 3)))) * 2),4)Instead of executing this line 800x800 times, just do it once, during initialization:burn_array = np.round(2-(1/(1 + (np.exp(- (self.landscape.soil_moisture * 3)))) * 2),4)Now in your loop it is simply:burn_s_m_p = burn_array[x, y]Apply this technique to the rest of the similar lines.
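A rough sketch of the idea applied to all four probability lines from the question, assuming self.landscape.soil_moisture and self.landscape.soil_temp are 800x800 NumPy arrays (random stand-ins are used here so the snippet runs on its own):

```python
import numpy as np

# Stand-ins for self.landscape.soil_moisture / soil_temp (assumed 800x800 arrays).
soil_moisture = np.random.rand(800, 800)
soil_temp = np.random.rand(800, 800)

# Computed once, before the grid loop, as whole-array operations.
burn_s_m = np.round(2 - (1 / (1 + np.exp(-(soil_moisture * 3)))) * 2, 4)
burn_s_t = np.round(1 / (1 + np.exp(-(soil_temp * 1)) * 3), 4)
succ_s_m = np.round(2 - (1 / (1 + np.exp(-(soil_moisture * 0.5)))) * 2, 4)
succ_s_t = np.round(1 / (1 + np.exp(-(soil_temp * 1)) * 0.5), 4)

# Inside the per-cell loop each value is then a plain lookup:
x, y = 10, 20
burn_s_m_p, burn_s_t_p = burn_s_m[x, y], burn_s_t[x, y]
succ_s_m_p, succ_s_t_p = succ_s_m[x, y], succ_s_t[x, y]
```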
Run nltk sent_tokenize through Pandas dataframe I have a dataframe that consists of two columns: ID and TEXT. Pretend data is below:ID TEXT265 The farmer plants grain. The fisher catches tuna.456 The sky is blue.434 The sun is bright.921 I own a phone. I own a book.I know all nltk functions do not work on dataframes. How could sent_tokenize be applied to the above dataframe?When I try:df.TEXT.apply(nltk.sent_tokenize) The output is unchanged from the original dataframe. My desired output is:TEXTThe farmer plants grain.The fisher catches tuna.The sky is blue.The sun is bright.I own a phone.I own a book.In addition, I would like to tie back this new (desired) dataframe to the original ID numbers like this (following further text cleansing):ID TEXT265 'farmer', 'plants', 'grain'265 'fisher', 'catches', 'tuna'456 'sky', 'blue'434 'sun', 'bright'921 'I', 'own', 'phone'921 'I', 'own', 'book'This question is related to another of my questions here. Please let me know if I can provide anything to help clarify my question!
edit: as a result of warranted prodding by @alexis here is a better responseSentence TokenizationThis should get you a DataFrame with one row for each ID & sentence:sentences = []for row in df.itertuples(): for sentence in row[2].split('.'): if sentence != '': sentences.append((row[1], sentence))new_df = pandas.DataFrame(sentences, columns=['ID', 'SENTENCE'])Whose output looks like this:split('.') will quickly break strings up into sentences if sentences are in fact separated by periods and periods are not being used for other things (e.g. denoting abbreviations), and will remove periods in the process. This will fail if there are multiple use cases for periods and/or not all sentence endings are denoted by periods. A slower but much more robust approach would be to use, as you had asked, sent_tokenize to split rows up by sentence:sentences = []for row in df.itertuples(): for sentence in sent_tokenize(row[2]): sentences.append((row[1], sentence))new_df = pandas.DataFrame(sentences, columns=['ID', 'SENTENCE'])This produces the following output:If you want to quickly remove periods from these lines you could do something like:new_df['SENTENCE_noperiods'] = new_df.SENTENCE.apply(lambda x: x.strip('.'))Which would yield:You can also take the apply -> map approach (df is your original table):df = df.join(df.TEXT.apply(sent_tokenize).rename('SENTENCES'))Yielding:Continuing:sentences = df.SENTENCES.apply(pandas.Series)sentences.columns = ['sentence {}'.format(n + 1) for n in sentences.columns]This yields:As our indices have not changed, we can join this back into our original table:df = df.join(sentences)Word TokenizationContinuing with df from above, we can extract the tokens in a given sentence as follows:df['sent_1_words'] = df['sentence 1'].apply(word_tokenize)
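A compact variant of the same idea using pandas' explode (available from pandas 0.25 onwards), which keeps each sentence tied to its original ID and then tokenizes words per sentence. It assumes the NLTK 'punkt' tokenizer data has been downloaded:

```python
import pandas as pd
from nltk.tokenize import sent_tokenize, word_tokenize   # needs nltk.download('punkt')

df = pd.DataFrame({'ID': [265, 456, 434, 921],
                   'TEXT': ['The farmer plants grain. The fisher catches tuna.',
                            'The sky is blue.',
                            'The sun is bright.',
                            'I own a phone. I own a book.']})

# One row per (ID, sentence).
sent_df = (df.assign(SENTENCE=df['TEXT'].apply(sent_tokenize))
             .explode('SENTENCE')
             .drop(columns='TEXT')
             .reset_index(drop=True))

# One token list per sentence, still linked to the original ID.
sent_df['TOKENS'] = sent_df['SENTENCE'].apply(word_tokenize)
print(sent_df)
```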
Tensorflow Deep Learning Memory Leak? I am doing GPU-accelerated deep learning with Tensorflow, and am experiencing a memory leak (the RAM variety, not on the GPU).I have narrowed it down, almost beyond all doubt, to the training lineself.sess.run(self.train_step, feed_dict={self.x: trainingdata, self.y_true: traininglabels, self.keepratio: self.training_keep_rate})If I comment that line, and only that line, out (but still do all my pre-processing and validation/testing and such for a few thousand training batches), the memory leak does not happen.The leak is on the order of a few GB per hour (I am running Ubuntu, and have 16GB RAM + 16GB swap; the system becomes very laggy and unresponsive after 1-3 hours of running, when about 1/3-1/2 the RAM is used, which is a bit weird to me since I still have lots of RAM and the CPU is mostly free when this happens...)Here is some of the initializer code (only run once, at the beginning) if it is relevant: with tf.name_scope('after_final_layer') as scope: self.layer1 = weights["wc1"] self.y_conv = network(self.x, weights, biases, self.keepratio)['out'] variable_summaries(self.y_conv) # Note: Don't add a softmax reducer in the network if you are going to use this # cross-entropy function self.cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(self.y_conv, self.y_true, name = "softmax/cross_ent"), name = "reduce_mean") self.train_step = tf.train.AdamOptimizer(learning_rate, name = "Adam_Optimizer").minimize(self.cross_entropy) self.prediction = tf.argmax(self.y_conv, 1) self.correct_prediction = tf.equal(self.prediction, tf.argmax(self.y_true, 1)) self.accuracy = tf.reduce_mean(tf.cast(self.correct_prediction, tf.float32)) if tensorboard: # Merge all the summaries and write them out to the directory below self.merged = tf.summary.merge_all() self.my_writer = tf.summary.FileWriter('/home/james/PycharmProjects/AI_Final/my_tensorboard', graph=self.sess.graph) # self.sess.run(tf.initialize_all_variables()) #old outdated way to do below tf.global_variables_initializer().run(session=self.sess)I'm also happy to post all of the network/initialization code, but I think that that is probably irrelevant to this leak.Am I doing something wrong or have I found a Tensorflow bug? Thanks in advance!Update: I will likely submit a bug report soon, but I am first trying to verify that I am not bothering them with my own mistakes. I have addedself.sess.graph.finalize()to the end of my initialization code. As I understand it, it should throw an exception if I am accidentally adding to the graph. No exceptions are thrown. I am using tf version 0.12.0-rc0, np version 1.12.0b1, and Python version 2.7.6. Could those versions be outdated/the problem?
This issue is solved in 1.1. Ignore this page which (at the time of writing) says that the latest stable version is r0.12; 1.1 is the latest stable version. See https://github.com/tensorflow/tensorflow/issues/9590 and https://github.com/tensorflow/tensorflow/issues/9872
Finding local minima on a 2D map using tensorflow I am trying to detect location and values of local minima on a 2D image map using tensorflow. Since this is not trivial I was wondering what a robust and efficient way in tf might be?So far I thought of simple horizontal and vertical convolutions using [-1 1] kernels.
You can find your local maxima with pooling like this:import tensorflow as tfdef get_local_maxima(in_tensor): max_pooled_in_tensor = tf.nn.pool(in_tensor, window_shape=(3, 3), pooling_type='MAX', padding='SAME') maxima = tf.where(tf.equal(in_tensor, max_pooled_in_tensor), in_tensor, tf.zeros_like(in_tensor)) return maximaFor local minima it would be easiest to negate the input and then find the maxima, since for pooling_type only AVG and MAX are supported so far.Why does this work? The only time the value at some index of in_tensor is the same as the the value at the same index in max_pooled_in_tensor is if that value was the highest in the 3x3 neighborhood centered on that index in in_tensor.
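A minimal sketch of the suggested negation trick for minima, under the same assumption about the input shape (something tf.nn.pool accepts, such as [batch, height, width, channels]):

```python
import tensorflow as tf

def get_local_minima(in_tensor):
    # Local minima of x are local maxima of -x: negate, max-pool, compare,
    # and keep the original values at the matching positions.
    negated = -in_tensor
    max_pooled = tf.nn.pool(negated, window_shape=(3, 3),
                            pooling_type='MAX', padding='SAME')
    minima = tf.where(tf.equal(negated, max_pooled),
                      in_tensor, tf.zeros_like(in_tensor))
    return minima
```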
Boolean indexing assignment of a numpy array to a numpy array I am seeing some behavior with Boolean indexing that I do not understand, and I was hoping to find some clarification here.First off, this is the behavior I am seeking...>>>>>> a = np.zeros(10, dtype=np.ndarray)>>> aarray([0, 0, 0, 0, 0, 0, 0, 0, 0, 0], dtype=object)>>> b = np.arange(10).reshape(2,5)>>> barray([[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]])>>> a[5] = b>>> aarray([0, 0, 0, 0, 0, array([[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]), 0, 0, 0, 0], dtype=object)>>>The reason for choosing an ndarray of ndarrays is because I will be appending the arrays stored in the super array, and they will all be of different lengths. I chose the type ndarray instead of list for the super array so I can have access to all of numpys clever indexing features.anyway if i make a Boolean indexer and use that to assign, say, b+5 at position 1, it does something I didn't expect>>> indexer = np.zeros(10,dtype='bool')>>> indexerarray([False, False, False, False, False, False, False, False, False, False], dtype=bool)>>> indexer[1] = True>>> indexerarray([False, True, False, False, False, False, False, False, False, False], dtype=bool)>>> a[indexer] = b+5>>> aarray([0, 5, 0, 0, 0, array([[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]), 0, 0, 0, 0], dtype=object)>>>Can anyone help me understand what's going on? I would like the result to be >>> a[1] = b+5>>> aarray([0, array([[ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14]]), 0, 0, 0, array([[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]), 0, 0, 0, 0], dtype=object)>>>The final goal is to have a lot of "b" arrays stored in B, and to assign them to a like this>>> a[indexer] = B[indexer]EDIT:found possible work around based on the discussion below. I can wrap my data in a class if i need to >>>>>> class myclass:... def __init__(self):... self.data = np.random.rand(1)...>>>>>> b = myclass()>>> b<__main__.myclass object at 0x000002871A4AD198> >>> b.dataarray([ 0.40185378])>>>>>> a[indexer] = b>>> aarray([None, <__main__.myclass object at 0x000002871A4AD198>, None, None, None, None, None, None, None, None], dtype=object)>>> a[1].dataarray([ 0.40185378])EDIT:this actually fails. I cannot allocate anything to the data field when indexed
In [203]: a = np.empty(5, object)In [204]: aOut[204]: array([None, None, None, None, None], dtype=object)In [205]: a[3]=np.arange(3)In [206]: aOut[206]: array([None, None, None, array([0, 1, 2]), None], dtype=object)So simple indexing works with this object array.Boolean indexing works for reading:In [207]: a[np.array([0,0,0,1,0], dtype=bool)]Out[207]: array([array([0, 1, 2])], dtype=object)In [208]: a[np.array([0,0,1,0,0], dtype=bool)]But has problems when writing:Out[208]: array([None], dtype=object)In [209]: a[np.array([0,0,1,0,0], dtype=bool)]=np.arange(2)---------------------------------------------------------------------------ValueError Traceback (most recent call last)<ipython-input-209-c1ef5580972c> in <module>()----> 1 a[np.array([0,0,1,0,0], dtype=bool)]=np.arange(2)ValueError: NumPy boolean array indexing assignment cannot assign 2 input values to the 1 output values where the mask is truenp.where(<boolean>) and [2] also give problems:In [221]: a[[2]]=np.arange(3)/usr/local/bin/ipython3:1: DeprecationWarning: assignment will raise an error in the future, most likely because your index result shape does not match the value array shape. You can use `arr.flat[index] = values` to keep the old behaviour.So whatever reason, indexed assignment to an object dtype array does not work as well as with regular ones.Even the recommended flat doesn't workIn [226]: a.flat[[2]]=np.arange(3)In [227]: aOut[227]: array([None, None, 0, array([0, 1, 2]), None], dtype=object)I can assign a non-list/array object In [228]: a[[2]]=NoneIn [229]: aOut[229]: array([None, None, None, array([0, 1, 2]), None], dtype=object)In [230]: a[[2]]={3:4}In [231]: aOut[231]: array([None, None, {3: 4}, array([0, 1, 2]), None], dtype=object)In [232]: idx=np.array([0,0,1,0,0],bool)In [233]: a[idx]=set([1,2,3])In [234]: aOut[234]: array([None, None, {1, 2, 3}, array([0, 1, 2]), None], dtype=object)object dtype arrays are at the edge of numpy array functionality. Look at what we get with getitem. With a scalar index we get what object is stored in that slot (in my latest case, a set). But with [[2]] or boolean, we get another object array.In [235]: a[2]Out[235]: {1, 2, 3}In [236]: a[[2]]Out[236]: array([{1, 2, 3}], dtype=object)In [237]: a[idx]Out[237]: array([{1, 2, 3}], dtype=object)In [238]: a[idx].shapeOut[238]: (1,)I suspect that when a[idx] is on the LHS, it tries to convert the RHS to an object array first:Out[241]: array([0, 1, 2], dtype=object)In [242]: _.shapeOut[242]: (3,)In [243]: np.array(set([1,2,3]), object)Out[243]: array({1, 2, 3}, dtype=object)In [244]: _.shapeOut[244]: ()In the case of a set the resulting array has a single element and can be put in the (1,) slot. But when the RHS is a list or array the result is a n element array, e.g. 
(3,), which does not fit in the (1,) slot.Solution (sort of)If you want to assign a list/array to a slot in a object array with some form of advanced indexing (boolean or list), first put that item in an object array of the correct size:In [255]: b=np.empty(1,object)In [256]: b[0]=np.arange(3)In [257]: bOut[257]: array([array([0, 1, 2])], dtype=object)In [258]: b.shapeOut[258]: (1,)In [259]: a[idx]=bIn [260]: aOut[260]: array([None, None, array([0, 1, 2]), array([0, 1, 2]), None], dtype=object)Or working with your slightly large arrays:In [264]: a = np.zeros(10, dtype=object)In [265]: b = np.arange(10).reshape(2,5)In [266]: a[5] = bIn [267]: c = np.zeros(1, dtype=object) # intermediate object wrapperIn [268]: c[0] = b+5In [269]: idx = np.zeros(10,bool)In [270]: idx[1]=TrueIn [271]: a[idx] = cIn [272]: aOut[272]: array([0, array([[ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14]]), 0, 0, 0, array([[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]), 0, 0, 0, 0], dtype=object)If idx has n True items, the c has to have shape that will broadcast to (n,)
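A small convenience wrapper (my own helper, not part of NumPy) that packages the intermediate one-element object array for you:

```python
import numpy as np

def as_object_item(item):
    # Wrap an arbitrary object (e.g. a 2-D array) in a 1-element object
    # array so that boolean/fancy assignment stores it in a single slot.
    wrapper = np.empty(1, dtype=object)
    wrapper[0] = item
    return wrapper

a = np.zeros(10, dtype=object)
b = np.arange(10).reshape(2, 5)
idx = np.zeros(10, dtype=bool)
idx[1] = True

a[idx] = as_object_item(b + 5)
print(a[1])
```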
Python How to convert a float as hex to decimal I've read in some data from a csv file with pandas. The data is incomplete and therefore contains many nan values.I want to add a column to the data which converts the hex values to decimal values. Unfortunately, the column with the hex values are all read as floats, not strings because they just happen to have those values.Example data val0 20.01 nan2 20.0The simple way to convert a hex to decimal in python seems to be:int('20.0',16), which should yield 32.However, since this is pandas I cannot convert the values to int, or at least I keep getting an error stating that.My current code is:df['valdec'] = np.where(np.isnan(df['val']), df['val'], int(df['val'].astype(int).astype(str), 16))This fails with the error: ValueError: Cannot convert NA to integerwithout the astype(int) the value is "20.0" which cannot be converted.Is there another way to interpret a float value as a hex value and convert to decimal when working with pandas dataframe?
You can mask the rows of interest and double cast and call apply:In [126]:df['valdec'] = df['val'].dropna().astype(int).astype(str).apply(lambda x: int(x, 16))dfOut[126]: val valdec0 20.0 32.01 NaN NaN2 20.0 32.0So firstly we call dropna to remove the NaN, this allows us to cast to int using .astype(int) then convert to str by calling .astype(str).We then call apply on this to convert to hex and assign the result of all this to the new columnNote that the dtype of the new column will be float as the presence of NaN enforces this, you won't be able to have a mixture of ints and floatsAs pointed out by @jasonharper, casting to int here will lose any fractional part so a higher precision method would be to use float.fromhex:In [128]:df['valdec'] = df['val'].astype(str).dropna().apply(lambda x: float.fromhex(x))dfOut[128]: val valdec0 20.0 32.01 NaN NaN2 20.0 32.0
"Numpy" TypeError: data type "string" not understood I am a newbie trying to learn data visuallizaion using python. Actually, I was just trying to follow the example given by a cookbook,like: import numpyimport osos.chdir("Home/Desktop/Temporal_folder")data = numpy.loadtxt ('ch02-data.csv', dtype= 'string', delimiter=',')print (data)but somehow it did not work out:Traceback (most recent call last): File "Home/PycharmProjects/Learning/Datavisuallization.py", line 5, in <module> data = numpy.loadtxt ('ch02-data.csv', dtype= 'string', delimiter=',') File "Home/anaconda/lib/python3.6/site-packages/numpy/lib/npyio.py", line 930, in loadtxt dtype = np.dtype(dtype)TypeError: data type "string" not understoodthis is the data I used: "ch02-data.csv"there were some similar issues posted, but I am not sure I understood what the answer tried to explain. Also, I checked the manual of numpy.loadtext(), still the answer does not seem to be obvious to me... any suggestion? https://docs.scipy.org/doc/numpy/reference/generated/numpy.loadtxt.html
Try dtype='str' instead of dtype='string'.
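A minimal sketch of the corrected call, assuming ch02-data.csv is plain comma-separated text:

```python
import numpy as np

# On Python 3, 'string' is not a valid dtype name; use 'str' (or np.str_) instead.
data = np.loadtxt('ch02-data.csv', dtype=str, delimiter=',')
print(data)
```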
Addition of 2 dataframes column to column by unique column in pandas I have 2 dataframes df1a b c 1 2 32 4 53 6 7 and df2a b c1 3 43 1 8I want output to bedf3 a b c1 5 72 4 53 7 15I tried df1.add(df2,axis='c') but not getting exact output.referring this link http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.add.html
You need set_index by column a in both df with add and parameter fill_value=0.Last if necessary convert values to int and reset_index:df = df1.set_index('a').add(df2.set_index('a'),fill_value=0).astype(int).reset_index()print (df) a b c0 1 5 71 2 4 52 3 7 15For removing not common rows omit fill_value and add dropna if no NaN in both DataFramesdf = df1.set_index('a').add(df2.set_index('a')).dropna().astype(int).reset_index()print (df) a b c0 1 5 71 3 7 15
pandas convert datatime column to timestamp I am beginner in pandas I have dataframe first column is datatime like "19-Sep-2016 10:30:00" and many records like it.I am trying to convert this column to timestamp and write it to another dataframe , i am trying to do it with one step.I am trying to write in python 3.import pandas as pdimport timefrom time import strptimexl = pd.ExcelFile(file)df = xl.parse(sheetname=0)our_df['DateTime'] = int(time.mktime(time.strptime(df['Date Time'], "%d-%b-%Y %H:%M:%S")))but I have error:TypeError: strptime() argument 0 must be str, not <class 'pandas.core.series.Series'>I am trying to google it but I take long time without benefits.Now, how can I do it the right way?
You can use to_datetime:df = pd.DataFrame({'DateTime':['19-Sep-2016 10:30:00','19-Sep-2016 10:30:00']})print (df) DateTime0 19-Sep-2016 10:30:001 19-Sep-2016 10:30:00df['DateTime'] = pd.to_datetime(df['DateTime'])print (df) DateTime0 2016-09-19 10:30:001 2016-09-19 10:30:00If want to specify format:df['DateTime'] = pd.to_datetime(df['DateTime'], format='%d-%b-%Y %H:%M:%S')print (df) DateTime0 2016-09-19 10:30:001 2016-09-19 10:30:00And for times add dt.time:df['DateTime'] = pd.to_datetime(df['DateTime']).dt.timeprint (df) DateTime0 10:30:001 10:30:00Then is possible use join - data are aligned by index values, if length is different get NaTs as last values:df1 = pd.DataFrame({'Col':[5, 4, 0, 7]})print (df1) Col0 51 42 0df1 = df1.join(pd.to_datetime(df['DateTime']))print (df1) Col DateTime0 5 2016-09-19 10:30:001 4 2016-09-19 10:30:002 0 NaT3 7 NaTdf1['DateTime'] = pd.to_datetime(df['DateTime'])print (df1) Col DateTime0 5 2016-09-19 10:30:001 4 2016-09-19 10:30:002 0 NaT3 7 NaTdf1 = df1.join(pd.to_datetime(df['DateTime']).dt.time)print (df1) Col DateTime0 5 10:30:001 4 10:30:002 0 NaN3 7 NaNdf1['DateTime'] = pd.to_datetime(df['DateTime']).dt.timeprint (df1) Col DateTime0 5 10:30:001 4 10:30:002 0 NaN3 7 NaN
Problems with shuffling arrays in numpy? I am having this unusual problem with shuffling arrays in numpyarr = np.arange(9).reshape((3, 3))print "Original constant array"print arrnew_arr=arrfor i in range(3): np.random.shuffle(new_arr) print "Obtained constant array" print arr print "randomized array" print new_arrarr is my original array which I kept as such and created new array new_arr for further computation. But the code is showing this outputOriginal constant array[[0 1 2] [3 4 5] [6 7 8]]Obtained constant array[[6 7 8] [0 1 2] [3 4 5]]randomized array[[6 7 8] [0 1 2] [3 4 5]]I only wants to randomize new_arr and not arr. why this is happening and how to prevent arr from shuffling?
Usenew_arr = np.copy(arr)instead ofnew_arr = arrWhen you do new_arr=arr you basically create a reference new_arr for your array arrfor example (Taken from numpy copy docs):Create an array x, with a reference y and a copy z:>>> x = np.array([1, 2, 3])>>> y = x>>> z = np.copy(x)Note that, when we modify x, y changes, but not z:>>> x[0] = 10>>> x[0] == y[0]True>>> x[0] == z[0]False
Comparison of two NumPy arrays without order I have to compare two numpy arrays regardless of their order. I had hoped that numpy.array_equiv(a, b) will do the trick but unfortunately, it doesn't. Example:a = np.array([[3, 1], [1,2]])b = np.array([[1, 2], [3, 1]])print (np.array_equiv(a, b))`# return falseAny suggestions? Thanks in advance
You could use np.array_equal(np.sort(a.flat), np.sort(b.flat))In [56]: a = np.array([[3, 1], [1, 2]])In [57]: b = np.array([[1, 2], [3, 1]])In [58]: np.array_equal(np.sort(a.flat), np.sort(b.flat))Out[58]: TrueIn [59]: b = np.array([[1, 2], [3, 4]])In [60]: np.array_equal(np.sort(a.flat), np.sort(b.flat))Out[60]: FalseIn [61]: b = np.array([[1, 2], [3, 3]])In [62]: np.array_equal(np.sort(a.flat), np.sort(b.flat))Out[62]: False
How to implement the 'group' of alexnet in tensorlayer group are used to group parameters of the convolution kernel (which connects the previous layer and the current layer) into k parts forcibly in alexnet, is there a simple implement for group in tensorlayer?
This might be a useful link. You need to split the conv layers before convolving and then concatenate the result. If it helps, the explanation of how the weights of BVLC's alexnet in the model provided with Caffe are organized is given here.You will need to convert the caffemodel into a tensorflow-readable format (in my experience). Weights converted to .npy file are provided here (bvlc_alexnet.npy) but it should be straightforward to convert it to your format of choice (for example, .h5).
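A rough sketch of the split-convolve-concatenate idea in plain TensorFlow 1.x (this is not the TensorLayer API; the function name, layer names, and shapes below are illustrative):

```python
import tensorflow as tf

def grouped_conv2d(x, filters, kernel_size, groups, name='grouped_conv'):
    # Split the input channels into `groups` parts, convolve each part with
    # its own kernel, then concatenate the results along the channel axis.
    with tf.variable_scope(name):
        parts = tf.split(x, num_or_size_splits=groups, axis=3)
        outputs = [tf.layers.conv2d(part, filters // groups, kernel_size,
                                    padding='same', name='group_%d' % i)
                   for i, part in enumerate(parts)]
        return tf.concat(outputs, axis=3)

# Example: 2 groups, as used in some of AlexNet's convolutional layers.
inputs = tf.placeholder(tf.float32, [None, 27, 27, 96])
out = grouped_conv2d(inputs, filters=256, kernel_size=5, groups=2)
```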
expanding a dataframe based on start and end columns (speed) I have a pandas.DataFrame containing start and end columns, plus a couple of additional columns. I would like to expand this dataframe into a time series that starts at start values and end at end values, but copying my other columns. So far I came up with the following:import pandas as pdimport datetime as dtdf = pd.DataFrame()df['start'] = [dt.datetime(2017, 4, 3), dt.datetime(2017, 4, 5), dt.datetime(2017, 4, 10)]df['end'] = [dt.datetime(2017, 4, 10), dt.datetime(2017, 4, 12), dt.datetime(2017, 4, 17)]df['country'] = ['US', 'EU', 'UK']df['letter'] = ['a', 'b', 'c']data_series = list()for row in df.itertuples(): time_range = pd.bdate_range(row.start, row.end) s = len(time_range) data_series += (zip(time_range, [row.start]*s, [row.end]*s, [row.country]*s, [row.letter]*s))columns_names = ['date', 'start', 'end', 'country', 'letter']df = pd.DataFrame(data_series, columns=columns_names)Starting Dataframe: start end country letter0 2017-04-03 2017-04-10 US a1 2017-04-05 2017-04-12 EU b2 2017-04-10 2017-04-17 UK cDesired output: date start end country letter0 2017-04-03 2017-04-03 2017-04-10 US a1 2017-04-04 2017-04-03 2017-04-10 US a2 2017-04-05 2017-04-03 2017-04-10 US a3 2017-04-06 2017-04-03 2017-04-10 US a4 2017-04-07 2017-04-03 2017-04-10 US a5 2017-04-10 2017-04-03 2017-04-10 US a6 2017-04-05 2017-04-05 2017-04-12 EU b7 2017-04-06 2017-04-05 2017-04-12 EU b8 2017-04-07 2017-04-05 2017-04-12 EU b9 2017-04-10 2017-04-05 2017-04-12 EU b10 2017-04-11 2017-04-05 2017-04-12 EU b11 2017-04-12 2017-04-05 2017-04-12 EU b12 2017-04-10 2017-04-10 2017-04-17 UK c13 2017-04-11 2017-04-10 2017-04-17 UK c14 2017-04-12 2017-04-10 2017-04-17 UK c15 2017-04-13 2017-04-10 2017-04-17 UK c16 2017-04-14 2017-04-10 2017-04-17 UK c17 2017-04-17 2017-04-10 2017-04-17 UK cProblem with my solution is that when applying it to a much bigger dataframe (mostly in terms of rows), it does not achieve a result fast enough for me. Does anybody have any ideas of how I could improve? I am also considering solutions in numpy.
Inspired by @StephenRauch's solution I'd like to post mine (which is pretty similar):dates = [pd.bdate_range(r[0],r[1]).to_series() for r in df[['start','end']].values]lens = [len(x) for x in dates]r = pd.DataFrame( {col:np.repeat(df[col].values, lens) for col in df.columns} ).assign(date=np.concatenate(dates))Result:In [259]: rOut[259]: country end letter start date0 US 2017-04-10 a 2017-04-03 2017-04-031 US 2017-04-10 a 2017-04-03 2017-04-042 US 2017-04-10 a 2017-04-03 2017-04-053 US 2017-04-10 a 2017-04-03 2017-04-064 US 2017-04-10 a 2017-04-03 2017-04-075 US 2017-04-10 a 2017-04-03 2017-04-106 EU 2017-04-12 b 2017-04-05 2017-04-057 EU 2017-04-12 b 2017-04-05 2017-04-068 EU 2017-04-12 b 2017-04-05 2017-04-079 EU 2017-04-12 b 2017-04-05 2017-04-1010 EU 2017-04-12 b 2017-04-05 2017-04-1111 EU 2017-04-12 b 2017-04-05 2017-04-1212 UK 2017-04-17 c 2017-04-10 2017-04-1013 UK 2017-04-17 c 2017-04-10 2017-04-1114 UK 2017-04-17 c 2017-04-10 2017-04-1215 UK 2017-04-17 c 2017-04-10 2017-04-1316 UK 2017-04-17 c 2017-04-10 2017-04-1417 UK 2017-04-17 c 2017-04-10 2017-04-17
pandas: cumsum ignoring first two rows I have a dataframe which has the following column:|---------------------| | A ||---------------------|| 0 ||---------------------|| 2.63 ||---------------------|| 7.10 ||---------------------|| 5.70 ||---------------------|| 6.96 ||---------------------|| 7.58 ||---------------------|| 3.3 ||---------------------|| 1.93 ||---------------------|I need to get the cumulative sum, but the point is kind of particular. The first element should be 0, and the following are the cumulative sum starting from the previous column, so in this case I need to produce:|---------------------| | B ||---------------------|| 0 ||---------------------|| 0 ||---------------------|| 2.63 ||---------------------|| 9.73 ||---------------------|| 15.43 ||---------------------|| 22.39 ||---------------------|| 29.97 ||---------------------|| 33.27 ||---------------------|I know that it is easily achieve when not having the condition I am asking for by:df['B'] = df.A.cumsum()However, I don't have any idea how to solve this issue, and I was thinking to implement a for loop, but I hope there is a simply way using pandas.
You can add shift and fillna:df = df.A.cumsum().shift().fillna(0)print (df)0 0.001 0.002 2.633 9.734 15.435 22.396 29.977 33.27Name: A, dtype: float64
Deleting a csv file which is created using numpy.savetxt in pyspark I am new to pyspark and python.After saving a file in local system using numpy.savetxt("test.csv",file,delimiter=',')I am using os to delete that file. os.remove("test.csv"). I am getting an error java.io.FileNotFoundException File file:/someDir/test.csv does not exist. The file numpy.savetxt() creates file with only read permission. How can save the same with read and write permission. Using spark version 2.1
Looks like your spark workers are not able to access the file. You are probably running the master and workers on different servers. When you are trying to work on files, while having setup workers across different machines make sure these workers can access the file.You could keep the same copy of files among all the workers in the exact same location. It is always advisable to use DFS like Hadoop like "hdfs://path/file". When you do the workers can access these files. More details on:Spark: how to use SparkContext.textFile for local file system
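For illustration, the difference is in the URI scheme passed to Spark; the paths and app name below are hypothetical. A file:// path must exist on every node that runs a task, while an hdfs:// path on a shared filesystem is reachable by all workers:

```python
from pyspark import SparkContext

sc = SparkContext(appName="file-access-example")

# A local path must be present on every worker that runs a task.
rdd_local = sc.textFile("file:///someDir/test.csv")

# A path on a distributed filesystem such as HDFS is visible to all workers.
rdd_hdfs = sc.textFile("hdfs:///user/me/test.csv")

print(rdd_hdfs.count())
```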
Division with numpy matrices that might result in nan How can I divide two numpy matrices A and B in python when sometimes the two matrices will have 0 on the same cell?Basically A[i,j]>=B[i,j] for all i, j. I need to calculate C=A/B. But sometimes A[i,j]==B[i,j]==0. And when this happens I need A[i,j]/B[i,j] to be defined as 0.Is there a simple pythonic way other than going through all the indexes?
You can use the where argument for ufuncs like np.true_divide:np.true_divide(A, B, where=(A!=0) | (B!=0))In case you have no negative values (as stated in the comments) and A >= B for each element (as stated in the question) you can simplify this to:np.true_divide(A, B, where=(A!=0))because A[i, j] == 0 implies B[i, j] == 0.For example:import numpy as npA = np.random.randint(0, 3, (4, 4))B = np.random.randint(0, 3, (4, 4))print(A)print(B)print(np.true_divide(A, B, where=(A!=0) | (B!=0)))[[1 0 2 1] [1 0 0 0] [2 1 0 0] [2 2 0 2]][[1 0 1 1] [2 2 1 2] [2 1 0 1] [2 0 1 2]][[ 1. 0. 2. 1. ] [ 0.5 0. 0. 0. ] [ 1. 1. 0. 0. ] [ 1. inf 0. 1. ]]As alternative: Just replace nans after the division:C = A / B # may print warnings, suppress them with np.seterrstate if you wantC[np.isnan(C)] = 0
Struggling when appending dataframes I am looking forward to append various dataframes through a loop that extracts from a web a series of data. The function ratios_funda by its own works correctly, however I don't find a way to loop it according to the different tickers and append them one after the other in the empty dataframe. Here is the code.import pandas as pdcartera = ['FB.O', 'SAN.MC','TRE.MC', 'BBVA.MC']def ratios_funda(x): rat1=x[2].loc[[1,7,8,10],:] rat2=x[3].loc[[1,5],:] rat3=x[5].loc[[1,2,4,5],:] rat5=x[7].loc[[5,6],:] rat6=x[8].loc[[1,7],:] rats=[rat1,rat2,rat3,rat5,rat6] df=pd.concat([df.set_index(df.columns[0]) for df in rats]) df.index.names=['Fundam ratios'] df.rename(columns={1:'Company',2:'Industry',3:'Sector'}, inplace=True) df.index = df.index.str.strip() return dfdef resultados(): dataframe=pd.DataFrame() for titulos in cartera: ruta=pd.read_html('http://www.reuters.com/finance/ stocks/financialHighlights?symbol='+str(titulos),flavor='html5lib') if dataframe.empty: dataframe= ratios_funda(ruta) else: dataframe=pd.concat([dataframe, ratios_funda(ruta)],axis=1) return dataframeprint(resultados())It looks like it does not loop.
the problem is with having return in the for loop.def resultados(): dataframe=pd.DataFrame() for titulos in cartera: ruta=pd.read_html('your url here') if dataframe.empty: dataframe= ratios_funda(ruta) else: dataframe=pd.concat([dataframe, ratios_funda(ruta)],axis=0) return dataframe
Calculating conditional probabilities from joint pmfs in numpy, too slow. Ideas? (python-numpy) I have a conjunctive probability mass function array, with shape, for example (1,2,3,4,5,6) and I want to calculate the probability table, conditional to a value for some of the dimensions (export the cpts), for decision-making purposes.The code I came up with at the moment is the following (the input is the dictionary "vdict" of the form {'variable_1': value_1, 'variable_2': value_2 ... } )for i in vdict: dim = self.invardict.index(i) # The index of the dimension that our Variable resides in val = self.valdict[i][vdict[i]] # The value we want it to be d = d.swapaxes(0, dim) **d = array([d[val]])** d = d.swapaxes(0, dim)...So, what I currently do is:I translate the variables to the corresponding dimension in the cpt.I swap the zero-th axis with the axis I found before.I replace whole 0-axis with just the desired value.I put the dimension back to its original axis.Now, the problem is, in order to do step 2, I have (a.) to calculate a subarrayand (b.) to put it in a list and translate it again to array so I'll have my new array.Thing is, stuff in bold means that I create new objects, instead of using just the references to the old ones and this, if d is very large (which happens to me) and methods that use d are called many times (which, again, happens to me) the whole result is very slow.So, has anyone come up with an idea that will subtitude this little piece of code and will run faster? Maybe something that will allow me to calculate the conditionals in place.Note: I have to maintain original axis order (or at least be sure on how to update the variable to dimensions dictionaries when an axis is removed). I'd like not to resort in custom dtypes.
Ok, found the answer myself after playing a little with numpy's in-place array manipulations.Changed the last 3 lines in the loop to: d = conditionalize(d, dim, val)where conditionalize is defined as: def conditionalize(arr, dim, val): arr = arr.swapaxes(dim, 0) shape = arr.shape[1:] # shape of the sub-array when we omit the desired dimension. count = array(shape).prod() # count of elements omitted the desired dimension. arr = arr.reshape(array(arr.shape).prod()) # flatten the array in-place. arr = arr[val*count:(val+1)*count] # take the needed elements arr = arr.reshape((1,)+shape) # the desired sub-array shape. arr = arr. swapaxes(0, dim) # fix dimensions return arrThat made my program's execution time reduce from 15 minutes to 6 seconds. Huge gain.I hope this helps someone who comes across the same problem.
How to use numpy with cygwin I have a bash shell script which calls some python scripts. I am running windows with cygwin which has python in /usr/bin/python. I also have python and numpy installed as a windows package. When I execute the script from cygwin , I get an ImportError - no module named numpy. I have tried running from windows shell but the bash script does not run. Any ideas? My script is belowfor target in $(ls large_t) ; do ./emulate.py $target ; #done | sort | gawk '{print $2,$3,$4,$5,$6 > $1}{print $1}' | sort | uniq > frames#frames contains a list of filenames, each files name is the timestamp rm -f videotouch video# for each framefor f in $(cat frames)do./make_target_ant.py $f cat $f.bscan >> video doneThanks
Windows python and Cygwin Python are independent; if you're using Cygwin's Python, you need to have numpy installed in cygwin.If you'd prefer to use the Windows python, you should be able to call it from a bash script by either:Calling the windows executable directly: c:/Python/python.exe ./emulate.pyChanging the hash-bang to point at the Windows install: #!c:/Python/python.exe in the script, rather than #!/usr/bin/env python or #!/usr/bin/python.Putting Windows' python in your path before Cygwin python, for the duration of the script:PATH=c:/Python/:$PATH ./emulate.py where emulate.py uses the /bin/env method of running python.
Pandas: Weird transformation required Name Start End Units PlaceSam 04-03-2022 06-03-2022 2 CAUber 24-04-2022 27-05-2022 1 SVLTwitter 26-04-2022 28-04-2022 2 FRMy dataframe is like above. I wish to duplicate each row by n times where n equal to the difference between Start and End entry. But while duplicating the Start has to increment by one each time. So, My output need to be something like below:Name Start Units PlaceSam 04-03-2022 2 CASam 05-03-2022 2 CAUber 24-04-2022 1 SVLUber 25-04-2022 1 SVLUber 26-04-2022 1 SVLTwitter 26-04-2022 2 FRTwitter 27-04-2022 2 FRI am starring at this for quite sometime. But, clueless.
You can apply pd.date_range to each row then explode your dataframe:# Not mandatory if it's already the casedf['Start'] = pd.to_datetime(df['Start'], dayfirst=True)df['End'] = pd.to_datetime(df['End'], dayfirst=True)date_range = lambda x: pd.date_range(x['Start'], x['End']-pd.DateOffset(days=1))out = (df.assign(Start=df.apply(date_range, axis=1)) .explode('Start', ignore_index=True).drop(columns='End'))Output:>>> df Name Start Units Place0 Sam 2022-03-04 2 CA1 Sam 2022-03-05 2 CA2 Uber 2022-04-24 1 SVL3 Uber 2022-04-25 1 SVL4 Uber 2022-04-26 1 SVL5 Uber 2022-04-27 1 SVL6 Uber 2022-04-28 1 SVL7 Uber 2022-04-29 1 SVL8 Uber 2022-04-30 1 SVL9 Uber 2022-05-01 1 SVL10 Uber 2022-05-02 1 SVL11 Uber 2022-05-03 1 SVL12 Uber 2022-05-04 1 SVL13 Uber 2022-05-05 1 SVL14 Uber 2022-05-06 1 SVL15 Uber 2022-05-07 1 SVL16 Uber 2022-05-08 1 SVL17 Uber 2022-05-09 1 SVL18 Uber 2022-05-10 1 SVL19 Uber 2022-05-11 1 SVL20 Uber 2022-05-12 1 SVL21 Uber 2022-05-13 1 SVL22 Uber 2022-05-14 1 SVL23 Uber 2022-05-15 1 SVL24 Uber 2022-05-16 1 SVL25 Uber 2022-05-17 1 SVL26 Uber 2022-05-18 1 SVL27 Uber 2022-05-19 1 SVL28 Uber 2022-05-20 1 SVL29 Uber 2022-05-21 1 SVL30 Uber 2022-05-22 1 SVL31 Uber 2022-05-23 1 SVL32 Uber 2022-05-24 1 SVL33 Uber 2022-05-25 1 SVL34 Uber 2022-05-26 1 SVL35 Twitter 2022-04-26 2 FR36 Twitter 2022-04-27 2 FR
how to multiply three arrays with different dimension in PyTorch enter image description hereL array dimension is (d,a) ,B is (a,a,N) and R is (a,d). By multiplying these arrays I have to get an array size of (d,d,N). How could I implement this is PyTorch
A possible and straightforward approach is to apply torch.einsum (read more here):>>> torch.einsum('ij,jkn,kl->iln', L, B, R)Where j and k are the reduced dimensions of L and R respectively. And n is the "batch" dimension of B.The first matrix multiplication will reduce L@B (let this intermediate result be o):ij,jkn->iknThe second matrix multiplication will reduce o@R:ikn,kl->ilnWhich overall sums up to the following form:ij,jkn,kl->iln
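A runnable sketch with small placeholder dimensions, just to check that the shapes come out as (d, d, N):

```python
import torch

d, a, N = 2, 3, 4
L = torch.randn(d, a)       # (d, a)
B = torch.randn(a, a, N)    # (a, a, N)
R = torch.randn(a, d)       # (a, d)

out = torch.einsum('ij,jkn,kl->iln', L, B, R)
print(out.shape)            # torch.Size([2, 2, 4]), i.e. (d, d, N)
```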
Retain all columns after resample (pandas) My data looks like so:import pandas as pdimport numpy as npBG_test_df = pd.DataFrame( {'PERSON_ID': [1, 1, 1], 'TS': ['2021-08-14 19:00:27', '2021-08-14 20:00:27', '2021-08-14 22:35:27'], 'bias': ["Not outside of acceptable operation. Refer to patient education","Not outside of acceptable operation. Refer to patient education","Suboptimal"]} )CGM_test_df = pd.DataFrame( {'PERSON_ID': [1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1], 'SG': [50, 51, 52, 53, 54, 55, 400, 400, 400, 400, 400, 400, 400, 400,50, 51, 52, 53, 54, 55, 400], 'TS': ['2021-08-14 18:30:27','2021-08-14 18:35:27','2021-08-14 18:40:27','2021-08-14 18:45:27','2021-08-14 18:50:27','2021-08-14 18:55:27', '2021-08-14 19:00:27', '2021-08-14 19:30:27','2021-08-14 19:35:27','2021-08-14 19:40:27','2021-08-14 19:45:27','2021-08-14 19:50:27','2021-08-14 19:55:27','2021-08-14 20:00:27', '2021-08-14 20:30:27','2021-08-14 20:35:27','2021-08-14 20:40:27','2021-08-14 20:45:27','2021-08-14 20:50:27','2021-08-14 20:55:27','2021-08-14 21:00:27'] } )problematic = BG_test_df.loc[BG_test_df['bias'] == "Suboptimal"]# Convert to datetimeproblematic['BG_TS'] = pd.to_datetime(problematic['TS'])CGM_test_df['CGM_TS'] = pd.to_datetime(CGM_test_df['TS'])merged = CGM_test_df.merge(problematic, on = "PERSON_ID")#resample in 5 min intervals fill in empty rows with nafilled = (merged.set_index('CGM_TS').resample('5T').sum().reset_index())filled.replace(0, np.nan, inplace=True)When I perform a resample to set the CGM_TS column to have 5 minute intervals, I lose my other columns. In particular, I need the BG_TS column to continue the rest of my analysis. How can I retain the BG_TS column in the filled dataset?Thanks in advance
You can specify a different aggregation function (e.g. min) for BG_TS to keep it in the result:merged.set_index('CGM_TS').resample('5T').agg({'PERSON_ID':np.sum, 'SG':np.sum, 'BG_TS':np.min}).reset_index()Output (for your sample data): CGM_TS PERSON_ID SG BG_TS0 2021-08-14 18:30:00 1 50 2021-08-14 22:35:271 2021-08-14 18:35:00 1 51 2021-08-14 22:35:272 2021-08-14 18:40:00 1 52 2021-08-14 22:35:273 2021-08-14 18:45:00 1 53 2021-08-14 22:35:274 2021-08-14 18:50:00 1 54 2021-08-14 22:35:275 2021-08-14 18:55:00 1 55 2021-08-14 22:35:276 2021-08-14 19:00:00 1 400 2021-08-14 22:35:277 2021-08-14 19:05:00 0 0 NaT...14 2021-08-14 19:40:00 1 400 2021-08-14 22:35:2715 2021-08-14 19:45:00 1 400 2021-08-14 22:35:2716 2021-08-14 19:50:00 1 400 2021-08-14 22:35:27...22 2021-08-14 20:20:00 0 0 NaT23 2021-08-14 20:25:00 0 0 NaT24 2021-08-14 20:30:00 1 50 2021-08-14 22:35:2725 2021-08-14 20:35:00 1 51 2021-08-14 22:35:2726 2021-08-14 20:40:00 1 52 2021-08-14 22:35:2727 2021-08-14 20:45:00 1 53 2021-08-14 22:35:2728 2021-08-14 20:50:00 1 54 2021-08-14 22:35:2729 2021-08-14 20:55:00 1 55 2021-08-14 22:35:2730 2021-08-14 21:00:00 1 400 2021-08-14 22:35:27
How to define combine loss function in keras? My model arch isI have two outputs, I want to train a model based on two outputs such as mse, and cross-entropy. At first, I used two keras lossmodel1.compile(loss=['mse','sparse_categorical_crossentropy'], metrics = ['mse','accuracy'], optimizer='adam')it's working fine, the problem is the cross entropy loss is very unstable, sometimes gives accuracy 74% in the next epoch shows 32%. I'm confused why is?Now if define customer loss.def my_custom_loss(y_true, y_pred): mse = mean_squared_error(y_true[0], y_pred[0]) crossentropy = binary_crossentropy(y_true[1], y_pred[1]) return mse + crossentropyBut it's not working, it showed a negative loss in total loss.
It is hard to judge the issue from the information given. One reason might be a batch size that is too small or a learning rate that is too high, making the training unstable. I also wonder why you use sparse_categorical_crossentropy in the top example but binary_crossentropy in the lower one. How many classes do you actually have?
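If the goal is simply to weight the two losses differently, Keras can do that without a hand-written combined function. A minimal sketch with a toy two-output model; the output names, layer shapes, and loss weights are illustrative, not taken from your model:

```python
from tensorflow import keras
from tensorflow.keras import layers

inp = keras.Input(shape=(16,))
h = layers.Dense(32, activation='relu')(inp)
reg_out = layers.Dense(1, name='reg_out')(h)                          # regression head
cls_out = layers.Dense(10, activation='softmax', name='cls_out')(h)   # classification head
model = keras.Model(inp, [reg_out, cls_out])

# Each output keeps its own loss; loss_weights controls their relative
# contribution to the total loss that is minimised.
model.compile(optimizer='adam',
              loss={'reg_out': 'mse', 'cls_out': 'sparse_categorical_crossentropy'},
              loss_weights={'reg_out': 1.0, 'cls_out': 0.5},
              metrics={'reg_out': ['mse'], 'cls_out': ['accuracy']})
```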
Date matching not working on date and object? I have a variable which holds a date inputted by the user and converts it to date format using this code:correct_date = "2022-06-08"correct_date = dt.datetime.strptime(correct_date,'%Y-%m-%d').date()I also have some embedded SQL in the same script that returns dates in YYYY-MM-DD format; these are saved into a dataframe:actual_dates = pd.read_sql_query( sql = f""" SELECT DATE(CONTACTDATETIME) AS CONTACT_DATE FROM TABLE1 GROUP BY DATE(CONTACTDATETIME); """, con = connection)If work carried out elsewhere was done correctly, there should only be one date in the results from the SQL, which should match the date that was entered into the correct_date variable.What I want to do is check whether this is the case. I have the code below to do this, but the problem is it always returns FAIL even when the only value in actual_dates matches correct_date.if actual_dates["contact_date"].any() != correct_date: print("FAIL")else: print("SUCCESS") Does anyone know where I may be going wrong please? I have the suspicion it's because Python doesn't recognise a date and an object as being the same thing even when they're in YYYY-MM-DD format. Is this correct and if so does anyone know how I can work around this to achieve the required result please?
In the above case, you are comparing strings with a date object, so the check never behaves the way you expect. Instead, convert the strings in the column to dates first and then compare. Note that strptime works on a single string, not on a whole Series, so apply it per value; the boolean Series of mismatches can then be checked with .any(). Using the dt alias from your question:

parsed = actual_dates["contact_date"].apply(lambda d: dt.datetime.strptime(str(d), '%Y-%m-%d').date())
if (parsed != correct_date).any():
    print("FAIL")
else:
    print("SUCCESS")
Merge three different dataframes in Python I want to merge three data frames in Python, the code I have now provide me with some wrong outputs.This is the first data frame df_1 Year Month X_1 Y_1 0 2021 January $90 $100 1 2021 February NaN $120 2 2021 March $100 $130 3 2021 April $110 $140 4 2021 May Nan $150 5 2019 June $120 $160 This is the second data frame df_2 Year Month X_2 Y_2 0 2021 January Nan $120 1 2021 February NaN $130 2 2021 March $80 $140 3 2021 April $90 $150 4 2021 May Nan $150 5 2021 June $120 $170This is the third data frame df_3 Year Month X_3 Y_3 0 2021 January $110 $150 1 2021 February $140 $160 2 2021 March $97 $170 3 2021 April $90 $180 4 2021 May Nan $190 5 2021 June $120 $200The idea is to combine them into one data frame like this: df_combined Year Month X_1 Y_1 X_2 Y_2 X_3 Y_30 2019 January $90 $100 NaN $120 $110 $150 1 2019 February NaN $120 NaN $130 $140 $1602 2019 March $100 $130 $80 $140 $97 $1703 2019 April $110 $140 $90 $150 $90 $1804 2019 May Nan $150 Nan $150 Nan $1905 2019 June $120 $160 $120 $170 $120 $200The code I have for now does not give me the correct outcome, only df_3 has to the correct numbers. # compile the list of data frames you want to merge import functools as ft from functools import reduce data_frames = [df_1, df_2, df_3] df_merged = reduce(lambda cross, right: pd.merge(cross,right,on=['Year'], how='outer'),data_frames) #remove superfluous columns df_merged.drop(['Month_x', 'Month_y'], axis=1, inplace=True)
You can try withdf_1.merge(df_2, how='left', on=['Year', 'Month']).merge(df_3, how='left', on=['Year', 'Month'])
Python: correlation co-efficient between two sets of data Correlation co-efficient calculation in Python. How would I calculate the correlation coefficient using Python between the spring training wins column and the regular-season wins column?

Name     Spr.TR   Reg Szn
Team B   0.429    0.586
Team C   0.417    0.646
Team D   0.569    0.6
Team E   0.569    0.457
Team F   0.533    0.563
Team G   0.724    0.617
Team H   0.5      0.64
Team I   0.577    0.649
Team J   0.692    0.466
Team K   0.5      0.477
Team L   0.731    0.699
Team M   0.643    0.588
Team N   0.448    0.531
You can use corr (Pearson correlation by default):df['Spr.TR'].corr(df['Reg Szn'], method='pearson')output: 0.10811116955657629
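For example, with the first few rows of the table loaded into a DataFrame:

```python
import pandas as pd

df = pd.DataFrame({
    'Name':    ['Team B', 'Team C', 'Team D', 'Team E'],
    'Spr.TR':  [0.429, 0.417, 0.569, 0.569],
    'Reg Szn': [0.586, 0.646, 0.600, 0.457],
})

r = df['Spr.TR'].corr(df['Reg Szn'], method='pearson')
print(r)
```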
How to make a for loop with a pandas DataFrame? I want to write a for loop over a DataFrame but couldn't find the right syntax for this situation. Below is an overview of the function I want to implement. In detail, I want to make a new column named [df] which is calculated from column [f_adj]'s values. (Image of Excel) How do I fix this code?

df1['df'] = 0
for i in range(1,len(df1)-1):
    df1['df'[i]] = df1['f_adj'[i+1]] - df1['f_adj'[i-1]]

Thank you in advance
You should use .iloc (positional) or .loc (label-based) indexing on the DataFrame/Series instead of indexing into the column-name string: df1['f_adj'].iloc[i + 1] rather than df1['f_adj'[i+1]], and df1.loc[i, 'df'] for the assignment. See the sketch below.
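A minimal sketch with made-up f_adj values (the Excel screenshot is not available), first fixing the loop with .iloc/.loc and then showing an equivalent vectorised form with shift():

```python
import pandas as pd

df1 = pd.DataFrame({'f_adj': [10.0, 12.0, 15.0, 14.0, 18.0]})

# Loop version: positional access with .iloc, assignment with .loc.
df1['df'] = 0.0
for i in range(1, len(df1) - 1):
    df1.loc[i, 'df'] = df1['f_adj'].iloc[i + 1] - df1['f_adj'].iloc[i - 1]

# Vectorised equivalent, no explicit loop needed.
df1['df_vec'] = (df1['f_adj'].shift(-1) - df1['f_adj'].shift(1)).fillna(0)
print(df1)
```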
python pandas: attempting to replace value in row updates ALL rows I have a simple CSV file named input.csv as follows:name,moneyDan,200Jimmy,xdAlice,15Deborah,30I want to write a python script that sanitizes the data in the money column:every value that has non-numerical characters needs to be replaced with 0This is my attempt so far:import pandas as pddf = pd.read_csv( "./input.csv", sep = ",")# this line is the problem: it doesn't update on a row by row basis, it updates all rowsdf['money'] = df['money'].replace(to_replace=r'[^0‐9]', value=0, regex=True)df.to_csv("./output.csv", index = False)The problem is that when the script runs, because the invalud money value xd exists on one of the rows, it will change ALL money values to 0 for ALL rows.I want it to ONLY change the money value for the second data row (Jimmy) which has the invalid value.this is what it gives at the end:name,moneyDan,0Jimmy,0Alice,0Deborah,0but what I need it to give is this:name,moneyDan,200Jimmy,0Alice,15Deborah,30What is the problem?
You can use:df['money'] = pd.to_numeric(df['money'], errors='coerce').fillna(0).astype(int)The above assumes all valid values are integers. You can leave off the .astype(int) if you want float values.Another option would be to use a converter function in the read_csv method. Again, this assumes integers. You can use float(x) in place of int(x) if you expect float money values:def convert_to_int(x): try: return int(x) except ValueError: return 0df = pd.read_csv( 'input.csv', converters={'money': convert_to_int})
How do you lookup a particular pandas dataframe column value in a reference table and copy a reference table value to the dataframe? I have a reference table that I imported into a dataframe(df2) from a .csv. It's 3 columns and around 400 rows. I have another dataframe (df) that has many columns and rows. I am looking to lookup a value from the reference table and add it to the appropriate column in df.The data format for the reference table:MANUF PRODTYPE PRODCODE ALPHA 1 ALPHA1ALPHA 2 ALPHA2BETA 1 BETA1BETA 2 BETA2DELTA 1 DELTA1DELTA 2 DELTA2The dataframe (df) is set up like this:MANUF PRODTYPE SERIALNO PRODCODE INVENTORY ALPHA 1 00001 5ALPHA 2 00001 3BETA 1 00001 4DELTA 1 00001 8ALPHA 2 00002 3BETA 1 00002 4DELTA 2 00001 9DELTA 2 00002 9DELTA 1 00002 8BETA 2 00001 12ALPHA 2 00003 3I am trying to populate PRODCODE in df with the appropriate value based on MANUF and PRODTYPE in the reference table.I tried:df3 = df.merge(df2, how='left') anddf3 = df2.merge(df, how='left')but both gave me either inaccurate or incomplete merges.
Another way without merge would be this:df2 = df2.set_index(['MANUF', 'PRODTYPE'])output = df2.combine_first(df1.set_index(['MANUF', 'PRODTYPE'])).reset_index()print(output) MANUF PRODTYPE INVENTORY PRODCODE SERIALNO0 ALPHA 1 5 ALPHA1 11 ALPHA 2 3 ALPHA2 12 ALPHA 2 3 ALPHA2 23 ALPHA 2 3 ALPHA2 34 BETA 1 4 BETA1 15 BETA 1 4 BETA1 26 BETA 2 12 BETA2 17 DELTA 1 8 DELTA1 18 DELTA 1 8 DELTA1 29 DELTA 2 9 DELTA2 110 DELTA 2 9 DELTA2 2Used Input:df1 = pd.DataFrame({'MANUF': {0: 'ALPHA', 1: 'ALPHA', 2: 'BETA', 3: 'BETA', 4: 'DELTA', 5: 'DELTA'}, 'PRODTYPE': {0: 1, 1: 2, 2: 1, 3: 2, 4: 1, 5: 2}, 'PRODCODE': {0: 'ALPHA1', 1: 'ALPHA2', 2: 'BETA1', 3: 'BETA2', 4: 'DELTA1', 5: 'DELTA2'}})df2 = pd.DataFrame({'MANUF': {0: 'ALPHA', 1: 'ALPHA', 2: 'BETA', 3: 'DELTA', 4: 'ALPHA', 5: 'BETA', 6: 'DELTA', 7: 'DELTA', 8: 'DELTA', 9: 'BETA', 10: 'ALPHA'}, 'PRODTYPE': {0: 1, 1: 2, 2: 1, 3: 1, 4: 2, 5: 1, 6: 2, 7: 2, 8: 1, 9: 2, 10: 2}, 'SERIALNO': {0: 1, 1: 1, 2: 1, 3: 1, 4: 2, 5: 2, 6: 1, 7: 2, 8: 2, 9: 1, 10: 3}, 'INVENTORY': {0: 5, 1: 3, 2: 4, 3: 8, 4: 3, 5: 4, 6: 9, 7: 9, 8: 8, 9: 12, 10: 3}})
How to account for value counts that doesn't exist in python? I have the following dataframe: Name----------0 Blue1 Blue2 Blue3 Red4 Red5 Blue6 Blue7 Red8 Red9 BlueI want to count the number of times "Name" = "Blue" and "Name" = "Red" and send that to a dictionary, which for this df would look like:print('Dictionary:')dictionary = df['Name'].value_counts().to_dict()and output the following:Dictionary:{'Blue': 5, 'Red': 4}Ok, straightforward there. So for context, with my data, I KNOW that the only possibilities for "Names" is either "Blue" or "Red". And so I want to account for other dataframes with the same "Name" column, but different frequencies of "Blue" and "Red". Specifically, since the above code works fine, I want to account for instances where there are either NO counts of "Blue" or NO counts of "Red".And so, if the above df looked like: Name----------0 Blue1 Blue2 Blue3 Blue4 Blue5 Blue6 Blue7 Blue8 Blue9 BlueI would want the output dictionary via:print('Dictionary:')dictionary = df['Name'].value_counts().to_dict()to produce:Dictionary:{'Blue': 9, 'Red': 0}However, as the code stands, the following is actually produced:Dictionary:{'Blue': 9}I need that 0 value in there for use in another operation. I would like the same to be true if all of the "Name" names were "Red", and so producing:Dictionary:{'Blue': 0, 'Red': 9}and not:Dictionary:{'Red': 9}The problem is that I am running into a situation where I face the issue of counting the frequency of a value (a string occurrence here) that just does not exist. How can I fix my python code so that if the "Name" blue or red never occur, the dictionary will still include that "Name" in the dictionary, but just mark its value as 0?
In Python 3.9+ you can use PEP 584's Union Operator:base = {'Blue': 0, 'Red': 0}counts = df['Name'].value_counts().to_dict()dictionary = base | counts# or justdictionary = {'Blue': 0, 'Red': 0} | df['Name'].value_counts().to_dict()Before that you could use unpacking and (re)packing:base = {'Blue': 0, 'Red': 0}counts = df['Name'].value_counts().to_dict()dictionary = {**base, **counts}You could also use .update,dictionary = {'Blue': 0, 'Red': 0}dictionary.update(df['Name'].value_counts().to_dict())Or iterate over values and use .setdefault:dictionary = df['Name'].value_counts().to_dict()for k in ['Blue', 'Red']: dictionary.setdefault(k, 0)I'm sure there are other ways as well.
Correlation between two data frames in Python I have a DataFrame with Job Area Profiles which look similar to this:Now I have some user input, which creates an user DataFrame. This looks like this:Now, I want to determine the correlation between User XYZ's Profile and the profile for Cloud and Data Science.I've tried this:job_df.corrwith(user_df)But this is getting me NaN.How do I solve this?
The function is working, but you cannot compute a correlation from a dataframe consisting of only one data point, since you'll get a divide by zero. Both the numerator and the denominator of the Pearson correlation coefficient, r = sum((x_i - mean(x)) * (y_i - mean(y))) / sqrt(sum((x_i - mean(x))^2) * sum((y_i - mean(y))^2)), involve the differences between the data points and their mean; with a single data point every such difference is zero. It therefore returns NaNs. If you run the function with the full datasets, you'll be fine.
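A small illustration with made-up numbers:

```python
import pandas as pd

# A single row per frame: every column has zero variance, so corrwith gives NaN.
a = pd.DataFrame({'x': [1.0], 'y': [2.0]})
b = pd.DataFrame({'x': [1.5], 'y': [2.5]})
print(a.corrwith(b))

# Several rows per column: the correlation is well defined.
a = pd.DataFrame({'x': [1.0, 2.0, 3.0], 'y': [2.0, 1.0, 0.5]})
b = pd.DataFrame({'x': [1.5, 2.5, 3.0], 'y': [2.5, 1.5, 1.0]})
print(a.corrwith(b))
```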
Multiplicate Dataframe row with a matrix I am trying to multiplicate a dataframe with with a matrix consisting of items from the dataframe.I am able to solve the problem with a for-loop, but with a large dataframe it takes very long.df = pd.DataFrame({"A": [1, 2, 3, 4], "B": [5, 6, 7, 8], "C": [9, 10, 11, 12], "D": [1, 1, 1, 1]})l = []for index, row in df.iterrows(): l.append(df.loc[index].dot(np.array([[np.sin(df["A"].loc[index]), 0, 0, np.sin(df["A"].loc[index])], [0, np.sign(df["B"].loc[index]), 0, np.abs(df["C"].loc[index])], [np.sign(df["C"].loc[index]), 0, np.sign(df["C"].loc[index]), 0], [1, 2, 0, np.tan(df["C"].loc[index])]])))df[["U", "V", "W", "X"]] = lprint(df)Thanks for your help.
It may be easier to work with arrays, rather than a dataframe. Indexing will be lot simplerThe frame's numpy values:In [46]: df.valuesOut[46]: array([[ 1, 5, 9, 1], [ 2, 6, 10, 1], [ 3, 7, 11, 1], [ 4, 8, 12, 1]], dtype=int64)And for one "row", the 2d array is:In [47]: index = 0 In [48]: np.array([[np.sin(df["A"].loc[index]), 0, 0, np.sin(df["A"].loc[index])], ...: [0, np.sign(df["B"].loc[index]), 0, np.abs(df["C"].loc[index])], ...: [np.sign(df["C"].loc[index]), 0, np.sign(df["C"].loc[index]), 0], ...: [1, 2, 0, np.tan(df["C"].loc[index])]]) Out[48]: array([[ 0.84147098, 0. , 0. , 0.84147098], [ 0. , 1. , 0. , 9. ], [ 1. , 0. , 1. , 0. ], [ 1. , 2. , 0. , -0.45231566]])In [52]: Out[46][0].dot(Out[48])Out[52]: array([10.84147098, 7. , 9. , 45.38915533])compare that with your applyIn [51]: lOut[51]: [array([10.84147098, 7. , 9. , 45.38915533]), array([12.81859485, 8. , 10. , 62.46695568]), array([ 12.42336002, 9. , 11. , -148.52748643]), array([ 9.97279002, 10. , 12. , 92.33693009])]In array terms, the 2d array is:In [53]: x = df.valuesIn [56]: index=0In [57]: np.array([[np.sin(x[index,0]), 0, 0, np.sin(x[index,0])], ...: [0, np.sign(x[index,1]), 0, np.abs(x[index,2])], ...: [np.sign(x[index,2]), 0, np.sign(x[index,2]), 0], ...: [1, 2, 0, np.tan(x[index,2])]])Out[57]: array([[ 0.84147098, 0. , 0. , 0.84147098], [ 0. , 1. , 0. , 9. ], [ 1. , 0. , 1. , 0. ], [ 1. , 2. , 0. , -0.45231566]])To do this faster we need to construct such an array for all rows of x at once.In einsum matrix multiplication terms, the row operation is: np.einsum('j,jk->k',x,A)generalized, we need a 3d array such that np.einsum('ij,ijk->ik',x,A)We could iterate on index to produce the 3d A. We can't simply replace the scalar index with a slice or arange.By defining a couple of variables, we can construct the 3d A with:In [64]: Z = np.zeros(4); index=np.arange(4)In [65]: A=np.array([[np.sin(x[index,0]), Z, Z, np.sin(x[index,0])], ...: [Z, np.sign(x[index,1]), Z, np.abs(x[index,2])], ...: [np.sign(x[index,2]), Z, np.sign(x[index,2]), Z], ...: [Z+1, Z+2, Z, np.tan(x[index,2])]])In [66]: A.shapeOut[66]: (4, 4, 4)This has placed the index dimension last.In [67]: A[:,:,0]Out[67]: array([[ 0.84147098, 0. , 0. , 0.84147098], [ 0. , 1. , 0. , 9. ], [ 1. , 0. , 1. , 0. ], [ 1. , 2. , 0. , -0.45231566]])So the einsum needs to be:In [68]: res=np.einsum('ij,jki->ik',x,A)In [69]: resOut[69]: array([[ 10.84147098, 7. , 9. , 45.38915533], [ 12.81859485, 8. , 10. , 62.46695568], [ 12.42336002, 9. , 11. , -148.52748643], [ 9.97279002, 10. , 12. , 92.33693009]])This matches your l values.The 3d A could be constructed other ways, but I chose this as requiring a minimum of editing.
Optimal way for "Lookup" type operations between multiple dataframes A common task I seem to have is something like this:DataFrame A contains among its columns an "id" with some kind of "price" and "description". This would typically be a very large dataset.And there are 2 much smaller DataFrames: one containing columns where the (few) "descriptions" are linked to a "type" and another small DataFrame links the types to a price modifier.The operations would be to modify the price column in DataFrame A, using a "Lookup", to find its "type" via the description and in turn find the price modifierFor example,id, desc, price12345, sausage, £3...id, desc, typesausage, meattype, modifiermeat, -0.5My clumsy way of doing this has been as follows:Joining in the extra tables via some common key, which has just been lucky (that they shared a common key)dfA = dfA.join(dfB.set_index('key'), on='key')but that just ends up with the "Lookup" values from a small table being copied into every row of the original DataFrame A, which to me just seems like a clumsy bozo way to do it.The advantage is that it is now easier to do column-wise logic in the newly updated DataFrame A.The kind of functionality I would like (but do not know how to word well enough to google is this:)For the entire DataFrame A, use the value in some column (let's assume the value to be "a") to "lookup" in DataFrame B, to find "a" in some column (which might or might not have the same name as A, but the values will be there), and in turn this value "b" could be referenced into DataFrame C to get some other value etc.In muppet logic, with 1 lookup level:dfA['new_value'] = df['old_value'] + (use value in dfA'type' column to look into dfB column 'bcol' to get the value in dfB'offset')and then a 2 lookup level would instead use the value in the dfB 'offset' column to look at dfC column 'modifier' etc.Is this kind of logic or operation known as something that I can google around a bit, it feels like I often have to do this kind of thing, using smaller tables of modifiers to reference directly or in several chained steps to adjust something in the much bigger main data table(an example would be great but right now I am hitting that frustration wall of not really having a good intuition for this kind of logic since the power of dataframes is all these kind of "behind the curtain" operations using syntax that looks like its addressing only a single cell!, I always think in terms of old style basic type loops and stuff.I finally found a nice explanation of lambda functions where I can understand the "x" being "the thing in this cell" but I am still learning!)sorry this became a bit long and rambling, if you can suggest a better title I am happy to edit this in retrospect if it will be useful to others in future
Let's say you have the following DataFrames:
DataFrame A (df_A)
id  desc     price
1   sausage  3
2   cheese   1.5
3   eggs     5
4   milk     2.5
DataFrame B (df_B)
id  desc     type
1   sausage  meat
2   eggs     poultry
3   cheese   dairy
4   milk     dairy
DataFrame C (df_C)
id  type     modifier
1   meat     -0.5
2   poultry  0.75
3   dairy    -1
To get the new price of all items in DataFrame A, you can do something like
def get_modified_price(row):
    # Gets the item description
    desc = row['desc']
    # Gets the item type (a single scalar, hence .iloc[0])
    item_type = df_B.loc[df_B['desc'] == desc, 'type'].iloc[0]
    # Gets the item modifier
    modifier = df_C.loc[df_C['type'] == item_type, 'modifier'].iloc[0]
    # Returns the modified price
    return row['price'] + modifier
df_A['modified_price'] = df_A.apply(get_modified_price, axis=1)
To learn more about these kinds of operations, you can look into row-wise operations and pandas functions such as Series.map and DataFrame.apply.
Also, since you mentioned lambda functions in your question, you can also write the code above as
df_A['modified_price'] = df_A.apply(lambda row: row['price'] + df_C.loc[df_C['type'] == df_B.loc[df_B['desc'] == row['desc'], 'type'].iloc[0], 'modifier'].iloc[0], axis=1)
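Row-wise apply can get slow on large frames; the same chained lookup can also be written with left merges (only a sketch on my part, reusing the column names from the tables above, not part of the original answer):
# Chain the two lookups with merges, then add the modifier to the price
looked_up = (df_A
    .merge(df_B[['desc', 'type']], on='desc', how='left')
    .merge(df_C[['type', 'modifier']], on='type', how='left'))
df_A['modified_price'] = looked_up['price'] + looked_up['modifier']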
How to show subplots for each row Currently having an issue getting all of my data to show on my subplots. I'm trying to plot a 7 row 6 column subplot using geodataframes. This is what one of the geodataframes looks like (they all look the same).My data is below:# what I want to label the y axis for each rowylab = ['mean_ensemble','mean_disalexi','mean_eemetric','mean_geesebal','mean_ptjpl','mean_ssebop','mean_sims']# the years I want to plot and what the name of each column in the geodataframes areyears = ['2016', '2017', '2018', '2019', '2020', '2021']# the 7 geodataframesgraph = [mean_ensemble,mean_disalexi,mean_eemetric,mean_geesebal,mean_ptjpl,mean_ssebop,mean_sims]f, ax = plt.subplots(nrows = 7, ncols = 6, figsize = (12, 12))ax = ax.flatten()i=0for y, col in enumerate(years): graph[i].plot(column=col, ax=ax[y], legend=True, cmap='Blues') ax[y].axis('off') plt.title(str(y)) i+=1plt.show()This is what I end up with.I also want a title for the overall subplot that says "Mean ET Data for SD-6 Area". I'm not sure if I'm missing anything so any help would be appreciated.
I think you need another loop to go through rows as well as columns. Hard to replicate exactly without your data sets, but I'd suggest something like this:f, axs = plt.subplots(nrows=7, ncols=6, figsize = (12, 12))for i in range(7): for j in range(6): graph[i].plot(column=years[j], ax=axs[i,j], legend=True, cmap='Blues') axs[i,j].axis('off') if i == 0: axs[i,j].set_title(ylab[j])
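For the overall title mentioned in the question, a figure-level title can be added after the loops (a small addition on top of the answer above, using the figure handle f from that code):
f.suptitle("Mean ET Data for SD-6 Area")
plt.show()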
How to update the data frame column values from another data frame based a conditional match in pandas I have two dataframes as:df_A:{'last_name': {0: 'Williams', 1: 'Henry', 2: 'XYX', 3: 'Smith', 4: 'David', 5: 'Freeman', 6: 'Walter', 7: 'Test_A', 8: 'Mallesham', 9: 'Mallesham', 10: 'Henry', 11: 'Smith'}, 'first_name': {0: 'Henry', 1: 'Williams', 2: 'ABC', 3: 'David', 4: 'Smith', 5: 'Walter', 6: 'Freeman', 7: 'Test_B', 8: 'Yamulla', 9: 'Yamulla', 10: 'Williams', 11: 'David'}, 'full_name': {0: 'Williams Henry', 1: 'Henry Williams', 2: 'XYX ABC', 3: 'Smith David', 4: 'David Smith', 5: 'Freeman Walter', 6: 'Walter Freeman', 7: 'Test_A Test_B', 8: 'Mallesham Yamulla', 9: 'Mallesham Yamulla', 10: 'Henry Williams', 11: 'Smith David'}, 'name_unique_identifier': {0: 'NAME_GROUP-11', 1: 'NAME_GROUP-11', 2: 'NAME_GROUP-12', 3: 'NAME_GROUP-13', 4: 'NAME_GROUP-13', 5: 'NAME_GROUP-14', 6: 'NAME_GROUP-14', 7: 'NAME_GROUP-15', 8: 'NAME_GROUP-16', 9: 'NAME_GROUP-16', 10: 'NAME_GROUP-11', 11: 'NAME_GROUP-13'}} last_name first_name full_name name_unique_identifier0 Williams Henry Williams Henry NAME_GROUP-111 Henry Williams Henry Williams NAME_GROUP-112 XYX ABC XYX ABC NAME_GROUP-123 Smith David Smith David NAME_GROUP-134 David Smith David Smith NAME_GROUP-135 Freeman Walter Freeman Walter NAME_GROUP-146 Walter Freeman Walter Freeman NAME_GROUP-147 Test_A Test_B Test_A Test_B NAME_GROUP-158 Mallesham Yamulla Mallesham Yamulla NAME_GROUP-169 Mallesham Yamulla Mallesham Yamulla NAME_GROUP-1610 Henry Williams Henry Williams NAME_GROUP-1111 Smith David Smith David NAME_GROUP-13df_B:{'name_unique_identifier': {0: 'NAME_GROUP-11', 1: 'NAME_GROUP-13', 2: 'NAME_GROUP-14'}, 'full_name': {0: 'Henry Williams', 1: 'Smith David', 2: 'Freeman Walter'}, 'last_name': {0: 'Henry', 1: 'Smith', 2: 'Freeman'}, 'first_name': {0: 'Williams', 1: 'David', 2: 'Walter'}} name_unique_identifier full_name last_name first_name0 NAME_GROUP-11 Henry Williams Henry Williams1 NAME_GROUP-13 Smith David Smith David2 NAME_GROUP-14 Freeman Walter Freeman WalterHere wherever the name_unique_identifier exists in df_A and df_B, df_A dataframe column's last_name,first_name to be filled in with df_B last_name,first_name, the non matched entries not required to be updated.Example:NAME_GROUP-14 exists in df_A and df_B. So last_name and first_name in df_A for this identifier should be as 'Freeman','Walter'.As I'm dealing with millions of records, an efficient technique is needed.
You can check each unique value in column=name_unique_identifier from df_B where exist in df_A and then insert the value from df_B to df_A.col = 'name_unique_identifier'for val in df_B[col]: msk_A = df_A[col].eq(val) msk_B = df_B[col].eq(val) df_A.loc[msk_A, ['last_name', 'first_name']] = df_B.loc[msk_B, ['last_name', 'first_name']].values# If you want to update 'full_name' base new values of 'last_name' and 'first_name'df_A['full_name'] = df_A['last_name'] + " " + df_A['first_name']print(df_A) last_name first_name full_name name_unique_identifier0 Williams Henry Williams Henry NAME_GROUP-111 Henry Williams Henry Williams NAME_GROUP-112 XYX ABC XYX ABC NAME_GROUP-123 Smith David Smith David NAME_GROUP-134 David Smith David Smith NAME_GROUP-135 Freeman Walter Freeman Walter NAME_GROUP-146 Freeman Walter Freeman Walter NAME_GROUP-147 Test_A Test_B Test_A Test_B NAME_GROUP-158 Mallesham Yamulla Mallesham Yamulla NAME_GROUP-169 Mallesham Yamulla Mallesham Yamulla NAME_GROUP-1610 Henry Williams Henry Williams NAME_GROUP-1111 Smith David Smith David NAME_GROUP-13
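Since the question mentions millions of records, a vectorised alternative (a sketch of the idea, not part of the original answer) is to build a mapping from df_B and use Series.map, keeping the existing values where there is no match:
key = 'name_unique_identifier'
for c in ['last_name', 'first_name']:
    mapping = df_B.set_index(key)[c]
    df_A[c] = df_A[key].map(mapping).fillna(df_A[c])
# Rebuild full_name from the updated parts, as in the loop above
df_A['full_name'] = df_A['last_name'] + " " + df_A['first_name']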
dataframe error when comparing expression levels: TypeError: Unordered Categoricals can only compare equality or not I am working with an anndata object gleaned from analyzing single-cell RNAseq data using scanpy to obtain clusters. This is far along in the process (near completed) and I am now trying to obtain a list of the average expression of certain marker genes in the leiden clusters from my data. I am getting an error at the following point.# Backbone importsimport numpy as npimport pandas as pdimport matplotlib.pyplot as pltimport seaborn as snsfrom pathlib import Path# Single Cell importsimport anndataimport scanpy as scmarkers = ["MS4A1", "CD72", "CD37", "CD79A", "CD79B","CD19"]grouping_column = "leiden"df = sc.get.obs_df(hy_bc, markers + [grouping_column])mean_expression = df.loc[:, ~df.columns.isin([grouping_column])].mean(axis=0)mean_expression:MS4A1 1.594015CD72 0.421510CD37 1.858241CD79A 1.801162CD79B 1.180483CD19 0.430246dtype: float32df, mean_expression = df.align(mean_expression, axis=1, copy=False)Error happens hereg = (df > mean_expression).groupby(grouping_column) ---------------------------------------------------------------------------TypeError Traceback (most recent call last)Input In [88], in <cell line: 1>()----> 1 g = (df > mean_expression).groupby(grouping_column)File C:\ProgramData\Anaconda3\envs\JHH216-hT246\lib\site-packages\pandas\core\ops\common.py:70, in _unpack_zerodim_and_defer.<locals>.new_method(self, other) 66 return NotImplemented 68 other = item_from_zerodim(other)---> 70 return method(self, other)File C:\ProgramData\Anaconda3\envs\JHH216-hT246\lib\site-packages\pandas\core\arraylike.py:56, in OpsMixin.__gt__(self, other) 54 @unpack_zerodim_and_defer("__gt__") 55 def __gt__(self, other):---> 56 return self._cmp_method(other, operator.gt)File C:\ProgramData\Anaconda3\envs\JHH216-hT246\lib\site-packages\pandas\core\frame.py:6934, in DataFrame._cmp_method(self, other, op) 6931 self, other = ops.align_method_FRAME(self, other, axis, flex=False, level=None) 6933 # See GH#4537 for discussion of scalar op behavior-> 6934 new_data = self._dispatch_frame_op(other, op, axis=axis) 6935 return self._construct_result(new_data)File C:\ProgramData\Anaconda3\envs\JHH216-hT246\lib\site-packages\pandas\core\frame.py:6985, in DataFrame._dispatch_frame_op(self, right, func, axis) 6979 # TODO: The previous assertion `assert right._indexed_same(self)` 6980 # fails in cases with empty columns reached via 6981 # _frame_arith_method_with_reindex 6982 6983 # TODO operate_blockwise expects a manager of the same type 6984 with np.errstate(all="ignore"):-> 6985 bm = self._mgr.operate_blockwise( 6986 # error: Argument 1 to "operate_blockwise" of "ArrayManager" has 6987 # incompatible type "Union[ArrayManager, BlockManager]"; expected 6988 # "ArrayManager" 6989 # error: Argument 1 to "operate_blockwise" of "BlockManager" has 6990 # incompatible type "Union[ArrayManager, BlockManager]"; expected 6991 # "BlockManager" 6992 right._mgr, # type: ignore[arg-type] 6993 array_op, 6994 ) 6995 return self._constructor(bm) 6997 elif isinstance(right, Series) and axis == 1: 6998 # axis=1 means we want to operate row-by-rowFile C:\ProgramData\Anaconda3\envs\JHH216-hT246\lib\site-packages\pandas\core\internals\managers.py:1409, in BlockManager.operate_blockwise(self, other, array_op) 1405 def operate_blockwise(self, other: BlockManager, array_op) -> BlockManager: 1406 """ 1407 Apply array_op blockwise with another (aligned) BlockManager. 
1408 """-> 1409 return operate_blockwise(self, other, array_op)File C:\ProgramData\Anaconda3\envs\JHH216-hT246\lib\site-packages\pandas\core\internals\ops.py:63, in operate_blockwise(left, right, array_op) 61 res_blks: list[Block] = [] 62 for lvals, rvals, locs, left_ea, right_ea, rblk in _iter_block_pairs(left, right):---> 63 res_values = array_op(lvals, rvals) 64 if left_ea and not right_ea and hasattr(res_values, "reshape"): 65 res_values = res_values.reshape(1, -1)File C:\ProgramData\Anaconda3\envs\JHH216-hT246\lib\site-packages\pandas\core\ops\array_ops.py:269, in comparison_op(left, right, op) 260 raise ValueError( 261 "Lengths must match to compare", lvalues.shape, rvalues.shape 262 ) 264 if should_extension_dispatch(lvalues, rvalues) or ( 265 (isinstance(rvalues, (Timedelta, BaseOffset, Timestamp)) or right is NaT) 266 and not is_object_dtype(lvalues.dtype) 267 ): 268 # Call the method on lvalues--> 269 res_values = op(lvalues, rvalues) 271 elif is_scalar(rvalues) and isna(rvalues): # TODO: but not pd.NA? 272 # numpy does not like comparisons vs None 273 if op is operator.ne:File C:\ProgramData\Anaconda3\envs\JHH216-hT246\lib\site-packages\pandas\core\ops\common.py:70, in _unpack_zerodim_and_defer.<locals>.new_method(self, other) 66 return NotImplemented 68 other = item_from_zerodim(other)---> 70 return method(self, other)File C:\ProgramData\Anaconda3\envs\JHH216-hT246\lib\site-packages\pandas\core\arrays\categorical.py:141, in _cat_compare_op.<locals>.func(self, other) 139 if not self.ordered: 140 if opname in ["__lt__", "__gt__", "__le__", "__ge__"]:--> 141 raise TypeError( 142 "Unordered Categoricals can only compare equality or not" 143 ) 144 if isinstance(other, Categorical): 145 # Two Categoricals can only be compared if the categories are 146 # the same (maybe up to ordering, depending on ordered) 148 msg = "Categoricals can only be compared if 'categories' are the same."TypeError: Unordered Categoricals can only compare equality or notCode I have, but have not run yet because of the error:frac = lambda z: sum(z) / z.shape[0]frac.__name__ = "pos_frac"g.aggregate([sum, frac])
It seems that your grouping column is a categorical column and not float or int. try adding this line after the instantiation of the dataframe.df = sc.get.obs_df(hy_bc, markers + [grouping_column])df[grouping_column] = df[grouping_column].astype('int64')another issue I noticed. the expression df > mean_expression will produce all false values in leiden because leiden has the value NaN in the mean expression. therefore when you use groupby, you will only have one group which is the value False. One group defeats the purpose of groupby. Not sure what are you trying to do but wanted to point that out.
module 'torch' has no attribute 'frombuffer' in Google Colab data_root = os.path.join(os.getcwd(), "data")transform = transforms.Compose([ transforms.ToTensor(), transforms.Normalize([0.5], [0.5]),])fashion_mnist_dataset = FashionMNIST(data_root, download = True, train = True, transform = transform)Error Message/usr/local/lib/python3.7/dist-packages/torchvision/datasets/mnist.py in read_sn3_pascalvincent_tensor(path, strict)524 # we need to reverse the bytes before we can read them with torch.frombuffer().525 needs_byte_reversal = sys.byteorder == "little" and num_bytes_per_value > 1--> 526 parsed = torch.frombuffer(bytearray(data), dtype=torch_type, offset=(4 * (nd + 1)))527 if needs_byte_reversal:528 parsed = parsed.flip(0)AttributeError: module 'torch' has no attribute 'frombuffer'what can i do for this err in Colab
I tried your code in my Google Colab by adding the code below (to import the libraries), and it works without errors.
import os
from torchvision import transforms
from torchvision.datasets import FashionMNIST
I used:
torchvision 0.13.0+cu113
google-colab 1.0.0
Runtime GPU (when I set "None," it also works)
Do you get errors when you also use the same code above? Do you use other versions?
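If the error still appears, it usually means the installed torch is older than what the torchvision build expects (torch.frombuffer was added in PyTorch 1.10, as far as I know), so upgrading both packages together in the Colab runtime should help. This is only a suggestion, not something verified in the original answer:
!pip install --upgrade torch torchvision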
How can I print the training and validation graphs, and training and validation loss graphs? I need to plot the training and validation accuracy, and the training and validation loss, for my model.model.compile(loss=tf.keras.losses.binary_crossentropy, optimizer=tf.keras.optimizers.Adam(learning_rate=learning_rate), metrics=['accuracy'])history = model.fit(X_train, y_train, batch_size=batch_size, epochs=no_epochs, verbose=verbosity, validation_split=validation_split)loss, accuracy = model.evaluate(X_test, y_test, verbose=1)
history object contains both accuracy and loss for both the training as well as the validation set. We can use matplotlib to plot from that. In these plots the x-axis is the number of epochs and the y-axis is the accuracy or loss value. Below is one basic implementation to achieve that; it can easily be customized according to requirements.
import matplotlib.pyplot as plt
def plot_history(history):
    acc = history.history["accuracy"]
    loss = history.history["loss"]
    val_loss = history.history["val_loss"]
    val_accuracy = history.history["val_accuracy"]
    x = range(1, len(acc) + 1)
    plt.figure(figsize=(12, 5))
    plt.subplot(1, 2, 1)
    plt.plot(x, acc, "b", label="training_acc")
    plt.plot(x, val_accuracy, "r", label="validation_acc")
    plt.title("Accuracy")
    plt.legend()
    plt.subplot(1, 2, 2)
    plt.plot(x, loss, "b", label="training_loss")
    plt.plot(x, val_loss, "r", label="validation_loss")
    plt.title("Loss")
    plt.legend()
plot_history(history)
Plot would look like below: Accuracy and Loss plot
python pandas substring based on columns values Given the following df:data = {'Description': ['with lemon', 'lemon', 'and orange', 'orange'], 'Start': ['6', '1', '5', '1'], 'Length': ['5', '5', '6', '6']}df = pd.DataFrame(data)print (df)I would like to substring the "Description" based on what is specified in the other columns as start and length, here the expected output:data = {'Description': ['with lemon', 'lemon', 'and orange', 'orange'], 'Start': ['6', '1', '5', '1'], 'Length': ['5', '5', '6', '6'], 'Res': ['lemon', 'lemon', 'orange', 'orange']}df = pd.DataFrame(data)print (df)Is there a way to make it dynamic or another compact way?df['Res'] = df['Description'].str[1:2]
You need to loop, a list comprehension will be the most efficient (python ≥3.8 due to the walrus operator, thanks @I'mahdi):df['Res'] = [s[(start:=int(a)-1):start+int(b)] for (s,a,b) in zip(df['Description'], df['Start'], df['Length'])]Or using pandas for the conversion (thanks @DaniMesejo):df['Res'] = [s[a:a+b] for (s,a,b) in zip(df['Description'], df['Start'].astype(int)-1, df['Length'].astype(int))]output: Description Start Length Res0 with lemon 6 5 lemon1 lemon 1 5 lemon2 and orange 5 6 orange3 orange 1 6 orangehandling non-integers / NAsdf['Res'] = [s[a:a+b] if pd.notna(a) and pd.notna(b) else 'NA' for (s,a,b) in zip(df['Description'], pd.to_numeric(df['Start'], errors='coerce').convert_dtypes()-1, pd.to_numeric(df['Length'], errors='coerce').convert_dtypes() )]output: Description Start Length Res0 with lemon 6 5 lemon1 lemon 1 5 lemon2 and orange 5 6 orange3 orange 1 6 orange4 pinapple xxx NA NA NA5 orangiie NA NA NA
What is PyTorch Dataset supposed to return? I'm trying to get PyTorch to work with DataLoader, this being said to be the easiest way to handle mini batches, which are in some cases necessary for best performance.DataLoader wants a Dataset as input.Most of the documentation on Dataset assumes you are working with an off-the-shelf standard data set e.g. MNIST, or at least with images, and can use existing machinery as a black box. I'm working with non-image data I'm generating myself. My best current attempt to distill the documentation about how to do that, down to a minimal test case, is:import torchfrom torch import nnfrom torch.utils.data import Dataset, DataLoaderclass Dataset1(Dataset): def __init__(self): pass def __len__(self): return 80 def __getitem__(self, i): # actual data is blank, just to test the mechanics of Dataset return [0.0, 0.0, 0.0], 1.0train_dataloader = DataLoader(Dataset1(), batch_size=8)for X, y in train_dataloader: print(f"X: {X}") print(f"y: {y.shape} {y.dtype} {y}") breakclass Net(nn.Module): def __init__(self): super(Net, self).__init__() self.layers = nn.Sequential( nn.Linear(3, 10), nn.ReLU(), nn.Linear(10, 1), nn.Sigmoid(), ) def forward(self, x): return self.layers(x)device = torch.device("cpu")model = Net().to(device)criterion = nn.BCELoss()optimizer = torch.optim.SGD(model.parameters(), lr=0.1)for epoch in range(10): for X, y in train_dataloader: X, y = X.to(device), y.to(device) pred = model(X) loss = criterion(pred, y) optimizer.zero_grad() loss.backward() optimizer.step()The output of the above program is:X: [tensor([0., 0., 0., 0., 0., 0., 0., 0.], dtype=torch.float64), tensor([0., 0., 0., 0., 0., 0., 0., 0.], dtype=torch.float64), tensor([0., 0., 0., 0., 0., 0., 0., 0.], dtype=torch.float64)]y: torch.Size([8]) torch.float64 tensor([1., 1., 1., 1., 1., 1., 1., 1.], dtype=torch.float64)Traceback (most recent call last): File "C:\ml\test_dataloader.py", line 47, in <module> X, y = X.to(device), y.to(device)AttributeError: 'list' object has no attribute 'to'In all the example code I can find, X, y = X.to(device), y.to(device) succeeds, because X is indeed a tensor (whereas it is not in my version). Now I'm trying to find out what exactly converts X to a tensor, because either the example code e.g. https://pytorch.org/tutorials/beginner/basics/quickstart_tutorial.html does not do so, or I am failing to understand how and where it does.Does Dataset itself convert things to tensors? The answer seems to be 'sort of'.It has converted y to a tensor, a column of the y value for every example in the batch. That much, makes sense, though it has used type float64, whereas in machine learning, we usually prefer float32. I am used to the idea that Python always represents scalars in double precision, so the conversion from double to single precision happens at the time of forming a tensor, and that this can be insured by specifying the dtype parameter. But in this case Dataset seems to have formed the tensor implicitly. Is there a place or way to specify the dtype parameter?X is not a tensor, but a list thereof. It would make intuitive sense if it were a list of the examples in the batch, but instead of a list of 8 elements each containing 3 elements, it's the other way around. So Dataset has transposed the input data, which would make sense if it is forming a tensor to match the shape of y, but instead of making a single 2d tensor, it has made a list of 1d tensors. (And, again, in double precision.) Why? 
Is there a way to change this behavior?The answer posted so far to Does pytorch Dataset.__getitem__ have to return a dict? says __getitem__ can return anything. Okay, but then how does the anything get converted to the form the training procedure requires?
The dataset instance is only tasked with returning a single element of the dataset, which can take many forms: a dict, a list, an int, a float, a tensor, etc. But the behaviour you are seeing is actually handled by your PyTorch data loader and not by the underlying dataset. This mechanism is called collating and its implementation is done by collate_fn. You can actually provide your own as an argument to a data.DataLoader. The default collate function is provided by PyTorch as default_collate and will handle the vast majority of cases. Please have a look at its documentation, as it gives insights on what possible use cases it can handle. With this default collate the returned batch will take the same types as the item you returned in your dataset. You should therefore return tensors instead of a list as @dx2-66 explained.
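Concretely, for the toy dataset in the question, returning float32 tensors from __getitem__ lets default_collate stack the samples into a (batch, 3) input tensor and a (batch,) target tensor. A minimal sketch of Dataset1.__getitem__ along those lines (my example, mirroring the question's dummy data):
def __getitem__(self, i):
    # Return tensors, not Python lists, so default_collate can stack them per batch
    x = torch.zeros(3, dtype=torch.float32)
    y = torch.tensor(1.0, dtype=torch.float32)
    return x, y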
How to remove the number of the Excel row? So I have to sort certain rows from an Excel file with pandas, save them to a text file and show them on a single page website. This is my code:dataTopDivision = pd.read_excel("files/Volleybal_Topdivisie_tussenstand.xlsx")dataTopDivision1 = dataTopDivision[['datum', 'team1', 'team2', 'uitslag', 'scheidsrechter', 'overtredingen']]data_sorted = dataTopDivision1.sort_values("overtredingen", ascending=False)top5 = data_sorted.head(5)blackbook = open("files/examples/zwartboek.txt", "w", encoding="UTF-8")blackbook.write(bamboo.prettify(top5))blackbook.close()On the website the top 5 shows up as desired, but the numbers of the rows are in front of the data like this: numbers of row in front of dataHow would I go about removing these numbers?I hope this is clear, I haven't got a lot of experience with this. Many thanks!
Can you try by adding index_col=NoneReplace your first linedataTopDivision = pd.read_excel("files/Volleybal_Topdivisie_tussenstand.xlsx",index_col=None)
Iterating over rows to find mean of a data frame in Python I have a dataframe of 100 random numbers and I would like to find the mean as follows:mean0 should have mean of 0,5,10,... rowsmean1 should have mean of 1,6,11,16,.... rows...mean4 should have mean of 4,9,14,... rows.So far, I am able to find the mean0 but I am not able to figure out a way to iterate the process in order to obtain the remaining means.My code is as follows:import numpy as npimport pandas as pdimport csvdata = np.random.randint(1, 100, size=100)df = pd.DataFrame(data)print(df)df.to_csv('example.csv', index=False)df1 = df[::5]print("Every 12th row is:\n",df1)df2 = df1.mean()print(df2)
Since df[::5] is equivalent to df[0::5], you could use df[1::5], df[2::5], df[3::5], and df[4::5] for the remaining dataframes, with subsequent application of the mean by df[i::5].mean(). This is the same [start:stop:step] slicing that Python lists use; it is not explicitly showcased in the Pandas documentation examples, but it works identically on DataFrames.
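Putting that together, one way (just a sketch) to collect all five means is a short loop, or equivalently a groupby on the row position modulo 5:
means = [df[i::5].mean() for i in range(5)]   # means[0] is mean0, means[1] is mean1, ...
# or, since the index here is the default RangeIndex:
means_df = df.groupby(df.index % 5).mean()    # row i holds the mean over rows i, i+5, i+10, ...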
How to I reshape the 2D array like this? (By using tensor) I want to resize my image from 32 * 32 to 16 * 16. (By using torch.tensor)Like decreasing the resolution?Can anyone help me?
If you have an image (stored in a tensor) and you want to decrease it's resolution, then you are not reshaping it, but rather resizing it.To that end, you can use pytorch's interpolate:import torchfrom torch.nn import functional as nnfy = nnf.interpolate(x[None, None, ...], size=(16, 16), mode='bicubic', align_corners=False, antialias=True)Notes:nnf.interpolate operates on batches of multi-channel images, that is, it expects its input x to have 4 dimensions: batch-channels-height-width. So, if your x is a single image with a single channel (e.g., an MNIST digit) you'll have to create a singleton batch dimension and a singleton channel dimension.Pay close attention to align_corners and antialias -- make sure you are using the right configuration for your needs.For more information regarding aliasing and alignment when resizing images you can look at ResizeRight.
Creating another column in pandas based on a pre-existing column I have a third column in my data frame where I want to be able to create a fourth column that looks almost the same, except it has no double quotes and there is a 'user/' prefix before each ID in the list. Also, sometimes it is just a single ID vs. list of IDs (as shown in example DF).originalcol1 col2 col3 01 01 "ID278, ID289"02 02 "ID275"desiredcol1 col2 col3 col401 01 "ID278, ID289" user/ID278, user/ID28902 02 "ID275" user/ID275
Given: col1 col2 col30 1.0 1.0 "ID278, ID289"1 2.0 2.0 "ID275"2 2.0 1.0 NaNDoing:df['col4'] = (df.col3.str.strip('"') # Remove " from both ends. .str.split(', ') # Split into lists on ', '. .apply(lambda x: ['user/' + i for i in x if i] # Apply this list comprehension, if isinstance(x, list) # If it's a list. else x) .str.join(', ')) # Join them back together.print(df)Output: col1 col2 col3 col40 1.0 1.0 "ID278, ID289" user/ID278, user/ID2891 2.0 2.0 "ID275" user/ID2752 2.0 1.0 NaN NaN
Split column in several columns by delimiter '\' in pandas I have a txt file which I read into pandas dataframe. The problem is that inside this file my text data recorded with delimiter ''. I need to split information in 1 column into several columns but it does not work because of this delimiter.I found this post on stackoverflow just with one string, but I don't understand how to apply it once I have a whole dataframe: Split string at delimiter '\' in pythonAfter reading my txt file into df it looks something like thisdfcolumn1\tcolumn2\tcolumn30.1\t0.2\t0.30.4\t0.5\t0.60.7\t0.8\t0.9Basically what I am doing now is the following:df = pd.read_fwf('my_file.txt', skiprows = 8) #I use skip rows because there is irrelevant textdf['column1\tcolumn2\tcolumn3'] = "r'" + df['column1\tcolumn2\tcolumn3'] +"'" # i try to make it a row string as in the post suggested but it does not really workdf['column1\tcolumn2\tcolumn3'].str.split('\\',expand=True)and what I get is just the following (just displayed like text inside a data frame)r'0.1\t0.2\t0.3'r'0.4\t0.5\t0.6'r'0.7\t0.8\t0.9'I am not very good with regular expersions and it seems a bit hard, how can I target this problem?
It looks like your file is tab-delimited, because of the "\t". This may workpd.read_csv('file.txt', sep='\t', skiprows=8)
Hugging face: RuntimeError: model_init should have 0 or 1 argument I’m trying to tune hyper-params with the following code:def my_hp_space(trial): return { "learning_rate": trial.suggest_float("learning_rate", 5e-3, 5e-5), "arr_gradient_accumulation_steps": trial.suggest_int("num_train_epochs", 8, 16), "arr_per_device_train_batch_size": trial.suggest_int(2, 4), }def get_model(model_name, config): return AutoModelForSequenceClassification.from_pretrained(model_name, config=config)def compute_metric(eval_predictions): metric = load_metric('accuracy') logits, labels = eval_predictions predictions = np.argmax(logits, axis=-1) return metric.compute(predictions=predictions, references=labels)training_args = TrainingArguments(output_dir='test-trainer', evaluation_strategy="epoch", num_train_epochs= 10)data_collator = default_data_collatormodel_name = 'sentence-transformers/nli-roberta-base-v2'config = AutoConfig.from_pretrained(model_name,num_labels=3)trainer = Trainer( model_init = get_model(model_name, config), args = training_args, train_dataset = tokenized_datasets['TRAIN'], eval_dataset = tokenized_datasets['TEST'], compute_metrics = compute_metric, tokenizer = None, data_collator = data_collator,)best = trainer.hyperparameter_search(direction="maximize", hp_space=my_hp_space)And getting error: 1173 model = self.model_init(trial) 1174 else:-> 1175 raise RuntimeError("model_init should have 0 or 1 argument.") 1177 if model is None: 1178 raise RuntimeError("model_init should not return None.")RuntimeError: model_init should have 0 or 1 argument.What am I doing wrong ?How can I fix it and run hyper parameter method and get best model parameters ?
According to the documentation you have to pass the model_init as a callable.trainer = Trainer( model_init = get_model, args = training_args, train_dataset = tokenized_datasets['TRAIN'], eval_dataset = tokenized_datasets['TEST'], compute_metrics = compute_metric, tokenizer = None, data_collator = data_collator,)Additionally there seems to be an issue with the number of defined parameters in your passed model_init function. Your function get_model requires two parameters, while only 0 or 1 may be passed. The huggingface documentation states: The function may have zero argument, or a single one containing the optuna/Ray Tune/SigOpt trial object, to be able to choose different architectures according to hyper parameters (such as layer count, sizes of inner layers, dropout probabilities etc).You can define your parameters inside the get_model function and it works.def get_model(): model_name = 'sentence-transformers/nli-roberta-base-v2' config = AutoConfig.from_pretrained(model_name,num_labels=3) return AutoModelForSequenceClassification.from_pretrained(model_name, config=config)The official raytune example contains some code showing how to keep your parametrisation. They define an additional function tune_transformer and define get_model inside the function scope of tune_transformer. You can check their example, if you want to keep your parametrisation. Hope it helps.
Incorrect df. iloc[:, 0] My df has below columnsID Number Name11 ccc-456 dfg45 ggt-56 ggg33 67889 tttWhen I created a new dataframe (need it for merging with another dataframe)df2 = df[['ID', 'Number']]I got an error message stating ID is not in the index. But when I print(df), I do see the ID column.When I ran the indexdf3 = df.iloc[:, 0] I see first two columns ID and Number in the resultsID24 32666188 33432401 34341448 34490510 34713 ... 14062 10878914204 11071014651 11633214678 11673314726 117600Name: NUMBER, Length: 28149, dtype: int64Why can't I access the ID column?
Your issue:Based on the information provided, your "ID" column is set to your dataframe index.If you run this test code, you will get the same error that you described.test_dict = { 'ID': [11,45,33], 'Number': ['ccc-456','ggt-56','67889'], 'Name': ['dfg','ggg','ttt'] }df = pd.DataFrame(test_dict).set_index('ID')df2 = df[['ID', 'Number']]SolutionThe easiest solution would be to do df.reset_index(inplace = True) before you create df2. This will give you the "ID" as a column so you can reference it as desired with df2 = df[['ID', 'Number']]
Is there a way to remove header and split columns with pandas read_csv? [Edited: working code at the end]I have a CSV file with many rows, but only one column. I want to separate the rows' values into columns.I have triedimport pandas as pd df = pd.read_csv("TEST1.csv")final = [v.split(";") for v in df]print(final)However, it didn't work. My CSV file doesn't have a header, yet the code reads the first row as a header. I don't know why, but the code returned only the header with the splits, and ignored the remainder of the data.For this, I've also triedimport pandas as pddf = pd.read_csv("TEST1.csv").shift(periods=1)final = [v.split(";") for v in df]print(final)Which also returned the same error; andimport pandas as pddf = pd.read_csv("TEST1.csv",header=None) final = [v.split(";") for v in df]print(final)Which returnedAttributeError: 'int' object has no attribute 'split'I presume it did that because when header=None or header=0, it appears as 0; and for some reason, the final = [v.split(";") for v in df] is only reading the header.Also, I have tried inserting a new header:import pandas as pddf = pd.read_csv("TEST1.csv")final = [v.split(";") for v in df]headerList = ['Time','Type','Value','Size']pd.DataFrame(final).to_csv("TEST2.csv",header=headerList)And it did work, partly. There is a new header, but the only row in the csv file is the old header (which is part of the data); none of the other data has transferred to the TEST2.csv file.Is there any way you could shed a light upon this issue, so I can split all my data?Many thanks.EDIT: Thanks to @1extralime, here is the working code:import pandas as pddf = pd.read_csv("TEST1.csv",sep=';')df.columns = ['Time','Type','Value','Size']df.to_csv("TEST2.csv")
Try:
import pandas as pd
df = pd.read_csv('TEST1.csv', sep=';', header=None)
df.columns = ['Time', 'Type', 'Value', 'Size']
The header=None keeps the first data row from being consumed as a header, since your file has no header row.
Python + Pandas '>=' not supported between instances of 'str' and 'float' Issue 1 - solved by using pd.to_datetime(df.Date, format='%Y-%m-%d'). Thanks to MichaelI am trying to find the latest date of each user using their IDdf['Latest Date'] = df.groupby(['ID'])['Date'].transform.('max')df.drop_duplicates(subset='ID', keep='last',inplace=True)But I am getting '>=' not supported between instances of 'str' and 'float'I have used the same approach in the past and it worked fine.When I did dytypes, I see 'ID' column is int64 and Date column as object because I converted the date column to df['Date'] = pd.to_datetime(df['Date']).dt.strftime('%Y-%m-%d')Issue 2 solved - See Michael's comment 'For the edit'But the output does not look rightI am trying to find the latest date of each user using their ID and assign those dates to new columns using the categoryDataframe = dfData looks like below, ID CATEGORY NAME DATE 1 fruits 2017-08-07 00:00:00 2 veggies 2018-01-25 00:00:00 1 fruits 2015-08-07 00:00:00 2 veggies 2022-01-01 00:00:00My code is below//Converting the date format df['Date'] = pd.to_datetime(df.Date, format='%Y-%m-%d')//transforming to identify the latest date df['Latest Date'] = df.groupby(['ID'])['Date'].transform.('max')//keeping the last and dropping the duplicates df.drop_duplicates(subset='ID', keep='last',inplace=True)//inserting new columns df['Fruits'] = ' ' df['Veggies'] = ' '//applying the latest dates to the newly created columns df.loc[((df['CATEGORY NAME'] == 'fruits')), 'Fruits'] = df['Latest Date'] df.loc[((df['CATEGORY NAME'] == 'veggies')), 'Veggies'] = df['Latest Date']I want the output like belowID CATEGORY NAME DATE Latest Date Fruits Veggies 1 fruits 2017-08-07 2017-08-07 2017-08-07 2 veggies 2022-01-01 2022-01-01 2022-01-01But my output looks odd. I don't have an error message but the output is not rightID CATEGORY NAME DATE Latest Date Fruits Veggies 1 fruits 2017-08-07 2 veggies 2022-01-01 2021-01-01 2021-01-01 00:00:00If you notice aboveIt did not identify the latest correctlyWhen applying the date values to the new column, its 00:00:00 time format also shows upIt did not drop duplicatesNot sure what's wrong
strftime converts a date to string. Did you want to keep it as a datetime object but change the format? Try this instead:df.Date = pd.to_datetime(df.Date, format='%Y-%m-%d')For the EditI'm not sure why you want the "Date" and "Latest Date" columns to be the same, but here is the code that will give you your desired table output:# Recreate dataframeID = [1,2,1,2]CATEGORY_NAME = ["fruits", "veggies", "fruits", "veggies"] DATE = ["2017-08-07 00:00:00", "2018-01-25 00:00:00", "2015-08-07 00:00:00", "2022-01-01 00:00:00"]df = pd.DataFrame({"ID":ID,"CATEGORY NAME":CATEGORY_NAME, "Date":DATE})# Convert datetime formatdf['Date'] = pd.to_datetime(df.Date, format='%Y-%m-%d')# Get the max date value and assign the group to a new dataframedfNew = df.groupby(['ID'], as_index=False).max()# The new dataframes Date and Latest Date column are the samedfNew['Latest Date'] = dfNew['Date']# Fix latest Date formatting dfNew["Latest Date"] = dfNew["Latest Date"].dt.date# Add fruit and veggie columnsdfNew['Fruits'] = ' 'dfNew['Veggies'] = ' '# Place in the desired valuesdfNew.loc[((dfNew['CATEGORY NAME'] == 'fruits')), 'Fruits'] = dfNew['Latest Date']dfNew.loc[((dfNew['CATEGORY NAME'] == 'veggies')), 'Veggies'] = dfNew['Latest Date']dfNewOutput: ID CATEGORY NAME Date Latest Date Fruits Veggies0 1 fruits 2017-08-07 2017-08-07 2017-08-07 1 2 veggies 2022-01-01 2022-01-01 2022-01-01
Why do I get the 'loop of ufunc does not support argument 0 of type numpy.ndarray' error for log method? First, I used np.array to perform operations on multiple matrices, and it was successful.import numpy as npimport matplotlib.pyplot as pltf = np.array([[0.35, 0.65]])e = np.array([[0.92, 0.08], [0.03, 0.97]])r = np.array([[0.95, 0.05], [0.06, 0.94]])d = np.array([[0.99, 0.01], [0.08, 0.92]])c = np.array([[0, 1], [1, 0]])D = np.sum(f@(e@r@d*c))u = f@eI = np.sum(f@(e*np.log(e/u)))print(D)print(I)Outcome:0.145385250.45687371996485304Next, I tried to plot the result using one of the elements in the matrix as a variable, but an error occurred.import numpy as npimport matplotlib.pyplot as pltt = np.arange(0.01, 0.99, 0.01)f = np.array([[0.35, 0.65]])e = np.array([[1-t, t], [0.03, 0.97]])r = np.array([[0.95, 0.05], [0.06, 0.94]])d = np.array([[0.99, 0.01], [0.08, 0.92]])c = np.array([[0, 1], [1, 0]])D = np.sum(f@(e@r@d*c))u = f@eI = np.sum(f@(e*np.log(e/u)))plt.plot(t, D)plt.plot(t, I)plt.show()It shows the error below:AttributeError Traceback (most recent call last)AttributeError: 'numpy.ndarray' object has no attribute 'log'The above exception was the direct cause of the following exception:TypeError Traceback (most recent call last)<ipython-input-14-0856df964382> in <module>() 10 11 u = f@e---> 12 I = np.sum(f@(e*np.log(e/u))) 13 14 plt.plot(t, D)TypeError: loop of ufunc does not support argument 0 of type numpy.ndarray which has no callable log methodThere was no problem with the following code, so I think there was something wrong with using np.array.import numpy as npimport matplotlib.pyplot as pltt = np.arange(0.01, 0.99, 0.01)y = np.log(t)plt.plot(t, y)plt.show()Any idea for this problem? Thank you very much.
You can't create a batch of matrices e from the variable t using the constructe = np.array([[1-t, t], [0.03, 0.97]])as this would create a ragged array due to [1-t, t] and [0.03, 0.97] having different shapes. Instead, you can create e by repeating [0.03, 0.97] to match the shape of [1-t, t], then stack them together as follows.t = np.arange(.01, .99, .01) # shape (98,)_t = np.stack([t, 1-t], axis=1) # shape (98, 2)e = np.array([[.03, .97]]) # shape (1, 2)e = np.repeat(e, len(t), axis=0) # shape (98, 2)e = np.stack([_t, e], axis=1) # shape (98, 2, 2)After this, e will be a batch of 2x2 matricesarray([[[0.01, 0.99], [0.03, 0.97]], [[0.02, 0.98], [0.03, 0.97]], [[0.03, 0.97], [0.03, 0.97]], [[0.04, 0.96], [0.03, 0.97]], ...Finally, expand other variables in the batch dimension to take advantage of numpy broadcast to batch the calculationf = np.array([[0.35, 0.65]])[None,:] # shape (1,1,2)r = np.array([[0.95, 0.05], [0.06, 0.94]])[None,:] # shape (1,2,2)d = np.array([[0.99, 0.01], [0.08, 0.92]])[None,:] # shape (1,2,2)c = np.array([[0, 1], [1, 0]])[None,:] # shape (1,2,2)and only sum across the last axis to get per-matrix result.D = np.sum(f@(e@r@d*c), axis=-1) # shape (98, 1)u = f@eI = np.sum(f@(e*np.log(e/u)), axis=-1) # shape (98, 1)
Pandas: str.extract() giving unexpected NaN I have a data set which has a column that looks like thisBadge Number1323 / gold22 / silver483I need only the numbers. Here's my code:df = pd.read_excel('badges.xlsx')df['Badge Number'] = df['Badge Number'].str.extract('(\d+)')print(df)I was expecting an output like:Badge Number132322483but I gotBadge NumberNanNan2322NanJust to test, I dumped the dataframe to a .csv and read it back with pd.read_csv(). That gave me just the numbers, as I need (though of course that's not a solution)I also trieddf['Badge Number'] = np.where(df['Badge Number'].str.isnumeric(), df['Badge Number'], df['Badge Number'].str.extract('(\d+)'))but that just gave me all 1s. I know I am trying things I don't even remotely understand, but am hoping there's a straightforward solution.
That's almost certainly because the numbers are actually integers, not strings. Try filling the missing values by the original numbers.df['Badge Number'] = df['Badge Number'].str.extract('(\d+)')[0].fillna(df['Badge Number'])#.astype(int)
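An alternative (only a sketch, assuming every row contains at least one digit) is to cast the whole column to string first, so .str.extract works on every row, and then convert the result back to integers:
df['Badge Number'] = df['Badge Number'].astype(str).str.extract(r'(\d+)')[0].astype(int)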
how to install pandas-profiling with markupsafe error I am trying to install pandas-profiling but I keep getting the error that markupsafe cannot find 2.1.1. version.!pip3 install pandas-profiling >> ERROR: Could not find a version that satisfies the requirement markupsafe~=2.1.1 (from pandas-profiling) (from versions: 0.9, 0.9.1, 0.9.2, 0.9.3, 0.11, 0.12, 0.13, 0.14, 0.15, 0.16, 0.17, 0.18, 0.19, 0.20, 0.21, 0.22, 0.23, 1.0, 1.1.0, 1.1.1, 2.0.0a1, 2.0.0rc1, 2.0.0rc2, 2.0.0, 2.0.1)ERROR: No matching distribution found for markupsafe~=2.1.1 (from pandas-profiling)WARNING: You are using pip version 19.1.1, however version 21.3.1 is available.You should consider upgrading via the 'pip install --upgrade pip' command.I already tried to run this code!pip3 install MarkupSafe==2.1.1>>ERROR: Could not find a version that satisfies the requirement MarkupSafe==2.1.1 (from versions: 0.9, 0.9.1, 0.9.2, 0.9.3, 0.11, 0.12, 0.13, 0.14, 0.15, 0.16, 0.17, 0.18, 0.19, 0.20, 0.21, 0.22, 0.23, 1.0, 1.1.0, 1.1.1, 2.0.0a1, 2.0.0rc1, 2.0.0rc2, 2.0.0, 2.0.1)ERROR: No matching distribution found for MarkupSafe==2.1.1WARNING: You are using pip version 19.1.1, however version 21.3.1 is available.You should consider upgrading via the 'pip install --upgrade pip' command.
MarkupSafe 2.0.1 requires Python >= 3.6. MarkupSafe 2.1.1 requires Python >= 3.7. From this I can deduce you're using Python 3.6. Either use MarkupSafe 2.0.1 or upgrade Python to 3.7+.The bug is reported: https://github.com/ydataai/pandas-profiling/issues/1004
How to combine values in a dataframe pandas? I have the below dataframe. Is there any way we can combine values in the column (Fruit) with respect to values in the other two columns and get the below result using pandas?
Use groupby_agg. If you have other columns, expand the dict with another functions if needed (max, min, first, last, ... or lambda)out = df.groupby(['SellerName', 'SellerID'], as_index=False).agg({'Fruit': ', '.join})print(out)# Output SellerName SellerID Fruit0 Rob 200 Apple, Bannana1 Scott 201 Apple, Kiwi, PineappleInput dataframe:>>> df SellerName SellerID Fruit0 Rob 200 Apple1 Scott 201 Apple2 Rob 200 Bannana3 Scott 201 Kiwi4 Scott 201 Pineapple
using python want to calculate last 6 months average for each month I have a dataframe which has 3 columns [user_id ,year_month & value] , i want to calculate last 6months average for the year automatically for each individual unique user_id and assign it to new column user_id value year_month 1 50 2021-01 1 54 2021-02 .. .. .. 1 50 2021-11 1 47 2021-12 2 36 2021-01 2 48.5 2021-05 .. .. .. 2 54 2021-11 2 30.2 2021-12 3 41.4 2021-01 3 48.5 2021-02 3 41.4 2021-05 .. .. .. 3 30.2 2021-12 Total year has 12-24 months to get jan 2022 value[dec 2021 to july 2021]=[55+32+33+63+54+51]/6 to get feb 2022 value[jan 2022 to aug 2021] =[32+33+37+53+54+51]/6 to get mar 2022 value[feb 2022 to sep 2021] =[45+32+33+63+54+51]/6 to get apr 2022 value[mar 2022 to oct 2021] =[63+54+51+45+32+33]/6
First index your datetime column:
df = df.set_index('year_month')
Then do the following (a rolling window over the last 6 monthly rows per user, averaged):
df.groupby('user_id')['value'].rolling(6, min_periods=1).mean()
This is the most correct way, but here is one more intuitive variant:
df.sort_values('year_month').groupby('user_id')['value'].rolling(6, min_periods=1).mean()  # Returns the wanted series
As paul h said.
remove both duplicate rows from DataFrame with negative and positive values pandas DF csv
This is the CSV and I am using it as a Dataframe:
colA,colB,colC
ABC,3,token
ABC,50,added
ABC,-50,deleted
xyz,20,token
pqr,50,added
pqr,-50,deleted
lmn,50,added
output
colA,colB,colC
ABC,3,token
xyz,20,token
lmn,50,added
Methods based on abs would incorrectly remove two positive or two negative values.I suggest to perform a self-merge using the opposite of colB:# get indices that have a matching positive/negativeidx = (df.reset_index() .merge(df, left_on=['colA', 'colB'], right_on=['colA', -df['colB']], how='inner')['index'] )# [1, 2, 4, 5] (as list)# drop themout = df.drop(idx)output: colA colB colC0 ABC 3 token3 xyz 20 token6 lmn 50 added
Subdivide values in a tensor I have a PyTorch tensor that contains the labels of some samples.I want to split each label into n_groups groups, introducing new virtual labels.For example, for the labels:labels = torch.as_tensor([0, 0, 0, 1, 1, 1, 2, 2, 2], dtype=torch.long)One possible solution to subdivide each label into n_groups=2 is the following:subdivided_labels = [0, 3, 0, 1, 4, 1, 2, 5, 2]The constraints are the following:There is no assumption about the order of the initial labelsThe labels should be well distributed among the groupsThe label ranking should be consistent among different groups, i.e., the first label 0 in the first group should be the first label also in any other group.The following should always be true torch.equal(labels, subdivided_labels % num_classes)It is possible that the number of groups is greater than the number of samples for a given labelThe following tests should pass for the desired algorithm:@pytest.mark.parametrize( "labels", ( torch.randint(100, size=(50,)), torch.arange(100), torch.ones(100), torch.randint(100, size=(50,)).repeat(4), torch.arange(100).repeat(4), torch.ones(100).repeat(4), torch.randint(100, size=(50,)).repeat_interleave(4), torch.arange(100).repeat_interleave(4), torch.ones(100).repeat_interleave(4), ),)@pytest.mark.parametrize("n_groups", (1, 2, 3, 4, 5, 50, 150))def test_subdivide_labels(labels, n_groups): subdivided_labels = subdivide_labels(labels, n_groups=n_groups, num_classes=100) assert torch.equal(labels, subdivided_labels % 100)@pytest.mark.parametrize( "labels, n_groups, n_classes, expected_result", ( ( torch.tensor([0, 0, 1, 1, 2, 2]), 2, 3, torch.tensor([0, 3, 1, 4, 2, 5]), ), ( torch.tensor([0, 0, 1, 1, 2, 2]), 2, 10, torch.tensor([0, 10, 1, 11, 2, 12]), ), ( torch.tensor([0, 0, 1, 1, 2, 2]), 1, 10, torch.tensor([0, 0, 1, 1, 2, 2]), ), ( torch.tensor([0, 0, 2, 2, 1, 1]), 2, 3, torch.tensor([0, 3, 2, 5, 1, 4]), ), ( torch.tensor([0, 0, 2, 2, 1, 1]), 30, 3, torch.tensor([0, 3, 2, 5, 1, 4]), ), ),)def test_subdivide_labels_with_gt(labels, n_groups, n_classes, expected_result): subdivided_labels = subdivide_labels(labels, n_groups=n_groups, num_classes=n_classes) assert torch.equal(expected_result, subdivided_labels) assert torch.equal(labels, subdivided_labels % n_classes)I have a non-vectorized solution:import torchdef subdivide_labels(labels: torch.Tensor, n_groups: int, num_classes: int) -> torch.Tensor: """Divide each label in groups introducing virtual labels. Args: labels: the tensor containing the labels, each label should be in [0, num_classes) n_groups: the number of groups to create for each label num_classes: the number of classes Returns: a tensor with the same shape of labels, but with each label partitioned in n_groups virtual labels """ unique, counts = labels.unique( sorted=True, return_counts=True, return_inverse=False, ) virtual_labels = labels.clone().detach() max_range = num_classes * (torch.arange(counts.max()) % n_groups) for value, count in zip(unique, counts): virtual_labels[labels == value] = max_range[:count] + value return virtual_labelslabels = torch.as_tensor([0, 0, 0, 1, 1, 1, 2, 2, 2], dtype=torch.long)subdivide_labels(labels, n_groups=2, num_classes=3)tensor([0, 3, 0, 1, 4, 1, 2, 5, 2])Is it possible to vectorize this algorithm?Alternatively, are there any faster algorithms to perform the same operation?
A variation of OP's approach can be vectorized with a grouped cumcount (numpy implementation by @divakar). All tests pass, but the output is slightly different since argsort has no 'stable' option in pytorch, AFAIK.def vector_labels(labels, n_groups, num_classes): counts = torch.unique(labels, return_counts=True)[1] idx = counts.cumsum(0) id_arr = torch.ones(idx[-1], dtype=torch.long) id_arr[0] = 0 id_arr[idx[:-1]] = -counts[:-1] + 1 rng = id_arr.cumsum(0)[labels.argsort().argsort()] % n_groups maxr = torch.arange(n_groups) * num_classes return maxr[rng] + labelslabels = torch.arange(100).repeat_interleave(4)%timeit vector_labels(labels, 2, 100)%timeit subdivide_labels(labels, 2, 100)Output10000 loops, best of 5: 117 µs per loop1000 loops, best of 5: 1.6 ms per loopThis is far from the fastest algorithm. For example a trivial O(n) approach, but only CPU and needs numba to be fast with Python.import numpy as np import numba as nb @nb.njit def numpy_labels(labels, n_groups, num_classes): lookup = np.zeros(labels.max() + 1, np.intp) res = np.empty_like(labels) for i in range(len(labels)): res[i] = num_classes * lookup[labels[i]] + labels[i] lookup[labels[i]] = lookup[labels[i]] + 1 if lookup[labels[i]] < n_groups-1 else 0 return resnumpy_labels(labels.numpy(), 20, 100) # compile run%timeit torch.from_numpy(numpy_labels(labels.numpy(), 20, 100))Output100000 loops, best of 5: 3.63 µs per loop
Pandas Method Chaining: getting KeyError on calculated column I’m scraping web data to get US college football poll top 25 information that I store in a Pandas dataframe. The data has multiple years of poll information, with preseason and final polls for each year. Each poll ranks teams from 1 to 25. Team ranks are determined by the voting points each team received; the team with most points is ranked 1, etc. Both rank and points are included in the dataset. Here's the head of the raw data df:cols = ['Year','Type', 'Team (FPV)', 'Rank', 'Pts']all_wks_raw[cols].head()The dataframe has columns for Rank and Pts (Points). The Rank column (dytpe object) contains numeric ranks of 1-25 plus “RV” for teams that received points but did not rank in the top 25. The Pts column is dtype int64. Since Pts for teams that did not make the top 25 are included in the data, I’m able to re-rank the teams based on Pts and thus extend rankings beyond the top 25. The resulting revrank column ranks teams from 1 to between 37 and 61, depending how many teams received points in that poll. Revrank is the first new column I create.The revrank column should equal the Rank column for the first 25 teams, but before I can test it I need to create a new column that converts Rank to numeric. The result is rank_int, which is my second created column. Then I try to create a third column that calculates the difference between the two created columns, and this is where I get the KeyError. Here's the chain:all_wks_clean = (all_wks_raw #create new column that converts Rank to numeric-this works .assign(rank_int = pd.to_numeric(all_wks_raw['Rank'], errors='coerce').fillna(0)) #create new column that re-ranks teams based on Points: extends rankings beyond original 25-this works .assign(gprank = all_wks_raw.reset_index(drop=True).groupby(['Year','Type'])['Pts'].rank(ascending=0,method='min')) #create new column that takes the difference between gprank and rank_int columns created above-this fails with KeyError: 'gprank' .assign(ck_rank = all_wks_raw['gprank'] - all_wks_raw['rank_int']))Are the results of the first two assignments not being passed to the third? Am I missing something in the syntax? 
Thanks for the help.Edited 7/20/2022 to add complete code; note that this code scrapes data from the College Poll Archive web site:dict = {1119: [2016, '2016 Final AP Football Poll', 'Final'], 1120: [2017, '2017 Preseason AP Football Poll', 'Preseason'], 1135: [2017, '2017 Final AP Football Poll', 'Final'], 1136: [2018, '2018 Preseason AP Football Poll', 'Preseason'], 1151: [2018, '2018 Final AP Football Poll', 'Final'], 1152: [2019, '2019 Preseason AP Football Poll', 'Preseason']}#get one week of poll data from College Poll Archive ID parameterdef getdata(id): coldefs = {'ID':key, 'Year': value[0], 'Title': value[1], 'Type':value[2]} #define dictionary of scalar columns to add to dataframe urlseg = 'https://www.collegepollarchive.com/football/ap/seasons.cfm?appollid=' url = urlseg + str(id) dfs = pd.read_html(url) df = dfs[0].assign(**coldefs) return dfall_wks_raw = pd.DataFrame()for key, value in dict.items(): print(key, value[0], value[2]) onewk = getdata(key) all_wks_raw = all_wks_raw.append(onewk) all_wks_clean = (all_wks_raw #create new column that converts Rank to numeric-this works .assign(rank_int = pd.to_numeric(all_wks_raw['Rank'], errors='coerce').fillna(0)) #create new column that re-ranks teams based on Points: extends rankings beyond original 25-this works .assign(gprank = all_wks_raw.reset_index(drop=True).groupby(['Year','Type'])['Pts'].rank(ascending=0,method='min')) #create new column that takes the difference between gprank and rank_int columns created above-this fails with KeyError: 'gprank' .assign(ck_rank = all_wks_raw['gprank'] - all_wks_raw['rank_int']))
Adding to BeRT2me's answer, when chaining, lambda's are pretty much always the way to go. When you use the original dataframe name, pandas looks at the dataframe as it was before the statement was executed. To avoid confusion, go with:df = df.assign(rank_int = lambda x: pd.to_numeric(x['Rank'], errors='coerce').fillna(0).astype(int), gprank = lambda x: x.groupby(['Year','Type'])['Pts'].rank(ascending=0,method='min'), ck_rank = lambda x: x['gprank'].sub(x['rank_int']))The x you define is the dataframe at that state in the chain.This helps especially when your chains get longer. E.g, if you filter out some rows or aggregate you get different results (or maybe error) depending what you're trying to do.For example, if you were just looking at the relative rank of 3 teams:df = pd.DataFrame({ 'Team (FPV)': list('abcde'), 'Rank': list(range(5)), 'Pts': list(range(5)),})df['Year'] = 2016df['Type'] = 'final'df = (df .loc[lambda x: x['Team (FPV)'].isin(["b", "c", "d"])] .assign(bcd_rank = lambda x: x.groupby(['Year','Type'])['Pts'].rank(ascending=0,method='min')) )print(df)gives: Team (FPV) Rank Pts Year Type bcd_rank1 b 1 1 2016 final 3.02 c 2 2 2016 final 2.03 d 3 3 2016 final 1.0Whereas:df = pd.DataFrame({ 'Team (FPV)': list('abcde'), 'Rank': list(range(5)), 'Pts': list(range(5)),})df['Year'] = 2016df['Type'] = 'final'df = (df .loc[lambda x: x['Team (FPV)'].isin(["b", "c", "d"])] .assign(bcd_rank = df.groupby(['Year','Type'])['Pts'].rank(ascending=0,method='min')) )print(df)gives a different ranking: Team (FPV) Rank Pts Year Type bcd_rank1 b 1 1 2016 final 4.02 c 2 2 2016 final 3.03 d 3 3 2016 final 2.0If you want to go deeper, I'd recommend https://tomaugspurger.github.io/method-chaining.html to go on your reading list.
Reduce to only row totalRevenue and rename the colunmn names in years using yahoo finance and pandas I try to scrape the yearly total revenues from yahoo finance using pandas and yahoo_fin by using the following code:from yahoo_fin import stock_info as siimport yfinance as yfimport pandas as pdtickers = ('AAPL', 'MSFT', 'IBM')income_statements_yearly= [] #All numbers in thousandsfor ticker in tickers: income_statement = si.get_income_statement(ticker, yearly=True) years = income_statement.columns income_statement.insert(loc=0, column='Ticker', value=ticker) for i in range(4): #print(years[i].year) income_statement.rename(columns = {years[i]:years[i].year}, inplace = True) income_statements_yearly.append(income_statement)income_statements_yearly = pd.concat(income_statements_yearly)income_statements_yearlyThe result I get looks like:I would like to create on that basis another dataframe revenues and reduce the dataframe to only the row totalRevenue instead of getting all rows and at the same time I would love to rename the columns 2021, 2020, 2019, 2018 to revenues_2021, revenues_2020, revenues_2019, revenues_2018.The result shall look like:df = pd.DataFrame({'Ticker': ['AAPL', 'MSFT', 'IBM'], 'revenues_2021': [365817000000, 168088000000, 57351000000], 'revenues_2020': [274515000000, 143015000000, 55179000000], 'revenues_2019': [260174000000, 125843000000, 57714000000], 'revenues_2018': [265595000000, 110360000000, 79591000000]})How can I solve this in an easy and fast way?Ty for your help in advance.
Code:
revenues = income_statements_yearly.loc["totalRevenue"].reset_index(drop=True)
revenues.columns = ["Ticker"] + ["revenues_" + str(col) for col in revenues.columns if col != "Ticker"]

Output:
  Ticker  revenues_2021  revenues_2020  revenues_2019  revenues_2018
0   AAPL   365817000000   274515000000   260174000000   265595000000
1   MSFT   168088000000   143015000000   125843000000   110360000000
2    IBM    57351000000    55179000000    57714000000    79591000000
How to remove Index? Remove index number in csv, python. How do I remove the index number and replace it with the eTime value?import numpy as npimport pandas as pd# load data from csvdata = pd.read_csv('log_level_out_mini.csv', delimiter=';')time = data['eTime']angka = data['eValue']# Calculating cross-sectional areadiameter = 0.1016 # 4 InchA = 1/4 * np.pi * diameter ** 2print('Luas penampang:', A)# Calculate Flow RateQ = angka * Aprint('Flow Rate:', Q)# Calculate Volume

This is the result:

                          Time  eValue
0      2017-07-16 00:00:50.017  -0.272
1      2017-07-16 00:01:50.017  -0.272
2      2017-07-16 00:02:50.020  -0.272
3      2017-07-16 00:03:50.003  -0.272
4      2017-07-16 00:04:50.020  -0.272
...                        ...     ...
10032  2017-07-22 23:55:23.803   0.588
10033  2017-07-22 23:56:23.793   0.580
10034  2017-07-22 23:57:23.787   0.583
10035  2017-07-22 23:58:23.797   0.569
10036  2017-07-22 23:59:23.800   0.549

How do I remove the index and change it to the eTime value?
Set index_col equal to False, as per the documentation listed here:Note: index_col=False can be used to force pandas to not use the first column as the index, e.g. when you have a malformed file with delimiters at the end of each line.So change:data = pd.read_csv('log_level_out_mini.csv', delimiter=';')to:data = pd.read_csv('log_level_out_mini.csv', delimiter=';', index_col=False)
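If the goal is actually to have eTime serve as the index (rather than just dropping the default integer index), a minimal sketch (assuming the column is named eTime, as in the code above) would be:

import pandas as pd

# promote eTime to the index while reading
data = pd.read_csv('log_level_out_mini.csv', delimiter=';', index_col='eTime')

# or set it after reading
data = pd.read_csv('log_level_out_mini.csv', delimiter=';')
data = data.set_index('eTime')

Either way, the printed output is then labelled by eTime instead of 0, 1, 2, ...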
ufunc 'boxcox1p' not supported for the input types. the inputs could not be safely coerced to any supported types according to the casting rule 'safe' I have this code (for machine learning) below:from scipy.special import boxcox1pfrom scipy.special import boxcoxfrom scipy.special import inv_boxcoxdf_trans=df1.apply(lambda x: boxcox1p(x,0.0))With df1 being a dataframe containing a date and some other values. However, after running the above code, I got this error:TypeError Traceback (most recent call last)Input In [585], in <cell line: 4>() 2 from scipy.special import boxcox 3 from scipy.special import inv_boxcox----> 4 df_trans=df1.apply(lambda x: boxcox1p(x,0.0))TypeError: ufunc 'boxcox1p' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''How do I fix this?Edited: This is part of the data sample:

   Quantity      Price  Difference  Money Received
0     55419  12.908304    8.518790    69665.133754
1     45179  28.492719    8.518790   125359.752289
2     11985  17.040535   18.776097    19888.813469
The answer for this is to exclude date column. Special thanks to @AlexK for helping!
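To make that concrete, here is a minimal sketch (my addition, assuming the date lives in its own column) that applies boxcox1p only to the numeric columns:

from scipy.special import boxcox1p

# boxcox1p is a numeric ufunc, so drop datetime/string columns first
numeric_part = df1.select_dtypes(include='number')
df_trans = numeric_part.apply(lambda x: boxcox1p(x, 0.0))

Note that boxcox1p(x, 0.0) is log1p(x), so the values still need to be greater than -1.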
How to group similar numbers with ranges/conditions and merge IDs using dataframes? Please, I have a dataframe that is listed in ascending order. My goal is to average similar numbers (numbers that are within 10% of each other in ‘both directions’) and concatenate their ‘Bell’ name together. For example, the image shows the input and output dataframe. I tried coding it but I'm stuck on how to progress. def full_data_compare(self, df_full = pd.DataFrame()): for k in range(df_full): #current rows for j in range(df_full): #future rows if int(df_full['Size'][k]) - int(df_full['Size'][k])*(1/10) <= int(df_full['Size'][j]) <= int(df_full['Size'][k]) + int(df_full['Size'][k])*(1/10) & int(df_full['Size'][k]) - int(df_full['Size'][k])*(1/10) <= int(df_full['Size'][j]) <= int(df_full['Size'][k]) + int(df_full['Size'][k])*(1/10):
Assuming you really want to check in both directions that the consecutive values are within 10%, you need to compute two Series with pct_change. Then use it to groupby.agg:#df = df.sort_values(by='Size') for non-consecutive groupingm1 = df['Size'].pct_change().abs().gt(0.1)m2 = df['Size'].pct_change(-1).abs().shift().gt(0.1)out = (df .groupby((m1|m2).cumsum()) .agg({'Bell': ' '.join, 'Size': 'mean'}))NB. If you want to group non-consecutive values, you first need to sort them: sort_values(by='Size')Output: Bell SizeSize 0 A1 A2 1493.5000001 A1 A2 A3 5191.3333332 A1 A3 A2 35785.3333333 A2 45968.0000004 A1 78486.0000005 A3 41205.000000
TypeError: 'int' object is not subscriptable (ImageOps.fit an image that calls cv2.cvtColor) I'm working with Keras, PIL.ImageGrab, cv2 and tensorflow and there is an error that rises when I run my code(which is edited code from Teachable Machines stock code)The error I get:Traceback (most recent call last): File "C:\Users\Captey\Downloads\Stuff\Self_driving_cv2\Keras-Neural_net.py", line 23, in <module> image = ImageOps.fit(image, size, Image.ANTIALIAS) File "Z:\Users\Captey\anaconda3\lib\site-packages\PIL\ImageOps.py", line 459, in fit bleed_pixels = (bleed * image.size[0], bleed * image.size[1])TypeError: 'int' object is not subscriptableMy edited code:# -*- coding: utf-8 -*-"""Created on Sun Jul 10 23:31:44 2022@author: Fahim FerdousGithub: FahimFerdou1Youtube: The_official_pyrite """from keras.models import load_modelfrom PIL import ImageGrab, ImageOps,Imageimport numpy as npimport cv2import tensorflow as tfmodel = load_model('self_driving_model.h5')data = np.ndarray(shape=(1, 224, 224, 3), dtype=np.float32)while True: img = ImageGrab.grab(bbox=(285, 397,689, 812)) img = np.array(img) image = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) size = (224, 224) image = ImageOps.fit(image, size, Image.ANTIALIAS) image_array = np.asarray(image) normalized_image_array = (image_array.astype(np.float32) / 127.0) - 1 data[0] = normalized_image_array prediction = model.predict(data) print(prediction) if cv2.waitKey(1) == 27: break cv2.imshow('data',image)
Prior to ImageOps.fit convert your numpy array to PIL object:image_PIL = Image.fromarray(image)Now performsize = (224, 224)image = ImageOps.fit(image_PIL, size, Image.ANTIALIAS)Proceed as usual.
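As a side note (not part of the original fix), since the frame is already a NumPy array, one alternative sketch is to resize with OpenCV directly and skip the PIL round-trip. Keep in mind that ImageOps.fit crops to preserve the target aspect ratio, while cv2.resize simply rescales, so the results can differ slightly:

import cv2
import numpy as np

image = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
# cv2.resize works on NumPy arrays, so no PIL object is needed here
image = cv2.resize(image, (224, 224), interpolation=cv2.INTER_AREA)
image_array = np.asarray(image, dtype=np.float32)
normalized_image_array = (image_array / 127.0) - 1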
How to add a value to dataframe columns across multiple Excel sheets? This should be an easy question!! But I'm stuck on it. Hope someone can help me, thanks! So I have 3 columns in 2 sheets (yes, I just simplified to 2 sheets here). The dataset is in https://docs.google.com/spreadsheets/d/1qxGNShfrOgGXUfJd5t8qg2RoYDIcgNM9/edit?usp=sharing&ouid=103815541757228048284&rtpof=true&sd=true How do I add a value to the columns in multiple sheets, then collect the result in a new dataframe that still has 2 sheets?ddf = pd.DataFrame()for i in range(40): df = pd.read_excel(xls, i)For example,for i in range(len(df["first"])): df["first"].iloc[i] + 4 df["second"].iloc[i] + 8But this is just for one sheet; I need to do the same thing in 40 sheets.P.S. Each sheet has the same columns and the same index length.
Trying an answer because I think I understand. Hopefully this helps.writer = pd.ExcelWriter('pandas_multiple.xlsx', engine='xlsxwriter')for i in range(40): df = pd.read_excel(xls, i) df['first'] += 2 # constant of your choice df.to_excel(writer, sheet_name=i)writer.save()If you want to loop through multiple columns and constants:writer = pd.ExcelWriter('pandas_multiple.xlsx', engine='xlsxwriter')for i in range(40): df = pd.read_excel(xls, i) for col, val in zip(['first', 'second'], [10, 5]): df[col] += val df.to_excel(writer, sheet_name=i)writer.save()
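One small caveat, which is an assumption on my part rather than something tested against every pandas version: Excel worksheet names are text, so passing the bare loop integer i as sheet_name may be rejected by the writer engine. Converting it explicitly is the safer sketch:

df.to_excel(writer, sheet_name=str(i))  # sheet names "0", "1", ... instead of the raw integer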
faster way to append value Suppose I have a big list of float values and I want to select only some of them, looking at another array:result = []for x,s in zip(xlist, slist): if f(s): result.append(x)At the beginning of the loop I can have a rough estimate of how many entries will pass the f selection. Now, this is very slow. I've tried to change the list to an array, but looking only at appending it gets slower:def f(v): for ii in a: v.append(ii)a = range(int(1E7))v = []t = time(); f(v); print time()-t # -> 1.3v = array.array('i')t = time(); f(v); print time()-t # -> 3.4I need to be faster because this loop is really slow in my program. Can numpy.array help me? There is no append method.
There may be a better numpy solution to this, but in pure-python you can try iterators:from itertools import izipxlist = [1,2,3,4,5,6,7,8]slist = [0,1,0,1,0,0,0,1]def f(n): return nresults = (x for x,s in izip(xlist, slist) if f(s))# results is an iterator--you don't have values yet# and no extra memory is consumed# you can retrieve results one by one with iteration# or you can exhaust all values and store in a listassert list(results)==[2,4,8]# you can use an array too# import array# a = array.array('i', results)You can also combine this approach with numpy arrays to see if it is faster. See the fromiter constructor.However if you can restructure your code to use iterators, you can avoid ever having to generate a full list and thus avoid using append at all.It goes without saying, too, that you should see if you can speed up your f() filtering function because it's called once for every element.
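A note for Python 3 readers: izip no longer exists because the built-in zip is already lazy, so the same pattern works without the import. And if the data is (or can become) a NumPy array, a boolean mask avoids append entirely; a sketch, assuming xlist and slist have equal length and f returns something truthy or falsy:

import numpy as np

xarr = np.asarray(xlist)
# build a boolean mask from the selection function without growing a Python list
mask = np.fromiter((f(s) for s in slist), dtype=bool, count=len(slist))
result = xarr[mask]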
Basic NumPy data comparison I have an array of N-dimensional values arranged in a 2D array. Something like:import numpy as npdata = np.array([[[1,2],[3,4]],[[5,6],[1,2]]])I also have a single value x that I want to compare against each data point, and I want to get a 2D array of boolean values showing whether my data is equal to x.x = np.array([1,2])If I do:data == xI get# array([[[ True, True],# [False, False]],## [[False, False],# [ True, True]]], dtype=bool)I could easily combine these to get the result I want. However, I don't want to iterate over each of these slices, especially when data.shape[2] is larger. What I am looking for is a direct way of getting:array([[ True, False], [False, True]])Any ideas for this seemingly easy task?
Well, (data == x).all(axis=-1) gives you what you want. It's still constructing a 3-d array of results and iterating over it, but at least that iteration isn't at Python-level, so it should be reasonably fast.
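A quick check with the arrays from the question (just illustrating the expression above):

import numpy as np

data = np.array([[[1, 2], [3, 4]], [[5, 6], [1, 2]]])
x = np.array([1, 2])

# compare element-wise, then require every component along the last axis to match
matches = (data == x).all(axis=-1)
print(matches)
# [[ True False]
#  [False  True]]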
How to split/partition a dataset into training and test datasets for, e.g., cross validation? What is a good way to split a NumPy array randomly into training and testing/validation dataset? Something similar to the cvpartition or crossvalind functions in Matlab.
If you want to split the data set once in two parts, you can use numpy.random.shuffle, or numpy.random.permutation if you need to keep track of the indices (remember to fix the random seed to make everything reproducible):import numpy# x is your datasetx = numpy.random.rand(100, 5)numpy.random.shuffle(x)training, test = x[:80,:], x[80:,:]orimport numpy# x is your datasetx = numpy.random.rand(100, 5)indices = numpy.random.permutation(x.shape[0])training_idx, test_idx = indices[:80], indices[80:]training, test = x[training_idx,:], x[test_idx,:]There are many ways other ways to repeatedly partition the same data set for cross validation. Many of those are available in the sklearn library (k-fold, leave-n-out, ...). sklearn also includes more advanced "stratified sampling" methods that create a partition of the data that is balanced with respect to some features, for example to make sure that there is the same proportion of positive and negative examples in the training and test set.
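For completeness, a short sketch of the scikit-learn route mentioned above (assuming a modern sklearn with the model_selection module, and labels y for the stratified case):

from sklearn.model_selection import train_test_split

# plain 80/20 split of the dataset x
x_train, x_test = train_test_split(x, test_size=0.2, random_state=42)

# stratified split: keeps the class proportions of y in both parts
x_train, x_test, y_train, y_test = train_test_split(
    x, y, test_size=0.2, random_state=42, stratify=y)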
Different function outputs for integers and numpy arrays I am having trouble understanding why I get different values in the following two cases:-Case 1:def myfunc(a,b,c): xx = a+b yy = b+c return xx, yyq,w = myfunc(1,2,3)print(q,w)Output 1: 3 5-Case 2:import numpy as npq=w=np.zeros(3)def myfunc(a,b,c): xx = a+b yy = b+c return xx, yyfor i in range(3): q[i],w[i] = myfunc(1,2,3)print(q,w)Output 2: [5. 5. 5.] [5. 5. 5.]In the second case, both arrays have their entries equal to 5. Could someone explain why?
I won't talk about the first case because it's simple and clear. For the second case, you have defined the variables q and w as follows:q=w=np.zeros(3)In this case, whatever changes you make to q will also be applied to w, because q and w refer to the same object (the same array in memory).When you run this:q[i],w[i] = myfunc(1,2,3)q[i] is first set to 3 and w[i] is then set to 5; since q and w are the same array, that second assignment overwrites q[i] with 5 as well. That explains why you have 5 every time.If you want to solve it, change the variable definition line from:q=w=np.zeros(3)to:w=np.zeros(3)q=np.zeros(3)
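A tiny demonstration of the aliasing described above:

import numpy as np

q = w = np.zeros(3)
print(q is w)       # True: both names point to the same array

q[0], w[0] = 3, 5   # q[0] is set to 3 first, then w[0] = 5 overwrites it
print(q[0], w[0])   # 5.0 5.0

q = np.zeros(3)     # separate arrays: no aliasing
w = np.zeros(3)
q[0], w[0] = 3, 5
print(q[0], w[0])   # 3.0 5.0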
Using groupby() with appending additional rows With the following csv input fileID,Name,Metric,Value0,K1,M1,2000,K1,M2,51,K2,M1,11,K2,M2,102,K2,M1,5002,K2,M2,8This code, groups the rows by the name column, e.g. two groups. Then it appends the values as columns for the same Name.df = pd.read_csv('test.csv', usecols=['ID','Name','Metric','Value'])print(df)my_array = []for name, df_group in df.groupby('Name'): my_array.append( pd.concat( [g.reset_index(drop=True) for _, g in df_group.groupby('ID')['Value']], axis=1) )print(my_array)The output looks like ID Name Metric Value0 0 K1 M1 2001 0 K1 M2 52 1 K2 M1 13 1 K2 M2 104 2 K2 M1 5005 2 K2 M2 8[ Value0 2001 5, Value Value0 1 5001 10 8]For example, my_array[1] which is K2 has two rows corresponding to M1 and M2. I would like to keep the IDs as well in the final data frames in my_array. So I want to add a third row and save it (M1, M2 and ID). Therefore, the final my_array should be[ Value0 2001 52 0, Value Value0 1 500 <-- For K2, there are two M1 (1 and 500)1 10 8 <-- For K2, there are two M2 (10 and 8)2 1 2] <-- For K2, there are two ID (1 and 2)How can I modify the code for that purpose?
You can use DataFrame.pivot for DataFrames pe groups and then append df1.columns in np.vstack:my_array = []for name, df_group in df.groupby('Name'): df1 = df_group.pivot('Metric','ID','Value') my_array.append(pd.DataFrame(np.vstack([df1, df1.columns])))print (my_array)[ 00 2001 52 0, 0 10 1 5001 10 82 1 2]
Multiply specific rows with a specific condition in pandas I want to multiply a column in pandas and replace the old value with the computed value.Example:EURUSD = 2print(df)

  Instrument  Price
1    BTC/EUR  40000
2    ETH/EUR   3000
3    SOL/USD   3200
4    ADA/EUR    2.2
5    DOT/USD     29

If the instrument ends with "EUR", I would like to multiply the price by the exchange rate EURUSD to convert the price to USD. The result would be:print(df)

  Instrument  Price
1    BTC/EUR  80000
2    ETH/EUR   6000
3    SOL/USD   3200
4    ADA/EUR    4.4
5    DOT/USD     29

I tried the following code:df.loc[df["Instrument"].str[-3:] == "EUR",df["Price"]]=df["Price"]*EURUSD
You can use np.where with Series.str.contains:In [167]: import numpy as np# Check whether the 'Instrument' value contains 'EUR', if TRUE then PRICE * 2, otherwise leave PRICE as is. In [168]: df['Price'] = np.where(df.Instrument.str.contains('EUR'), df.Price.mul(2), df.Price)In [169]: dfOut[169]:

  Instrument    Price
1    BTC/EUR  80000.0
2    ETH/EUR   6000.0
3    SOL/USD   3200.0
4    ADA/EUR      4.4
5    DOT/USD     29.0
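An equivalent in-place variant, closer to the loc attempt in the question (a sketch reusing the EURUSD variable from the question; str.endswith matches the "ends with EUR" condition a bit more precisely than contains):

EURUSD = 2
mask = df['Instrument'].str.endswith('EUR')
# multiply only the EUR-quoted rows, leaving the USD rows untouched
df.loc[mask, 'Price'] = df.loc[mask, 'Price'] * EURUSD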
Subsetting a dataframe I have a dataframe with three columns: Date, Id, pages. The values in pages are in order of time of visit. So I want the customers who visit page A after page B on the same date. As in the image below, ID 2 visits page A after B on 2 Nov.
Try:A_after_B = lambda x: x.eq('B').idxmax() < x.eq('A').idxmax()m1 = df['Page'].isin(['A', 'B'])m2 = df.groupby(['ID', 'Date'])['Page'].transform(A_after_B)out = df.loc[m1 & m2]print(out)# Output: ID Date Page5 2 02-Nov B6 2 02-Nov ASetup:data = {'ID': [1, 1, 1, 2, 2, 2, 2, 2], 'Date': ['01-Nov', '01-Nov', '01-Nov', '01-Nov', '01-Nov', '02-Nov', '02-Nov', '02-Nov'], 'Page': ['A', 'A', 'B', 'B', 'B', 'B', 'C', 'A']}df = pd.DataFrame(data)
Multiplication between different rows of a dataframe I have several dataframes looking like this:

  time_hr  cell_hour  id  attitude  hour
 0.028611  xxx         1  Cruise     1.0
 0.028333  xxx         4  Cruise     1.0
 0.004722  xxx        16  Cruise     1.0

I want to do specific multiplications between rows of the 'time_hr' column. I need to multiply each row with the other rows and store the values to use later. E.g., if the column values are [2,3,4], I would need the 2x3, 2x4, 3x2, 3x4, 4x2, 4x3 values. Part of the problem is that I have several dataframes which have different numbers of rows, so I would need a generic way of doing this. Is there a way? Thanks in advance.
It sounds like a cartesian product to me:from io import StringIO#sample data readingdata1 = """time_hr cell_hour id attitude hour0.028611 xxx 1 Cruise 1.00.028333 xxx 4 Cruise 1.00.004722 xxx 16 Cruise 1.0"""df = pd.read_csv(StringIO(data1), sep="\t")#filtering dataset to needed columnsdf_time = df[["id", "time_hr"]]df_comb = df_time.merge(df_time, how='cross')df_comb = df_comb[df_comb["id_x"] != df_comb["id_y"]]df_comb["time_hr"] = df_comb["time_hr_x"] * df_comb["time_hr_y"]df_comb.drop(columns=["time_hr_x", "time_hr_y"]).set_index(["id_x", "id_y"])# time_hr#id_x id_y #1 4 0.000811# 16 0.000135#4 1 0.000811# 16 0.000134#16 1 0.000135# 4 0.000134If you want to have more pythonic code you automatise itid_column = "id"product_columns = ["time_hr"]df_time = df[[id_column, *product_columns]]df_comb = df_time.merge(df_time, how='cross')df_comb = df_comb[df_comb[f"{id_column}_x"] != df_comb[f"{id_column}_y"]]for column in product_columns: df_comb[column] = df_comb[f"{column}_x"] * df_comb[f"{column}_y"]df_comb.set_index([f"{id_column}_x", f"{id_column}_y"])\ .drop(columns=[drop for column in product_columns for drop in [f"{column}_x", f"{column}_y"]])PS. I am not sure if that is what you were trying to achieve, if not, please add expected output data for those 3 input rows.
pandas: looping over a large file, how to get the number of chunks? I'm using pandas to read a large file; the file size is 11 GB.chunksize=100000for df_ia in pd.read_csv(file, chunksize=n, iterator=True, low_memory=False):My question is how to get the number of all the chunks. Right now what I can do is set an index and count one by one, but this doesn't look like a smart way:index = 0chunksize=100000for df_ia in pd.read_csv(file, chunksize=n, iterator=True, low_memory=False): index += 1So after looping over the whole file, the final index will be the number of all the chunks, but is there any faster way to get it directly?
You can use the enumerate function like this:for i, df_ia in enumerate(pd.read_csv(file, chunksize=5, iterator=True, low_memory=False)):Then, after you finish the iteration, i will be the number of chunks minus 1 (so the total number of chunks is i + 1).
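If the only goal is the chunk count (not the chunks themselves), a cheaper sketch is to count the rows once and divide; the assumption here is a plain text file with a single header line:

import math

with open(file) as f:
    n_rows = sum(1 for _ in f) - 1   # subtract the header line

n_chunks = math.ceil(n_rows / chunksize)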
Cumulative concatenation from last to first within a group in Python I'm looking to concatenate values within a column in a data frame in a cumulative manner. However, the column will be partitioned/grouped by the values in another column.I have been able to do this from the top down with the following code:df['Col_to_cum_Concat']=[y.CUM_CONCAT_TOP.tolist()[:z+1] for x, y in df.groupby('Group_Col')for z in range(len(y))]df['Col_to_cum_Concat'] = df['Col_to_cum_Concat'].astype(str).str.lower()Is there an easier way to go from last to first row within the group?Example: https://i.stack.imgur.com/YRtZo.png I have tried the code below but it is not exactly working.df['Col_to_cum_Concat']=[y.CUM_CONCAT_TOP.tolist()[z:] for x, y in df.groupby('Group_Col')for z in range(len(y))]df['Col_to_cum_Concat'] = df['Col_to_cum_Concat'].astype(str).str.lower()Also, I apologize in advance if this is a dumb question. I'm still a newbie at Python.
You can group by Group_Col and for each group, reverse Text and use cumsum to concatenate accumulatively:df['Col_to_cum_Concat'] = df.Text.groupby(df.Group_Col).transform(lambda g: g[::-1].add(' ').cumsum()).str.rstrip()df Group_Col Text Col_to_cum_Concat0 1 B A B1 1 A A2 2 C A B C3 2 B A B4 2 A A5 3 B A B6 3 A AData:df = pd.DataFrame({'Group_Col': [1,1,2,2,2,3,3], 'Text': ['B', 'A', 'C', 'B', 'A', 'B', 'A']})