How to convert only one axis when constructing a dataframe from a JSON string?

The read_json function has an argument convert_axes. The problem is that for my data the column labels MUST NOT be converted (i.e. keep them as strings), but the index MUST be converted. My dumb solution is to parse the string twice. Surely there is a better way?

    json_str = '{"1": {"1970-01-02 00:00:00": "foo"}}'
    temp = pd.read_json(json_str, convert_axes=False)
    want = pd.read_json(json_str, convert_axes=True)
    want.columns = temp.columns

json_str always comes in the format {column -> {index -> value}}, i.e. orient='columns'. The index does not have to be in datetime format; it could be an integer index, or something else.
Judging by the documentation and the source code, I don't think there is a way to apply convert_axes to just one axis. I'm not sure this is any better than your own solution:

    import pandas as pd

    json_str = '{"1": {"1970-01-02 00:00:00": "foo"}}'
    df = pd.read_json(json_str, convert_axes=False)
    df.index = pd.to_datetime(df.index)

Edit: I misunderstood the question the first time. Here's another go, which as requested leaves the column labels as strings but tries to convert the index:

    import pandas as pd

    def read_json_convert_index(json_str, dtypes=['int', 'float']):
        '''
        Leaves the columns untouched but tries to convert the index to datetime
        and then subsequently to the types provided in the list dtypes
        '''
        df = pd.read_json(json_str, convert_axes=False)
        try:
            df.index = pd.to_datetime(df.index)
            return df
        except:
            for dtype in dtypes:
                try:
                    df.index = df.index.astype(dtype)
                    # check if floats are actually just integers in disguise
                    if dtype == 'float' and all(
                            [abs(i - int(i)) <= 0.1**10 for i in df.index]):
                        df.index = df.index.astype('int')
                        return df
                    else:
                        return df
                except:
                    continue
            return df

Subsequent edit: As far as I can see from the source code and from experimentation, convert_axes tries to cast each axis as either a timestamp, an integer or a float, although I may well have overlooked something. Incidentally, through this experimentation I found some potentially unexpected (unwanted?) behaviour. If you run this...

    import pandas as pd

    json_str = '{"1": {"1.0": "foo"},"2": {"2.0": "bar"}}'
    df = pd.read_json(json_str, convert_axes=True)

... then the axis is converted to a DatetimeIndex ['1970-01-01 00:00:01', '1970-01-01 00:00:02']. I think the reason for this is that the float 1.0 is interpreted as the timestamp 1970-01-01 00:00:01. The function read_json_convert_index defined above does not do this, as it tries to cast the string '1.0' as a timestamp, which fails.

As for the condition abs(i - int(i)) <= 0.1**10: this checks whether the floats are very close to integer values and thus can be safely cast as integers. For instance, the code

    import pandas as pd

    json_str = '{"1": {"1.0": "foo"},"2": {"2.0": "bar"}}'
    df = read_json_convert_index(json_str)

produces the index [1, 2], rather than [1.0, 2.0].

Just a general point: I think one should be wary of automatic type conversion, since it can lead to unexpected behaviour, as demonstrated above.
How can I count words based on the column?

Hello. I am stuck here. Could you tell me how I can count words based on the tags in the second column? I want to find the most frequently used words with .most_common(), grouped by category: the top 10 for VB (verb) and the top 10 for nouns.
To spell out what Ari Cooper-Davis suggested:

    pos.loc[pos.tag == 'VBN'].word.value_counts()
    pos.loc[pos.tag == 'TO'].word.value_counts()

etc.
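
To get the ten most frequent words for every tag in one pass, here is a minimal sketch (assuming, as above, a dataframe named pos with columns 'word' and 'tag'); collections.Counter provides the .most_common() the question mentions:

    from collections import Counter

    # one list of (word, count) pairs per tag, e.g. top10.loc['VB'] for verbs
    top10 = pos.groupby('tag')['word'].apply(lambda words: Counter(words).most_common(10))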
Removing rows that do not start with/contain specific words

I have the following output

    Age
    '1 year old', '14 years old', 'music store', '7 years old ', '16 years old ',

created after using this line of code

    df['Age'] = df['Age'].str.split('.', expand=True, n=0)[0]
    df['Age'].tolist()

I would like to remove the rows from the dataset (ideally filtering into a copy or a new dataset) that do not start with a number, or a number + "year" + "old", or a number + "years" + "old". Expected output (in a new, filtered dataset):

    Age
    '1 year old', '14 years old', '7 years old ', '16 years old ',

How could I do that?
Use Series.str.contains and create a boolean mask to filter the dataframe:

    m = df['Age'].str.contains(r'(?i)^\d+\syears?\sold')
    df1 = df[m]

Result:

    # print(df1)
                Age
    0    1 year old
    1  14 years old
    3   7 years old
    4  16 years old

You can test the regex pattern here.
how to calculate a running total in a pandas dataframe

I have a data frame that contains precipitation data that looks like this

    Date Time, Raw Measurement, Site ID, Previous Raw Measurement, Raw - Previous
    2020-05-06 14:15:00,12.56,8085,12.56,0.0
    2020-05-06 14:30:00,12.56,8085,12.56,0.0
    2020-05-06 14:45:00,12.56,8085,12.56,0.0
    2020-05-06 15:00:00,2.48,8085,12.56,-10.08
    2020-05-06 15:30:00,2.48,8085,2.47,0.01
    2020-05-06 15:45:00,2.48,8085,2.48,0.0
    2020-05-06 16:00:00,2.50,8085,2.48,0.02
    2020-05-06 16:15:00,2.50,8085,2.50,0.0
    2020-05-06 16:30:00,2.50,8085,2.50,0.0
    2020-05-06 16:45:00,2.51,8085,2.50,0.01
    2020-05-06 17:00:00,2.51,8085,2.51,0.0

I would like to use the last column 'Raw - Previous', which is simply the difference between the most recent observation and the previous observation, to create a running total of the positive changes to make an accumulation column. From time to time I have to empty out the rain gauge, so the 'Raw - Previous' will be negative when that occurs, and I would like to filter this out of my df while keeping a tally of the total accumulation of the gauge. I've come across solutions that use df.sum(), but from what I can gather they only provide the total sum of the entire column and not the running total after each row.

In all, my goal is to have something like this

    Date Time, Raw Measurement, Site ID, Previous Raw Measurement, Raw - Previous, Total Accumulation
    2020-05-06 14:15:00,12.56,8085,12.56,0.0,12.56
    2020-05-06 14:30:00,12.56,8085,12.56,0.0,12.56
    2020-05-06 14:45:00,12.56,8085,12.56,0.0,12.56
    2020-05-06 15:00:00,2.48,8085,12.56,-10.08,12.56
    2020-05-06 15:15:00,2.47,8085,2.48,-0.01,12.56
    2020-05-06 15:30:00,2.48,8085,2.47,0.01,12.57
    2020-05-06 15:45:00,2.48,8085,2.48,0.0,12.57
    2020-05-06 16:00:00,2.50,8085,2.48,0.02,12.59
    2020-05-06 16:15:00,2.50,8085,2.50,0.0,12.59
    2020-05-06 16:30:00,2.50,8085,2.50,0.0,12.59
    2020-05-06 16:45:00,2.51,8085,2.50,0.01,12.60
    2020-05-06 17:00:00,2.51,8085,2.51,0.0,12.60

EDIT: Changed title to better reflect what the question became
np.where will also do the job.

    import pandas as pd, numpy as np

    df['Total Accumulation'] = np.where((df['Raw - Previous'] > 0), df['Raw - Previous'], 0).cumsum() + df.iloc[0, 3]
    df

Output:

                  Date Time  Raw Measurement  Site ID  Previous Raw Measurement  Raw - Previous  Total Accumulation
    0   2020-05-06 14:15:00            12.56     8085                     12.56            0.00               12.56
    1   2020-05-06 14:30:00            12.56     8085                     12.56            0.00               12.56
    2   2020-05-06 14:45:00            12.56     8085                     12.56            0.00               12.56
    3   2020-05-06 15:00:00             2.48     8085                     12.56          -10.08               12.56
    4   2020-05-06 15:30:00             2.48     8085                      2.47            0.01               12.57
    5   2020-05-06 15:45:00             2.48     8085                      2.48            0.00               12.57
    6   2020-05-06 16:00:00             2.50     8085                      2.48            0.02               12.59
    7   2020-05-06 16:15:00             2.50     8085                      2.50            0.00               12.59
    8   2020-05-06 16:30:00             2.50     8085                      2.50            0.00               12.59
    9   2020-05-06 16:45:00             2.51     8085                      2.50            0.10               12.69
    10  2020-05-06 17:00:00             2.51     8085                      2.51            0.00               12.69
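
A pandas-only variant of the same idea, as a small sketch (assuming the column names used in the question): clip the negative "gauge emptied" rows to zero before taking the cumulative sum.

    # keep only the positive changes, accumulate them, and start from the first reading
    df['Total Accumulation'] = df['Raw - Previous'].clip(lower=0).cumsum() + df['Raw Measurement'].iloc[0]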
Consistent ColumnTransformer for intersecting lists of columns

I want to use sklearn.compose.ColumnTransformer consistently (not in parallel, so the second transformer should be executed only after the first) for intersecting lists of columns in this way:

    log_transformer = p.FunctionTransformer(lambda x: np.log(x))

    df = pd.DataFrame({'a': [1, 2, np.NaN, 4], 'b': [1, np.NaN, 3, 4], 'c': [1, 2, 3, 4]})

    compose.ColumnTransformer(n_jobs=1, transformers=[
        ('num', impute.SimpleImputer(), ['a', 'b']),
        ('log', log_transformer, ['b', 'c']),
        ('scale', p.StandardScaler(), ['a', 'b', 'c'])
    ]).fit_transform(df)

So, I want to use SimpleImputer for 'a', 'b', then log for 'b', 'c', and then StandardScaler for 'a', 'b', 'c'. But:

- I get an array of shape (4, 7).
- I still get NaN in the 'a' and 'b' columns.

So, how can I use ColumnTransformer for different columns in the manner of a Pipeline?

UPD:

    pipe_1 = pipeline.Pipeline(steps=[
        ('imp', impute.SimpleImputer(strategy='constant', fill_value=42)),
    ])

    pipe_2 = pipeline.Pipeline(steps=[
        ('imp', impute.SimpleImputer(strategy='constant', fill_value=24)),
    ])

    pipe_3 = pipeline.Pipeline(steps=[
        ('scl', p.StandardScaler()),
    ])

    # in the real situation I don't know exactly what cols these arrays contain, so they are not static:
    cols_1 = ['a']
    cols_2 = ['b']
    cols_3 = ['a', 'b', 'c']

    proc = compose.ColumnTransformer(remainder='passthrough', transformers=[
        ('1', pipe_1, cols_1),
        ('2', pipe_2, cols_2),
        ('3', pipe_3, cols_3),
    ])
    proc.fit_transform(df).T

Output:

    array([[ 1.        ,  2.        , 42.        ,  4.        ],
           [ 1.        , 24.        ,  3.        ,  4.        ],
           [-1.06904497, -0.26726124,         nan,  1.33630621],
           [-1.33630621,         nan,  0.26726124,  1.06904497],
           [-1.34164079, -0.4472136 ,  0.4472136 ,  1.34164079]])

I understood why I have column duplicates, NaNs and unscaled values, but how can I fix this in the correct way when the cols are not static?

UPD2:

A problem may arise when the columns change their order. So, I want to use FunctionTransformer for column selection:

    def select_col(X, cols=None):
        return X[cols]

    ct1 = compose.make_column_transformer(
        (p.OneHotEncoder(), p.FunctionTransformer(select_col, kw_args=dict(cols=['a', 'b']))),
        remainder='passthrough'
    )

    ct1.fit(df)

But I get this output:

    ValueError: No valid specification of the columns. Only a scalar, list or slice of all integers or all strings, or boolean mask is allowed

How can I fix it?
The intended usage of ColumnTransformer is that the different transformers are applied in parallel, not sequentially. To accomplish your desired outcome, three approaches come to mind:

First approach:

    pipe_a = Pipeline(steps=[('imp', SimpleImputer()), ('scale', StandardScaler())])
    pipe_b = Pipeline(steps=[('imp', SimpleImputer()), ('log', log_transformer), ('scale', StandardScaler())])
    pipe_c = Pipeline(steps=[('log', log_transformer), ('scale', StandardScaler())])

    proc = ColumnTransformer(transformers=[
        ('a', pipe_a, ['a']),
        ('b', pipe_b, ['b']),
        ('c', pipe_c, ['c'])
    ])

This second one actually won't work, because the ColumnTransformer will rearrange the columns and forget the names*, so that the later ones will fail or apply to the wrong columns. When sklearn finalizes how to pass along dataframes or feature names, this may be salvaged, or you may be able to tweak it for your specific use case now. (* ColumnTransformer does already have a get_feature_names, but the actual data passed through the pipeline doesn't have that information.)

    imp_tfm = ColumnTransformer(
        transformers=[('num', impute.SimpleImputer(), ['a', 'b'])],
        remainder='passthrough'
    )
    log_tfm = ColumnTransformer(
        transformers=[('log', log_transformer, ['b', 'c'])],
        remainder='passthrough'
    )
    scl_tfm = ColumnTransformer(
        transformers=[('scale', StandardScaler(), ['a', 'b', 'c'])]
    )

    proc = Pipeline(steps=[
        ('imp', imp_tfm),
        ('log', log_tfm),
        ('scale', scl_tfm)
    ])

Third, there may be a way to use the Pipeline slicing feature to have one "master" pipeline that you cut down for each feature... this would work mostly like the first approach, might save some coding in the case of larger pipelines, but seems a little hacky. For example, here you can:

    pipe_a = clone(pipe_b)[1:]
    pipe_c = clone(pipe_b)
    pipe_c.steps[1] = ('nolog', 'passthrough')

(Without cloning or otherwise deep-copying pipe_b, the last line would change both pipe_c and pipe_b. The slicing mechanism returns a copy, so pipe_a doesn't strictly need to be cloned, but I've left it in to feel safer. Unfortunately you can't provide a discontinuous slice, so pipe_c = pipe_b[0, 2] doesn't work, but you can set the individual slices as I've done above to "passthrough" to disable them.)
FutureWarning: elementwise comparison failed; when dropping all rows from pandas dataframe

I want to drop those rows in a dataframe that have value '0' in the column 'candidate'. Some of my dataframes only have value '0' in this column. I expected that in this case I will get an empty dataframe, but instead I get the following warning and the unchanged dataframe. How can I get an empty dataframe in this case? Or prevent returning an unchanged dataframe?

Warning message:

    C:\Users\User\Anaconda3\lib\site-packages\pandas\core\ops\array_ops.py:253: FutureWarning: elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison
      res_values = method(rvalues)

My code:

    with open(filename, encoding='utf-8') as file:
        df = pd.read_csv(file, sep=',')
        df.drop(df.index[(df['candidate'] == '0')], inplace=True)
        print(df)

         post id  ... candidate
    0          1  ...         0
    1          1  ...         0
    2          1  ...         0
    3          1  ...         0
    4          1  ...         0
    ..       ...  ...       ...
    182       10  ...         0
    183       10  ...         0
    184       10  ...         0
    185       10  ...         0
    186       10  ...         0

    [187 rows x 4 columns]
Thanks everyone for your suggestions!

Indeed, the value type is int, but only if 0 is the only value in the column. Where other values are present, the type is object. So I solved the problem by using:

    df = df.loc[(df["candidate"] != "0") & (df["candidate"] != 0)]
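
Another option, sketched under the same column name: normalise the dtype up front so the comparison is always string against string and the FutureWarning never appears.

    df['candidate'] = df['candidate'].astype(str)   # make the column uniformly string-typed
    df = df[df['candidate'] != '0']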
Numpy split array without copying

I have a very large array of images (multiple GBs) and want to split it using numpy. This is my code:

    images = ...  # this is the very large array which contains a lot of images.
    images.shape => (50000, 256, 256)

    indices = ...  # array containing ranges, that group the images array like [(0, 300), (301, 580), (581, 860), ...]

    train_indices, test_indices = ...  # both arrays contain indices like [1, 6, 8, 19] which determine which groups are in the train and which are in the test group

    images_train, images_test = np.empty([0, images.shape[1], images.shape[2]]), np.empty([0, images.shape[1], images.shape[2]])

    # assign the image groups to either train or test set
    for (i, rng) in enumerate(indices):
        group_range = range(rng[0], rng[1]+1)
        if i in train_indices:
            images_train = np.concatenate((images_train, images[group_range]))
        else:
            images_test = np.concatenate((images_test, images[group_range]))

The problem with this code is that images_train and images_test are new arrays and the single images are always copied into these new arrays. This leads to double the memory needed to run the program.

Is there a way to split my images array into images_train and images_test without having to copy the images, but rather reuse them?

My intention with the indices is to group the images into roughly 150 groups, where images from one group should be either in the train or test set.
Without running code it's difficult to understand the details, but I can try to give some ideas. If you have images_train and images_test then you will probably use them to train and to test with commands something like

    .fit(images_train)
    .score(images_test)

An approach might be that you do not build images_train and images_test, but that you use parts of images directly:

    .fit(images[...])
    .score(images[...])

Now the question is: what should be in the [...] brackets? Or is there a numpy operator that extracts the right images[...]? First we have to think about what we should avoid:

- a for loop is always slow
- iterative filling of an array like A = np.concatenate((A, B[j])) is always slow
- Python's "fancy indexing" is always slow, as in group_range = range(rng[0], rng[1]+1); images[group_range]

Some ideas:

- use slices instead of "fancy indexing" (see here): images[rng[0] : rng[1]+1], or group_range = slice(rng[0], rng[1]+1); images[group_range]
- Is images_train = images[train_indices, :, :] and images_test = images[test_indices, :, :]?
- images.shape => (50000, 256, 256) is 3-dimensional?
- try whether numpy.where can give some assistance

Below are the methods I've mentioned:

    import numpy as np

    A = np.arange(20);             print("A =", A)
    B = A[5:16:2];                 print("B =", B)   # view of A only, faster
    j = slice(5, 16, 2); C = A[j]; print("C =", C)   # view of A only, faster
    k = [2, 4, 8, 12];   D = A[k]; print("D =", D)   # generates internal copies

    A = [ 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19]
    B = [ 5  7  9 11 13 15]
    C = [ 5  7  9 11 13 15]
    D = [ 2  4  8 12]
Reshaping a numpy vector

I am really new to numpy. I have a numpy vector for which y.shape returns (4000,). Is there a way I can have it return (4000, 1)?
The reshape function can be used to do this:

    np.reshape(y, (4000, 1))
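
Two equivalent spellings, as a small sketch, that avoid hard-coding the length:

    y_col = y.reshape(-1, 1)    # -1 lets numpy infer the 4000
    # equivalently:
    y_col = y[:, np.newaxis]    # adds a trailing axis of length 1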
Easiest way to print the head of a data in python?

I'm not defining my array with pandas, I'm using numpy to do it, and I would like to know if there is any other way to print the first 5 rows of the data. Using pandas this is how I would do it: print(data.head()).

This is how I defined my data:

    with open('B0_25.txt', 'r') as simulation_data:
        simulation_data = [x.strip() for x in simulation_data if x.strip()]

    data = [tuple(map(float, x.split())) for x in simulation_data[2:100]]

    x = [x[1] for x in data]
    y = [x[2] for x in data]
    z = [x[3] for x in data]
    mx = [x[4] for x in data]
    my = [x[5] for x in data]
    mz = [x[6] for x in data]

    mydata = np.array([x, y, z, mx, my, mz])
You need the transpose of mydata, otherwise x, y, z, mx, my, mz are the rows rather than the columns.

    mydata = np.array([x, y, z, mx, my, mz]).T
    print(mydata[:5, :])
print array as a matrix by having all elements in the right columns I am trying to print my dataframe as a matrix. To do so, I want to use an array. To be clear:I have a dictionary, Y, which is like this:{(0, 0): {(0, 0): 0, (1, 0): 1, (0, 1): 1, (0, 2): 2, (0, 3): 3, (1, 3): 4, (0, 4): 10, (1, 4): 9, (0, 5): 11, (1, 1): 2, (1, 2): 5, (2, 2): 6, (2, 4): 8, (1, 5): 10, (2, 0): 10, (3, 0): 9, (2, 1): 7, (3, 1): 8, (3, 2): 7, (2, 3): 7, (3, 4): 9, (2, 5): 9, (3, 5): 10, (3, 3): 8}, (1, 0): {(1, 0): 0, (0, 0): 1, (1, 1): 1, (0, 1): 2, (0, 2): 3, (0, 3): 4, (1, 3): 5, (0, 4): 11, (1, 4): 10, (0, 5): 12, (1, 2): 6, (2, 2): 7, (2, 4): 9, (1, 5): 11, (2, 0): 11, (3, 0): 10, (2, 1): 8, (3, 1): 9, (3, 2): 8, (2, 3): 8, (3, 4): 10, (2, 5): 10, (3, 5): 11, (3, 3): 9}, (0, 1): {(0, 1): 0, (0, 0): 1, (0, 2): 1, (1, 0): 2, (0, 3): 2, (1, 3): 3, (0, 4): 9, (1, 4): 8, (0, 5): 10, (1, 1): 3, (1, 2): 4, (2, 2): 5, (2, 4): 7, (1, 5): 9, (2, 0): 9, (3, 0): 8, (2, 1): 6, (3, 1): 7, (3, 2): 6, (2, 3): 6, (3, 4): 8, (2, 5): 8, (3, 5): 9, (3, 3): 7}, (0, 2): {(0, 2): 0, (0, 1): 1, (0, 3): 1, (0, 0): 2, (1, 0): 3, (1, 3): 2, (0, 4): 8, (1, 4): 7, (0, 5): 9, (1, 1): 4, (1, 2): 3, (2, 2): 4, (2, 4): 6, (1, 5): 8, (2, 0): 8, (3, 0): 7, (2, 1): 5, (3, 1): 6, (3, 2): 5, (2, 3): 5, (3, 4): 7, (2, 5): 7, (3, 5): 8, (3, 3): 6}, (0, 3): {(0, 3): 0, (0, 2): 1, (1, 3): 1, (0, 0): 3, (1, 0): 4, (0, 1): 2, (0, 4): 7, (1, 4): 6, (0, 5): 8, (1, 1): 5, (1, 2): 2, (2, 2): 3, (2, 4): 5, (1, 5): 7, (2, 0): 7, (3, 0): 6, (2, 1): 4, (3, 1): 5, (3, 2): 4, (2, 3): 4, (3, 4): 6, (2, 5): 6, (3, 5): 7, (3, 3): 5}, (1, 3): {(1, 3): 0, (0, 3): 1, (1, 2): 1, (0, 0): 4, (1, 0): 5, (0, 1): 3, (0, 2): 2, (0, 4): 6, (1, 4): 5, (0, 5): 7, (1, 1): 6, (2, 2): 2, (2, 4): 4, (1, 5): 6, (2, 0): 6, (3, 0): 5, (2, 1): 3, (3, 1): 4, (3, 2): 3, (2, 3): 3, (3, 4): 5, (2, 5): 5, (3, 5): 6, (3, 3): 4}, (0, 4): {(0, 4): 0, (1, 4): 1, (0, 5): 1, (0, 0): 10, (1, 0): 11, (0, 1): 9, (0, 2): 8, (0, 3): 7, (1, 3): 6, (1, 1): 12, (1, 2): 5, (2, 2): 4, (2, 4): 2, (1, 5): 2, (2, 0): 8, (3, 0): 7, (2, 1): 5, (3, 1): 6, (3, 2): 5, (2, 3): 3, (3, 4): 3, (2, 5): 3, (3, 5): 4, (3, 3): 6}, (1, 4): {(1, 4): 0, (0, 4): 1, (2, 4): 1, (1, 5): 1, (0, 0): 9, (1, 0): 10, (0, 1): 8, (0, 2): 7, (0, 3): 6, (1, 3): 5, (0, 5): 2, (1, 1): 11, (1, 2): 4, (2, 2): 3, (2, 0): 7, (3, 0): 6, (2, 1): 4, (3, 1): 5, (3, 2): 4, (2, 3): 2, (3, 4): 2, (2, 5): 2, (3, 5): 3, (3, 3): 5}, (0, 5): {(0, 5): 0, (0, 4): 1, (0, 0): 11, (1, 0): 12, (0, 1): 10, (0, 2): 9, (0, 3): 8, (1, 3): 7, (1, 4): 2, (1, 1): 13, (1, 2): 6, (2, 2): 5, (2, 4): 3, (1, 5): 3, (2, 0): 9, (3, 0): 8, (2, 1): 6, (3, 1): 7, (3, 2): 6, (2, 3): 4, (3, 4): 4, (2, 5): 4, (3, 5): 5, (3, 3): 7}, (1, 1): {(1, 1): 0, (1, 0): 1, (0, 0): 2, (0, 1): 3, (0, 2): 4, (0, 3): 5, (1, 3): 6, (0, 4): 12, (1, 4): 11, (0, 5): 13, (1, 2): 7, (2, 2): 8, (2, 4): 10, (1, 5): 12, (2, 0): 12, (3, 0): 11, (2, 1): 9, (3, 1): 10, (3, 2): 9, (2, 3): 9, (3, 4): 11, (2, 5): 11, (3, 5): 12, (3, 3): 10}, (1, 2): {(1, 2): 0, (1, 3): 1, (2, 2): 1, (0, 0): 5, (1, 0): 6, (0, 1): 4, (0, 2): 3, (0, 3): 2, (0, 4): 5, (1, 4): 4, (0, 5): 6, (1, 1): 7, (2, 4): 3, (1, 5): 5, (2, 0): 5, (3, 0): 4, (2, 1): 2, (3, 1): 3, (3, 2): 2, (2, 3): 2, (3, 4): 4, (2, 5): 4, (3, 5): 5, (3, 3): 3}, (2, 2): {(2, 2): 0, (1, 2): 1, (2, 1): 1, (3, 2): 1, (2, 3): 1, (0, 0): 6, (1, 0): 7, (0, 1): 5, (0, 2): 4, (0, 3): 3, (1, 3): 2, (0, 4): 4, (1, 4): 3, (0, 5): 5, (1, 1): 8, (2, 4): 2, (1, 5): 4, (2, 0): 4, (3, 0): 3, (3, 1): 2, (3, 4): 3, (2, 5): 3, (3, 5): 4, (3, 3): 2}, (2, 4): {(2, 4): 0, (1, 
4): 1, (2, 3): 1, (3, 4): 1, (2, 5): 1, (0, 0): 8, (1, 0): 9, (0, 1): 7, (0, 2): 6, (0, 3): 5, (1, 3): 4, (0, 4): 2, (0, 5): 3, (1, 1): 10, (1, 2): 3, (2, 2): 2, (1, 5): 2, (2, 0): 6, (3, 0): 5, (2, 1): 3, (3, 1): 4, (3, 2): 3, (3, 5): 2, (3, 3): 4}, (1, 5): {(1, 5): 0, (1, 4): 1, (0, 0): 10, (1, 0): 11, (0, 1): 9, (0, 2): 8, (0, 3): 7, (1, 3): 6, (0, 4): 2, (0, 5): 3, (1, 1): 12, (1, 2): 5, (2, 2): 4, (2, 4): 2, (2, 0): 8, (3, 0): 7, (2, 1): 5, (3, 1): 6, (3, 2): 5, (2, 3): 3, (3, 4): 3, (2, 5): 3, (3, 5): 4, (3, 3): 6}, (2, 0): {(2, 0): 0, (3, 0): 1, (0, 0): 10, (1, 0): 11, (0, 1): 9, (0, 2): 8, (0, 3): 7, (1, 3): 6, (0, 4): 8, (1, 4): 7, (0, 5): 9, (1, 1): 12, (1, 2): 5, (2, 2): 4, (2, 4): 6, (1, 5): 8, (2, 1): 3, (3, 1): 2, (3, 2): 5, (2, 3): 5, (3, 4): 7, (2, 5): 7, (3, 5): 8, (3, 3): 6}, (3, 0): {(3, 0): 0, (2, 0): 1, (3, 1): 1, (0, 0): 9, (1, 0): 10, (0, 1): 8, (0, 2): 7, (0, 3): 6, (1, 3): 5, (0, 4): 7, (1, 4): 6, (0, 5): 8, (1, 1): 11, (1, 2): 4, (2, 2): 3, (2, 4): 5, (1, 5): 7, (2, 1): 2, (3, 2): 4, (2, 3): 4, (3, 4): 6, (2, 5): 6, (3, 5): 7, (3, 3): 5}, (2, 1): {(2, 1): 0, (2, 2): 1, (3, 1): 1, (0, 0): 7, (1, 0): 8, (0, 1): 6, (0, 2): 5, (0, 3): 4, (1, 3): 3, (0, 4): 5, (1, 4): 4, (0, 5): 6, (1, 1): 9, (1, 2): 2, (2, 4): 3, (1, 5): 5, (2, 0): 3, (3, 0): 2, (3, 2): 2, (2, 3): 2, (3, 4): 4, (2, 5): 4, (3, 5): 5, (3, 3): 3}, (3, 1): {(3, 1): 0, (3, 0): 1, (2, 1): 1, (0, 0): 8, (1, 0): 9, (0, 1): 7, (0, 2): 6, (0, 3): 5, (1, 3): 4, (0, 4): 6, (1, 4): 5, (0, 5): 7, (1, 1): 10, (1, 2): 3, (2, 2): 2, (2, 4): 4, (1, 5): 6, (2, 0): 2, (3, 2): 3, (2, 3): 3, (3, 4): 5, (2, 5): 5, (3, 5): 6, (3, 3): 4}, (3, 2): {(3, 2): 0, (2, 2): 1, (3, 3): 1, (0, 0): 7, (1, 0): 8, (0, 1): 6, (0, 2): 5, (0, 3): 4, (1, 3): 3, (0, 4): 5, (1, 4): 4, (0, 5): 6, (1, 1): 9, (1, 2): 2, (2, 4): 3, (1, 5): 5, (2, 0): 5, (3, 0): 4, (2, 1): 2, (3, 1): 3, (2, 3): 2, (3, 4): 4, (2, 5): 4, (3, 5): 5}, (2, 3): {(2, 3): 0, (2, 2): 1, (2, 4): 1, (0, 0): 7, (1, 0): 8, (0, 1): 6, (0, 2): 5, (0, 3): 4, (1, 3): 3, (0, 4): 3, (1, 4): 2, (0, 5): 4, (1, 1): 9, (1, 2): 2, (1, 5): 3, (2, 0): 5, (3, 0): 4, (2, 1): 2, (3, 1): 3, (3, 2): 2, (3, 4): 2, (2, 5): 2, (3, 5): 3, (3, 3): 3}, (3, 4): {(3, 4): 0, (2, 4): 1, (0, 0): 9, (1, 0): 10, (0, 1): 8, (0, 2): 7, (0, 3): 6, (1, 3): 5, (0, 4): 3, (1, 4): 2, (0, 5): 4, (1, 1): 11, (1, 2): 4, (2, 2): 3, (1, 5): 3, (2, 0): 7, (3, 0): 6, (2, 1): 4, (3, 1): 5, (3, 2): 4, (2, 3): 2, (2, 5): 2, (3, 5): 3, (3, 3): 5}, (2, 5): {(2, 5): 0, (2, 4): 1, (3, 5): 1, (0, 0): 9, (1, 0): 10, (0, 1): 8, (0, 2): 7, (0, 3): 6, (1, 3): 5, (0, 4): 3, (1, 4): 2, (0, 5): 4, (1, 1): 11, (1, 2): 4, (2, 2): 3, (1, 5): 3, (2, 0): 7, (3, 0): 6, (2, 1): 4, (3, 1): 5, (3, 2): 4, (2, 3): 2, (3, 4): 2, (3, 3): 5}, (3, 5): {(3, 5): 0, (2, 5): 1, (0, 0): 10, (1, 0): 11, (0, 1): 9, (0, 2): 8, (0, 3): 7, (1, 3): 6, (0, 4): 4, (1, 4): 3, (0, 5): 5, (1, 1): 12, (1, 2): 5, (2, 2): 4, (2, 4): 2, (1, 5): 4, (2, 0): 8, (3, 0): 7, (2, 1): 5, (3, 1): 6, (3, 2): 5, (2, 3): 3, (3, 4): 3, (3, 3): 6}, (3, 3): {(3, 3): 0, (3, 2): 1, (0, 0): 8, (1, 0): 9, (0, 1): 7, (0, 2): 6, (0, 3): 5, (1, 3): 4, (0, 4): 6, (1, 4): 5, (0, 5): 7, (1, 1): 10, (1, 2): 3, (2, 2): 2, (2, 4): 4, (1, 5): 6, (2, 0): 6, (3, 0): 5, (2, 1): 3, (3, 1): 4, (2, 3): 3, (3, 4): 5, (2, 5): 5, (3, 5): 6}}Using pandas I converted the dictionary to a dataframe:df = pd.DataFrame(Y)df.index = [*df.index]df.columns = [*df.columns]arraydf = df.to_numpy()This is the dataframe I get: (0, 0) (1, 0) (0, 1) (0, 2) ... (3, 4) (2, 5) (3, 5) (3, 3)(0, 0) 0 1 1 2 ... 
9 9 10 8(1, 0) 1 0 2 3 ... 10 10 11 9(0, 1) 1 2 0 1 ... 8 8 9 7(0, 2) 2 3 1 0 ... 7 7 8 6(0, 3) 3 4 2 1 ... 6 6 7 5(1, 3) 4 5 3 2 ... 5 5 6 4(0, 4) 10 11 9 8 ... 3 3 4 6(1, 4) 9 10 8 7 ... 2 2 3 5(0, 5) 11 12 10 9 ... 4 4 5 7(1, 1) 2 1 3 4 ... 11 11 12 10(1, 2) 5 6 4 3 ... 4 4 5 3(2, 2) 6 7 5 4 ... 3 3 4 2(2, 4) 8 9 7 6 ... 1 1 2 4(1, 5) 10 11 9 8 ... 3 3 4 6(2, 0) 10 11 9 8 ... 7 7 8 6(3, 0) 9 10 8 7 ... 6 6 7 5(2, 1) 7 8 6 5 ... 4 4 5 3(3, 1) 8 9 7 6 ... 5 5 6 4(3, 2) 7 8 6 5 ... 4 4 5 1(2, 3) 7 8 6 5 ... 2 2 3 3(3, 4) 9 10 8 7 ... 0 2 3 5(2, 5) 9 10 8 7 ... 2 0 1 5(3, 5) 10 11 9 8 ... 3 1 0 6(3, 3) 8 9 7 6 ... 5 5 6 0Then, I convert the df to an array:arraydf = df.to_numpy()This is my output now:[ 0 1 1 2 3 4 10 9 11 2 5 6 8 10 10 9 7 8 7 7 9 9 10 8][ 1 0 2 3 4 5 11 10 12 1 6 7 9 11 11 10 8 9 8 8 10 10 11 9][ 1 2 0 1 2 3 9 8 10 3 4 5 7 9 9 8 6 7 6 6 8 8 9 7][2 3 1 0 1 2 8 7 9 4 3 4 6 8 8 7 5 6 5 5 7 7 8 6][3 4 2 1 0 1 7 6 8 5 2 3 5 7 7 6 4 5 4 4 6 6 7 5][4 5 3 2 1 0 6 5 7 6 1 2 4 6 6 5 3 4 3 3 5 5 6 4][10 11 9 8 7 6 0 1 1 12 5 4 2 2 8 7 5 6 5 3 3 3 4 6][ 9 10 8 7 6 5 1 0 2 11 4 3 1 1 7 6 4 5 4 2 2 2 3 5][11 12 10 9 8 7 1 2 0 13 6 5 3 3 9 8 6 7 6 4 4 4 5 7][ 2 1 3 4 5 6 12 11 13 0 7 8 10 12 12 11 9 10 9 9 11 11 12 10][5 6 4 3 2 1 5 4 6 7 0 1 3 5 5 4 2 3 2 2 4 4 5 3][6 7 5 4 3 2 4 3 5 8 1 0 2 4 4 3 1 2 1 1 3 3 4 2][ 8 9 7 6 5 4 2 1 3 10 3 2 0 2 6 5 3 4 3 1 1 1 2 4][10 11 9 8 7 6 2 1 3 12 5 4 2 0 8 7 5 6 5 3 3 3 4 6][10 11 9 8 7 6 8 7 9 12 5 4 6 8 0 1 3 2 5 5 7 7 8 6][ 9 10 8 7 6 5 7 6 8 11 4 3 5 7 1 0 2 1 4 4 6 6 7 5][7 8 6 5 4 3 5 4 6 9 2 1 3 5 3 2 0 1 2 2 4 4 5 3][ 8 9 7 6 5 4 6 5 7 10 3 2 4 6 2 1 1 0 3 3 5 5 6 4][7 8 6 5 4 3 5 4 6 9 2 1 3 5 5 4 2 3 0 2 4 4 5 1][7 8 6 5 4 3 3 2 4 9 2 1 1 3 5 4 2 3 2 0 2 2 3 3][ 9 10 8 7 6 5 3 2 4 11 4 3 1 3 7 6 4 5 4 2 0 2 3 5][ 9 10 8 7 6 5 3 2 4 11 4 3 1 3 7 6 4 5 4 2 2 0 1 5][10 11 9 8 7 6 4 3 5 12 5 4 2 4 8 7 5 6 5 3 3 1 0 6][ 8 9 7 6 5 4 6 5 7 10 3 2 4 6 6 5 3 4 1 3 5 5 6 0]My question is: How can I get the final array to seem a matrix? I want all the lines of the same lenghts and to be in the right order (have "nice" readable columns also)EDIT:asked infos:arraydf.shape(24, 24)arraydf.dtypeint64df.dtypes(0, 0) int64(0, 1) int64(0, 2) int64(1, 2) int64(0, 3) int64(0, 4) int64(1, 4) int64(0, 5) int64(1, 5) int64(1, 0) int64(2, 0) int64(1, 1) int64(1, 3) int64(2, 3) int64(3, 0) int64(2, 1) int64(2, 2) int64(2, 4) int64(2, 5) int64(3, 5) int64(3, 1) int64(3, 2) int64(3, 3) int64(3, 4) int64dtype: objectdf.info<bound method DataFrame.info of (0, 0) (0, 1) (0, 2) (1, 2) ... (3, 1) (3, 2) (3, 3) (3, 4)(0, 0) 0 1 2 3 ... 8 9 10 11(0, 1) 1 0 1 2 ... 7 8 9 10(0, 2) 2 1 0 1 ... 6 7 8 9(1, 2) 3 2 1 0 ... 5 6 7 8(0, 3) 3 2 1 2 ... 7 8 9 10(0, 4) 4 3 2 3 ... 8 9 10 11(1, 4) 5 4 3 4 ... 9 10 11 12(0, 5) 5 4 3 4 ... 9 10 11 12(1, 5) 6 5 4 5 ... 10 11 12 13(1, 0) 5 4 3 2 ... 3 4 5 6(2, 0) 6 5 4 3 ... 2 3 4 5(1, 1) 4 3 2 1 ... 4 5 6 7(1, 3) 4 3 2 1 ... 6 7 8 9(2, 3) 5 4 3 2 ... 7 8 9 10(3, 0) 7 6 5 4 ... 1 2 3 4(2, 1) 7 6 5 4 ... 3 4 5 6(2, 2) 8 7 6 5 ... 4 5 6 7(2, 4) 14 13 12 11 ... 6 5 4 3(2, 5) 13 12 11 10 ... 5 4 3 2(3, 5) 12 11 10 9 ... 4 3 2 1(3, 1) 8 7 6 5 ... 0 1 2 3(3, 2) 9 8 7 6 ... 1 0 1 2(3, 3) 10 9 8 7 ... 2 1 0 1(3, 4) 11 10 9 8 ... 3 2 1 0
If you want to print line-by-line and still have things aligned you can do the following:

    >>> for l in str(df.to_numpy()).split("\n"):
    ...     print(l)
    ...
    [[ 0  1  1  2  3  4 10  9 11  2  5  6  8 10 10  9  7  8  7  7  9  9 10  8]
     [ 1  2  0  1  2  3  9  8 10  3  4  5  7  9  9  8  6  7  6  6  8  8  9  7]
     [ 2  3  1  0  1  2  8  7  9  4  3  4  6  8  8  7  5  6  5  5  7  7  8  6]
     [ 3  4  2  1  0  1  7  6  8  5  2  3  5  7  7  6  4  5  4  4  6  6  7  5]
    ...
Keras Creating CNN Model "The added layer must be an instance of class Layer"

    from tensorflow.keras.layers import Conv2D, MaxPooling2D, BatchNormalization
    from tensorflow.keras.layers import Dropout, Flatten, Input, Dense

    def create_model():
        def add_conv_block(model, num_filters):
            model.add(Conv2D(num_filters, 3, activation='relu', padding='same'))
            model.add(BatchNormalization())
            model.add(Conv2D(num_filters, 3, activation='relu', padding='valid'))
            model.add(MaxPooling2D(pool_size=2))
            model.add(Dropout(0.2))
            return model

        model = tf.keras.models.Sequential()
        model.add(Input(shape=(32, 32, 3)))

        model = add_conv_block(model, 32)
        model = add_conv_block(model, 64)
        model = add_conv_block(model, 128)

        model.add(Flatten())
        model.add(Dense(3, activation='softmax'))

        model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
        return model

    model = create_model()
    model.summary()

(The question included a screenshot of the error traceback.)
The solution is to use InputLayer instead of Input. InputLayer is meant to be used with Sequential models. You can also omit the InputLayer entirely and specify input_shape in the first layer of the sequential model.

Input is meant to be used with the TensorFlow Keras functional API, not the sequential API.

    import tensorflow as tf
    from tensorflow.keras.layers import Conv2D, MaxPooling2D, BatchNormalization
    from tensorflow.keras.layers import Dropout, Flatten, InputLayer, Dense

    def create_model():
        def add_conv_block(model, num_filters):
            model.add(Conv2D(num_filters, 3, activation='relu', padding='same'))
            model.add(BatchNormalization())
            model.add(Conv2D(num_filters, 3, activation='relu', padding='valid'))
            model.add(MaxPooling2D(pool_size=2))
            model.add(Dropout(0.2))
            return model

        model = tf.keras.models.Sequential()
        model.add(InputLayer((32, 32, 3)))

        model = add_conv_block(model, 32)
        model = add_conv_block(model, 64)
        model = add_conv_block(model, 128)

        model.add(Flatten())
        model.add(Dense(3, activation='softmax'))

        model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
        return model

    model = create_model()
    model.summary()
Unique data for each day using Python/Pandas Dataframe

I'm trying to process each day's data using pandas. Below is my code, data and current output. However, the function getUniqueDates() has to traverse the full df to get the unique dates in the list as shown below. Is there any simple and efficient way to get each day's data which can be passed to the function processDataForEachDate()? Traversing a big list is time consuming. I have stripped down the columns in this example to keep it simple.

    data = {'date': ['2014-05-01 18:47:05.069722', '2014-05-01 18:47:05.119994',
                     '2014-05-02 18:47:05.178768', '2014-05-02 18:47:05.230071',
                     '2014-05-02 18:47:05.230071', '2014-05-02 18:47:05.280592',
                     '2014-05-03 18:47:05.332662', '2014-05-03 18:47:05.385109',
                     '2014-05-04 18:47:05.436523', '2014-05-04 18:47:05.486877'],
            'noOfJobs': [34, 25, 26, 15, 15, 14, 26, 25, 62, 41]}
    df = pd.DataFrame(data, columns=['date', 'noOfJobs'])
    df = df.astype(dtype={"date": 'datetime64[ns]'})
    print(df)

    # Output====================================
                            date  noOfJobs
    0 2014-05-01 18:47:05.069722        34
    1 2014-05-01 18:47:05.119994        25
    2 2014-05-02 18:47:05.178768        26
    3 2014-05-02 18:47:05.230071        15
    4 2014-05-02 18:47:05.230071        15
    5 2014-05-02 18:47:05.280592        14
    6 2014-05-03 18:47:05.332662        26
    7 2014-05-03 18:47:05.385109        25
    8 2014-05-04 18:47:05.436523        62
    9 2014-05-04 18:47:05.486877        41

    def getUniqueDates():
        todaysDate = datetime.datetime.today().strftime('%Y-%m-%d')
        listOfDates = []
        for c, r in df.iterrows():
            if r.date.date() != todaysDate:
                todaysDate = r.date.date()
                listOfDates.append(todaysDate)
        return listOfDates

    listOfDates = getUniqueDates()
    print(listOfDates)

    # Output====================================
    [datetime.date(2014, 5, 1), datetime.date(2014, 5, 2),
     datetime.date(2014, 5, 3), datetime.date(2014, 5, 4)]

    for eachDate in listOfDates:
        processDataForEachDate(eachDate)
You can access a NumPy array of unique dates with:

    >>> df.date.dt.date.unique()
    array([datetime.date(2014, 5, 1), datetime.date(2014, 5, 2),
           datetime.date(2014, 5, 3), datetime.date(2014, 5, 4)], dtype=object)

dt is an accessor method of the pandas Series df.date. Basically, it's a class that acts as a property-like interface to a bunch of date-time-related methods. The benefit is that it is vectorized (see here for a comparison to .iterrows() from a Pandas developer), and that accessor methods also use a "cached property" design:

Link to source
Link to an explanation
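
If the end goal is to hand each day's rows to processDataForEachDate anyway, here is a sketch that groups by the calendar date directly, so no intermediate list has to be built:

    for day, day_df in df.groupby(df['date'].dt.date):
        processDataForEachDate(day)   # or pass day_df to work on that day's rows directly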
tf.keras.backend way of replacing a tensor's value if it's less than 1

I am using Keras with the Tensorflow backend. In my loss function I have a tensor where I need to replace the elements that are less than 1 with a 1. I can see loads of functions available to me in the docs

https://www.tensorflow.org/api_docs/python/tf/keras/backend

but I'm not sure how to go about this. If I do:

    a_ = tf.Print(message='a_shape', input_=a_, data=[tf.shape(a_)])

I get the shape as:

    y_shape[128]

I need to essentially iterate through this tensor replacing elements that are less than 1 with a 1. How would I do this using the keras tensorflow API? Thanks
If a is your tensor you can do the following:

    b = a*tf.cast(a>1, 'float32') + tf.cast(a<=1, 'float32')
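
An arguably more direct alternative, as a sketch: tf.maximum (and its Keras-backend wrapper) is element-wise, so it clamps everything below 1 up to 1 in one call.

    import tensorflow as tf
    from tensorflow.keras import backend as K

    b = tf.maximum(a, 1.0)   # every element smaller than 1 becomes 1
    b = K.maximum(a, 1.0)    # the same through the Keras backend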
compare the next row value and change the current row value using pandas python

Is there any way of comparing a row value with the next row value and changing the current row value using pandas?

Basically, in the first data frame DF1, in the value column, one of the values is '999', so the values in the following rows for that 'user-id' are less than '999'. In this case I want to add '1000', which is 10^(len('999')), to all the successive values of that 'user-id'.

I tried using shift, but I found that it skips one of the row values by giving a 'Null'. And I am also not sure how to do it without creating a new value.

For example, if this is the data set I have, DF1

    user-id  serial-number  value  day
    1        2              10     1
    1        2              20     2
    1        2              30     3
    1        2              40     4
    1        2              50     5
    1        2              60     6
    1        2              70     7
    1        2              80     8
    1        2              90     9
    1        2              100    10
    1        2              999    11
    1        2              300    12
    1        2              400    13
    2        3              11     1
    2        3              12     2
    2        3              13     3
    2        3              14     4
    2        3              99     5
    2        3              16     6
    2        3              17     7
    2        3              18     8

I need the resultant data frame to be DF1:

    user-id  serial-number  value  day
    1        2              10     1
    1        2              20     1
    1        2              30     1
    1        2              40     1
    1        2              50     1
    1        2              60     1
    1        2              70     1
    1        2              80     1
    1        2              90     1
    1        2              100    1
    1        2              999    1
    1        2              1300   1
    1        2              1400   1
    .        .
    2        3              11     1
    2        3              12     1
    2        3              13     1
    2        3              14     1
    2        3              99     1
    2        3              116    1
    2        3              117    1
    2        3              118    1

I think I've explained the question properly. Similarly, I want to do it for all the values in the "value" column for each user-id. Any suggestions?
I have 2 methods for this:

The first multiplies by the max value of each user-id - it works on the sample dataset you provided but it might not work in general.

    df.set_index('user-id', inplace=True)
    df['value'] += df.groupby('user-id')['value'].apply(lambda x: (x.shift() > x).astype(int).cumsum()) * 10**df.groupby('user-id')['value'].max().apply(lambda x: len(str(x)))

The other one loops through each item:

    def foo(x):
        for i in range(1, len(x)):
            if x.iloc[i] < x.iloc[i-1]:
                x.iloc[i:] = x.iloc[i:] + 10**(len(str(x.iloc[i-1])))
        return x

    df['value'] = df.groupby('user-id')['value'].apply(foo)
How to visualize a matrix of categories as an RGB image?

I am using a neural network to do semantic segmentation (human parsing): it takes a photo of people as input, and for every pixel the network tells whether it is most likely to be head, leg, background or some other part of a human. The algorithm runs smoothly and gives a numpy.ndarray as output. The shape of the array is (1, 23, 600, 400), where 600*400 is the resolution of the input image and 23 is the number of categories. The 3d matrix looks like 23 stacked 2d matrices, where each layer uses a matrix of floats to give the probability that each pixel is of that category.

To visualize the matrix like the following figure, I used numpy.argmax to squash the 3d matrix into a 2d matrix that holds the index of the most probable category. But I don't have any idea how to proceed to get the visualization I want.

EDIT

Actually, I can do it in a trivial way: use a for loop to traverse every pixel and assign a color to it to get an image. However, this is not vectorized code, since numpy has built-in ways to speed up matrix manipulation, and I need to save CPU cycles for real-time segmentation.
It's fairly easy. All you need to have is a lookup table mapping the 23 labels into unique colors. The easiest way is to have a 23-by-3 numpy array with each row storing the RGB values for the corresponding label:

    import numpy as np
    import matplotlib.pyplot as plt

    lut = np.random.rand(23, 3)  # using random mapping - but you can do better
    lb = np.argmax(prediction, axis=1)  # converting probabilities to discrete labels
    rgb = lut[lb[0, ...], :]  # this is all it takes to do the mapping.
    plt.imshow(rgb)
    plt.show()

Alternatively, if you are only interested in the colormap for display purposes, you can use the cmap argument of plt.imshow, but this requires you to transform lut into a "colormap":

    from matplotlib.colors import LinearSegmentedColormap

    cmap = LinearSegmentedColormap.from_list('new_map', lut, N=23)
    plt.imshow(lb[0, ...], cmap=cmap)
    plt.show()
In Pandas, how to make a PivotTable for counting and skip replicates?

In Python3 and pandas I have a dataframe like this:

    IdComissao  SiglaComissao  NomeMembro
    12444       CCJR           Abelardo Camarinha
    12444       CCJR           Abelardo Camarinha
    12448       CAD            Abelardo Camarinha
    12448       CAD            Abelardo Camarinha
    12453       CMADS          Abelardo Camarinha
    12453       CMADS          Abelardo Camarinha
    12453       CMADS          Abelardo Camarinha
    13297       CPI-InvTer     Abelardo Camarinha
    8509        CFC            Abelardo Camarinha
    8509        CFC            Abelardo Camarinha
    13149       CPIATFC        Abelardo Camarinha
    12444       CCJR           Vaz de Lima
    12445       CFOP           Vaz de Lima
    12445       CFOP           Vaz de Lima
    12445       CFOP           Vaz de Lima
    12454       CAE            Vaz de Lima
    12455       CDD            Vaz de Lima
    8501        CCJ            Vaz de Lima
    8503        CAP            Vaz de Lima
    8509        CFC            Vaz de Lima
    8509        CFC            Vaz de Lima
    8511        CEP            Vaz de Lima
    8515        CFO            Vaz de Lima
    8515        CFO            Vaz de Lima
    8515        CFO            Vaz de Lima
    8515        CFO            Vaz de Lima
    8515        CFO            Vaz de Lima
    8519        CSOP           Vaz de Lima
    8521        CEDP           Vaz de Lima

I am looking for a way to count how many times each name "NomeMembro" has an item "SiglaComissao", without repeats. For example, the name "Abelardo Camarinha" has six types of "SiglaComissao" and the name "Vaz de Lima" has 11 types.

Please, is there a way to make a PivotTable to count items without repeats?
I think you're looking for groupby and nunique:

    df.groupby('NomeMembro')['SiglaComissao'].nunique()

Which returns:

    NomeMembro
    Abelardo Camarinha     6
    Vaz de Lima           11
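
Since the question asks specifically for a pivot table, the same unique count can also be expressed through pivot_table by passing a unique-count aggregation; a short sketch:

    pd.pivot_table(df, index='NomeMembro', values='SiglaComissao', aggfunc=pd.Series.nunique)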
How to reset GPU on keras / tensorflow hang?

Sometimes I have to kill my python application which uses the GPU with Keras or Tensorflow, and after that I can't run them anymore. This is probably because the GPU is still used by something. How can I free the GPU by force, without a machine reboot?

I tried the following shell script

    $ cat ~/bin/nvidia-reset
    #!/bin/sh
    sudo rmmod nvidia_uvm
    sudo rmmod nvidia_drm
    sudo rmmod nvidia_modeset
    sudo rmmod nvidia
    sudo nvidia-smi

But often it is unable to do the job, saying nvidia_uvm is busy.
Try this:

    keras.backend.clear_session()
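
A slightly fuller sketch of that idea (assuming model is the trained Keras model in the current process): drop the reference before clearing the session so TensorFlow can release the GPU memory it holds. If a separate, dead process still owns the memory, that process has to be killed instead (its PID shows up in nvidia-smi).

    import gc
    from keras import backend as K

    del model            # drop the Python reference to the model
    K.clear_session()    # tear down the TensorFlow graph/session behind Keras
    gc.collect()         # nudge Python into actually freeing the memory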
Python List append with respective index

I need help on list append. I have to export it into CSV with the respective list index.

    lst1 = ['a', 'b', 'c']
    lst2 = ['w', 'f', 'g']
    lst3 = ['e', 'r', 't']

    ap = []
    ap.append((lst1, lst2, lst3))

    output: [(['a', 'b', 'c'], ['w', 'f', 'g'], ['e', 'r', 't'])]

Expected output:

    [('a', 'w', 'e')
     ('b', 'f', 'r')
     ('c', 'g', 't')]

I need to export to Excel via Pandas, please help.

    col1 col2 col3
    a    w    e
    b    f    r
    c    g    t
You need a list of tuples, not a list of a tuple of lists. For your result, you can use zip with unpacking to extract items in an iterable of lists by index.

    df = pd.DataFrame(list(zip(*(lst1, lst2, lst3))), columns=['col1', 'col2', 'col3'])

    print(df)

      col1 col2 col3
    0    a    w    e
    1    b    f    r
    2    c    g    t

Then export to Excel as you normally would:

    df.to_excel('file.xlsx', index=False)
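
Equivalently, building the frame from a dict of columns skips the zip entirely; a short sketch:

    df = pd.DataFrame({'col1': lst1, 'col2': lst2, 'col3': lst3})
    df.to_excel('file.xlsx', index=False)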
Perform operations on last iteration values using iterrows

I have two datasets.

df

    Name  Date        Quantity
    ZMTD  2018-06-30  1000
    ZMTD  2018-05-31  975
    ZMTD  2018-04-30  920
    ZMTD  2018-03-30  900
    ZMTD  2018-02-28  840
    ZMTD  2018-01-31  820
    ZMTD  2017-12-30  760
    ZMTD  2017-11-31  600
    ZMTD  2017-10-30  1200
    ZMTD  2017-09-31  1170
    ZMTD  2017-08-30  1090
    ZMTD  2017-07-30  1100

df2

    Name  Date        Factor
    KOC   2018-01-15  0.5
    ZMTD  2017-11-10  1.5
    ZMTD  2018-03-20  2.5
    BND   2016-03-20  25

I am trying to divide the column 'Quantity' in df by the column 'Factor' in df2 on all rows that satisfy the condition df['Date'] < df2['Date']. I wrote the following code

    name = df['Name'].iloc[0]
    for i, row in df2.iterrows():
        if row[0] == name:
            factor_date = row[1]
            ratio = row[2]
            for j, rows in df.iterrows():
                new_quantity = rows[2]
                if (rows[1] < factor_date):
                    new_quantity = (new_quantity / ratio)
                    df.at[i, 'Quantity'] = new_quantity

When I run this code, I expect the following values

    Name  Date        Quantity
    ZMTD  2018-06-30  1000
    ZMTD  2018-05-31  975
    ZMTD  2018-04-30  920
    ZMTD  2018-03-30  900
    ZMTD  2018-02-28  336
    ZMTD  2018-01-31  328
    ZMTD  2017-12-30  304
    ZMTD  2017-11-31  240
    ZMTD  2017-10-30  320
    ZMTD  2017-09-31  312
    ZMTD  2017-08-30  290.66
    ZMTD  2017-07-30  293.34

But I get values where the Quantity column is divided by the latest Factor value 2.5, and not the values which were initially divided by 1.5.

I was wondering if we can save the values of the initial iteration and then run the new iteration on the previous values using iterrows.
This will give you what you seek:

    df = df1.merge(df2, on='Name', how='left', suffixes=('', '2'))
    df['Factor'] = ((df['Date'] < df['Date2']).astype(int) * df['Factor']).replace(0, 1)
    df = df.groupby(['Name', 'Date']).agg({'Quantity': 'max', 'Factor': 'prod'}).reset_index()
    df['Quantity'] = df['Quantity'] / df['Factor']
    df[['Name', 'Date', 'Quantity']].sort_values(['Name', 'Date'], ascending=False).reset_index(drop=True)

    #    Name        Date     Quantity
    #0   ZMTD  2018-06-30  1000.000000
    #1   ZMTD  2018-05-31   975.000000
    #2   ZMTD  2018-04-30   920.000000
    #3   ZMTD  2018-03-30   900.000000
    #4   ZMTD  2018-02-28   336.000000
    #5   ZMTD  2018-01-31   328.000000
    #6   ZMTD  2017-12-30   304.000000
    #7   ZMTD  2017-11-31   240.000000
    #8   ZMTD  2017-10-30   320.000000
    #9   ZMTD  2017-09-31   312.000000
    #10  ZMTD  2017-08-30   290.666667
    #11  ZMTD  2017-07-30   293.333333
Seaborn and Pandas: Make multiple x-category bar plot using multi index data in python

I have a multi-index dataframe that I've melted to look something like this:

    Color  Frequency              variable  value
    Red    2-3 times a month      x         22
    Red    A few days a week      x         45
    Red    At least once a day    x         344
    Red    Never                  x         5
    Red    Once a month           x         1
    Red    Once a week            x         0
    Red    Once every few months  x         4
    Blue   2-3 times a month      x         4
    Blue   A few days a week      x         49
    Blue   At least once a day    x         200
    Blue   Never                  x         7
    Blue   Once a month           x         19
    Blue   Once a week            x         10
    Blue   Once every few months  x         5
    Red    2-3 times a month      y         3
    Red    A few days a week      y         97
    Red    At least once a day    y         144
    Red    Never                  y         4
    Red    Once a month           y         0
    Red    Once a week            y         0
    Red    Once every few months  y         4
    Blue   2-3 times a month      y         44
    Blue   A few days a week      y         62
    Blue   At least once a day    y         300
    Blue   Never                  y         2
    Blue   Once a month           y         4
    Blue   Once a week            y         23
    Blue   Once every few months  y         6
    Red    2-3 times a month      z         4
    Red    A few days a week      z         12
    Red    At least once a day    z         101
    Red    Never                  z         0
    Red    Once a month           z         0
    Red    Once a week            z         10
    Red    Once every few months  z         0
    Blue   2-3 times a month      z         100
    Blue   A few days a week      z         203
    Blue   At least once a day    z         299
    Blue   Never                  z         0
    Blue   Once a month           z         0
    Blue   Once a week            z         204
    Blue   Once every few months  z         100

I'm trying to make a seaborn plot where the x-axis carries two categories, variable and Frequency, and the hue is based on Color. Moreover, I want the y-axis to be the proportion of value over the sum of the values for that variable for each Color; e.g. the y-value for variable "x.2-3 times a month" should be 22/(22+45+344+5+1+0+4), or 5.22%.

So far I have this:

    import seaborn as sns

    fig, ax1 = plt.subplots(figsize=(20, 10))
    sns.factorplot(x='variable', y='value', hue='Frequency', data=df, kind='bar', ax=ax1)

This is part of the way there. How do I also 1) group by Color and 2) take the proportion of values for each variable & Frequency, rather than the count?
This is what you need to find the proportion of each number for that group:

    df['proportion'] = df['value'] / df.groupby(['Color', 'variable'])['value'].transform('sum')

Output:

       variable              Frequency Color  value  proportion
    0         x      2-3 times a month   Red     22    0.052257
    1         x      A few days a week   Red     45    0.106888
    2         x    At least once a day   Red    344    0.817102
    3         x                  Never   Red      5    0.011876
    4         x           Once a month   Red      1    0.002375
    5         x            Once a week   Red      0    0.000000
    6         x  Once every few months   Red      4    0.009501
    7         x      2-3 times a month  Blue      4    0.013605
    8         x      A few days a week  Blue     49    0.166667
    9         x    At least once a day  Blue    200    0.680272
    10        x                  Never  Blue      7    0.023810
    11        x           Once a month  Blue     19    0.064626
    12        x            Once a week  Blue     10    0.034014
    13        x  Once every few months  Blue      5    0.017007
    14        y      2-3 times a month   Red      3    0.011905
    15        y      A few days a week   Red     97    0.384921
    16        y    At least once a day   Red    144    0.571429
    17        y                  Never   Red      4    0.015873
    18        y           Once a month   Red      0    0.000000
    19        y            Once a week   Red      0    0.000000
    20        y  Once every few months   Red      4    0.015873
    21        y      2-3 times a month  Blue     44    0.099773
    22        y      A few days a week  Blue     62    0.140590
    23        y    At least once a day  Blue    300    0.680272
    24        y                  Never  Blue      2    0.004535
    25        y           Once a month  Blue      4    0.009070
    26        y            Once a week  Blue     23    0.052154
    27        y  Once every few months  Blue      6    0.013605
    28        z      2-3 times a month   Red      4    0.031496
    29        z      A few days a week   Red     12    0.094488
    30        z    At least once a day   Red    101    0.795276
    31        z                  Never   Red      0    0.000000
    32        z           Once a month   Red      0    0.000000
    33        z            Once a week   Red     10    0.078740
    34        z  Once every few months   Red      0    0.000000
    35        z      2-3 times a month  Blue    100    0.110375
    36        z      A few days a week  Blue    203    0.224062
    37        z    At least once a day  Blue    299    0.330022
    38        z                  Never  Blue      0    0.000000
    39        z           Once a month  Blue      0    0.000000
    40        z            Once a week  Blue    204    0.225166
    41        z  Once every few months  Blue    100    0.110375
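
With the proportion column in place, a sketch of the plotting side (using factorplot as in the question; newer seaborn versions name it catplot): col='variable' gives one panel per variable, hue='Color' colours the bars, and y='proportion' plots the share instead of the raw count.

    import seaborn as sns

    g = sns.factorplot(x='Frequency', y='proportion', hue='Color', col='variable',
                       data=df, kind='bar')
    g.set_xticklabels(rotation=45)   # the frequency labels are long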
Can't Import Tensor Flow in Anaconda 3.6 on Windows 10

I just installed CUDA 9.2, cuDNN and TensorFlow on my Windows 10 laptop. I am unable to import TensorFlow in Python. I get a trace from Python that says it can't load a DLL, but it doesn't say which one it is. Here is the trace I received. Can you help?

    PS C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.2\bin> python
    Python 3.6.0 |Anaconda 4.3.0 (64-bit)| (default, Dec 23 2016, 11:57:41) [MSC v.1900 64 bit (AMD64)] on win32
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import tensorflow as tf
    Traceback (most recent call last):
      File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 58, in <module>
        from tensorflow.python.pywrap_tensorflow_internal import *
      File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 18, in <module>
        _pywrap_tensorflow_internal = swig_import_helper()
      File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 17, in swig_import_helper
        return importlib.import_module(mname)
      File "C:\Program Files\Anaconda3\lib\importlib\__init__.py", line 126, in import_module
        return _bootstrap._gcd_import(name[level:], package, level)
    ImportError: DLL load failed: The specified module could not be found.

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\__init__.py", line 24, in <module>
        from tensorflow.python import pywrap_tensorflow  # pylint: disable=unused-import
      File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\python\__init__.py", line 49, in <module>
        from tensorflow.python import pywrap_tensorflow
      File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 74, in <module>
        raise ImportError(msg)
    ImportError: Traceback (most recent call last):
      File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 58, in <module>
        from tensorflow.python.pywrap_tensorflow_internal import *
      File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 18, in <module>
        _pywrap_tensorflow_internal = swig_import_helper()
      File "C:\Program Files\Anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 17, in swig_import_helper
        return importlib.import_module(mname)
      File "C:\Program Files\Anaconda3\lib\importlib\__init__.py", line 126, in import_module
        return _bootstrap._gcd_import(name[level:], package, level)
    ImportError: DLL load failed: The specified module could not be found.

    Failed to load the native TensorFlow runtime.

    See https://www.tensorflow.org/install/install_sources#common_installation_problems
    for some common reasons and solutions. Include the entire stack trace
    above this error message when asking for help.
On Windows this is mostly caused by MSVCP140.dll missing, which is fixed by installing the Microsoft Visual C++ redistributable. If that doesn't help, the following DLL dependencies also exist for tensorflow:

    KERNEL32.dll
    WSOCK32.dll
    WS2_32.dll
    SHLWAPI.dll
    python35.dll
    MSVCP140.dll
    VCRUNTIME140.dll
    api-ms-win-crt-runtime-l1-1-0.dll
    api-ms-win-crt-heap-l1-1-0.dll
    api-ms-win-crt-utility-l1-1-0.dll
    api-ms-win-crt-stdio-l1-1-0.dll
    api-ms-win-crt-string-l1-1-0.dll
    api-ms-win-crt-math-l1-1-0.dll
    api-ms-win-crt-convert-l1-1-0.dll
    api-ms-win-crt-environment-l1-1-0.dll
    api-ms-win-crt-filesystem-l1-1-0.dll
    api-ms-win-crt-time-l1-1-0.dll
Semi-Interactive Pandas Dataframe in a GUI

There are a number of excellent answers to the question GUIs for displaying dataframes, but what I'm looking to do is a bit more advanced. I'd like to display a dataframe, but have a couple of the columns be interactive so the user can manually overwrite values (and the rest be static). It would be useful to have "total" rows that change with the overwritten values, and eventually to have some interactive buttons around the dataframe for loading and clearing data.

QTPandas looks promising, but appears to be dead as it is built off of a really old version of Pandas (0.17.1). Can this be done in Qt? Is something else better?
I love RStudio as my IDE, as I can not only view all objects created but also edit data in the IDE itself. There are many other great features too. And you can use RStudio for Python coding too (using the reticulate package). Spyder also gives this feature of viewing or editing the data frame.

However, if you're looking for a dedicated GUI with drag & drop features, you can use Pandas GUI. Features of pandasgui are:

- View DataFrames and Series (with MultiIndex support)
- Interactive plotting
- Filtering
- Statistical summary
- Data editing and copy / paste
- Import CSV files with drag & drop
- Search toolbar

Its first version was released in March 2019 and it is still developing. As of this date, you can't use it in Colab.
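
A minimal usage sketch of pandasgui (the dataframe here is only an illustration):

    import pandas as pd
    from pandasgui import show

    df = pd.DataFrame({'name': ['a', 'b', 'c'], 'total': [1.0, 2.5, 4.0]})
    show(df)   # opens the PandasGUI window with df loaded for viewing and editing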
how to get a hidden layer of tensorflow hub module

I want to use tensorflow hub to generate features for my images, but it seems that the 2048 features of the Inception module are not enough for my problem, because my class images are very similar. So I decided to use the features of a hidden layer of this module, for example:

    "module/InceptionV3/InceptionV3/Mixed_7c/concat:0"

So how can I write a function that gives me these ?*8*8*2048 features from my input images?
Please try

    module = hub.Module(...)  # As before.
    outputs = module(dict(images=images), signature="image_feature_vector", as_dict=True)
    print(outputs.items())

Besides the default output with the final feature vector output, you should see a bunch of intermediate feature maps, under keys starting with InceptionV3/ (or whichever other architecture you select). These are 4D tensors with shape [batch_size, feature_map_height, feature_map_width, num_features], so you might want to remove those middle dimensions by avg- or max-pooling over them before feeding this into classification.
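
To collapse such a feature map into a flat vector, one common option is global average pooling over the two spatial dimensions. A sketch (the exact dictionary key depends on the module; the one below is an assumption to be checked against outputs.keys()):

    feature_map = outputs["InceptionV3/Mixed_7c"]      # assumed key; shape [batch, 8, 8, 2048]
    pooled = tf.reduce_mean(feature_map, axis=[1, 2])  # -> [batch, 2048], one value per channel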
Keras: different validation AUROC during training and on epoch end

I'm getting different AUROC depending on when I calculate it. My code is

    def auc_roc(y_true, y_pred):
        # any tensorflow metric
        value, update_op = tf.metrics.auc(y_true, y_pred)
        return update_op

    model.compile(loss='binary_crossentropy', optimizer=optim, metrics=['accuracy', auc_roc])

    my_callbacks = [roc_callback(training_data=(x_train, y_train), validation_data=(x_test, y_test))]

    model.fit(x_train, y_train, validation_data=(x_test, y_test), callbacks=my_callbacks)

Where roc_callback is a Keras callback that calculates the AUROC at the end of each epoch using roc_auc_score from sklearn. I use the code that is defined here.

When I train the model, I get the following statistics:

    Train on 38470 samples, validate on 9618 samples
    Epoch 1/15
    38470/38470 [==============================] - auc_roc: 0.5116 - val_loss: 0.6899 - val_acc: 0.6274 - val_auc_roc: 0.5440
    roc-auc_val: 0.5973
    Epoch 2/15
    38470/38470 [==============================] - auc_roc: 0.5777 - val_loss: 0.6284 - val_acc: 0.6870 - val_auc_roc: 0.6027
    roc-auc_val: 0.6391
    .
    .
    .
    Epoch 12/15
    38470/38470 [==============================] - auc_roc: 0.8754 - val_loss: 0.9569 - val_acc: 0.7747 - val_auc_roc: 0.8779
    roc-auc_val: 0.6369

So how is the AUROC calculated during training going up with each epoch? Why is it different from the one calculated at the epoch end?
During training, the metrics are calculated "per batch", and they keep updating for each new batch in some sort of "mean" between the current batch metrics and the previous results. Your callback, on the other hand, calculates on the "entire data", and only at the end. There will be normal differences between the two methods. It's very common to see the next epoch start with a metric way better than the value shown for the last epoch, because the old metric includes in its mean value a lot of batches that weren't trained at that time.

You can perform a more precise comparison by calling model.evaluate(x_test, y_test). I'm not sure if there will be conflicts by calling this "during" training, but you could train each epoch individually and call this between each epoch.

Something strange: there isn't any y_pred in your roc_callback. Are you calling model.predict() inside it?
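
A sketch of the "evaluate between epochs" suggestion, wrapped in a callback so fit can stay a single call (the metric order follows the compile call in the question: loss, accuracy, then auc_roc; tf.keras imports are shown, adjust if you use standalone Keras):

    from tensorflow import keras

    class EvalAtEpochEnd(keras.callbacks.Callback):
        def __init__(self, x_val, y_val):
            super().__init__()
            self.x_val, self.y_val = x_val, y_val

        def on_epoch_end(self, epoch, logs=None):
            # evaluate on the full validation set with the compiled metrics
            loss, acc, auc = self.model.evaluate(self.x_val, self.y_val, verbose=0)
            print('epoch %d: evaluated val AUC %.4f' % (epoch + 1, auc))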
Replace None with NaN and ignore NoneType in Pandas

I'm attempting to create a raw string variable from a pandas dataframe, which will eventually be written to a .cfg file, by firstly joining two columns together as shown below and avoiding None:

Section of df:

         command                    value
    ...
    439  sensitivity                "0.9"
    440  cl_teamid_overhead_always  1
    441  host_writeconfig           None
    ...

code:

    ...
    df = df['value'].replace('None', np.nan, inplace=True)
    print df

    df = df['command'].astype(str)+' '+df['value'].astype(str)
    print df

    cfg_output = '\n'.join(df.tolist())
    print cfg_output

I've attempted to replace all the None values with NaN firstly so that no lines in cfg_output contain "None" as part of the string. However, by doing so I seem to get a few undesired results. I made use of print statements to see what is going on.

It seems that df = df['value'].replace('None', np.nan, inplace=True) simply outputs None.

It seems that df = df['command'].astype(str)+' '+df['value'].astype(str) and cfg_output = '\n'.join(df.tolist()) cause the following error:

    TypeError: 'NoneType' object has no attribute '__getitem__'

Therefore, I was thinking that by ignoring any occurrences of NaN the code may run smoothly, although I'm unsure how to do so using Pandas.

Ultimately, my desired output would be as follows:

    sensitivity "0.9"
    cl_teamid_overhead_always 1
    host_writeconfig
First of all, df['value'].replace('None', np.nan, inplace=True) returns None because you're calling the method with the inplace=True argument. This argument tells replace to not return anything but instead modify the original dataframe as it is, similar to how pop or append work on lists.

With that being said, you can also get the desired output calling fillna with an empty string:

    import pandas as pd
    import numpy as np

    d = {
        'command': ['sensitivity', 'cl_teamid_overhead_always', 'host_writeconfig'],
        'value': ['0.9', 1, None]
    }

    df = pd.DataFrame(d)

    # df['value'].replace('None', np.nan, inplace=True)
    df = df['command'].astype(str) + ' ' + df['value'].fillna('').astype(str)
    cfg_output = '\n'.join(df.tolist())

    >>> print(cfg_output)
    sensitivity 0.9
    cl_teamid_overhead_always 1
    host_writeconfig
How to display GroupBy Count as Bokeh vbar for categorical data I have a small issue creating a Bokeh vbar in 0.13.0from a dataframe groupby count operation. The response here was for a multi level group by where as mine isn't. Updates since postingadded sample data and code based on provided answer to see if issue is my code or something elseOutlineThe pandas dataframe contains survey responses ExcellentGoodPoorSatisfactoryVery Goodunder columns ('ResponseID','RateGeneral','RateAccomodation','RateClean','RateServices')and the dtype as been set as catagory. I want to display a bokeh vbar of the Response Count groupby using DemoDFCount = DemoDF.groupby('RateGeneral').count()My bokeh code looks like thispTest= figure(title='Rating in General',plot_height=350)pTest.vbar(width=0.9,source=DemoDFCount, x='RateGeneral',top='ResponseID')show(pTest))but doesn't produce any chart only a title and toolbarIf I use pandas DemoDFCount.plot.bar(legend=False) I can plot something but how do I create this chart in bokeh?Sample data as json export50 rows of sample data from DemoDF.to_json()'{"ResponseID":{"0":1,"1":2,"2":3,"3":4,"4":5,"5":6,"6":7,"7":8,"8":9,"9":10,"10":11,"11":12,"12":13,"13":14,"14":15,"15":16,"16":17,"17":18,"18":19,"19":20,"20":21,"21":22,"22":23,"23":24,"24":25,"25":26,"26":27,"27":28,"28":29,"29":30,"30":31,"31":32,"32":33,"33":34,"34":35,"35":36,"36":37,"37":38,"38":39,"39":40,"40":41,"41":42,"42":43,"43":44,"44":45,"45":46,"46":47,"47":48,"48":49,"49":50},"RateGeneral":{"0":"Good","1":"Satisfactory","2":"Good","3":"Poor","4":"Good","5":"Satisfactory","6":"Excellent","7":"Good","8":"Good","9":"Satisfactory","10":"Satisfactory","11":"Excellent","12":"Satisfactory","13":"Excellent","14":"Satisfactory","15":"Very Good","16":"Satisfactory","17":"Excellent","18":"Very Good","19":"Excellent","20":"Satisfactory","21":"Good","22":"Satisfactory","23":"Excellent","24":"Satisfactory","25":"Good","26":"Excellent","27":"Very Good","28":"Good","29":"Very Good","30":"Good","31":"Satisfactory","32":"Very Good","33":"Very Good","34":"Very Good","35":"Good","36":"Excellent","37":"Satisfactory","38":"Excellent","39":"Good","40":"Good","41":"Satisfactory","42":"Very Good","43":"Very Good","44":"Poor","45":"Excellent","46":"Good","47":"Excellent","48":"Satisfactory","49":"Good"},"RateAccomodation":{"0":"Very Good","1":"Excellent","2":"Satisfactory","3":"Satisfactory","4":"Good","5":"Good","6":"Very Good","7":"Very Good","8":"Good","9":"Satisfactory","10":"Satisfactory","11":"Excellent","12":"Satisfactory","13":"Excellent","14":"Good","15":"Very Good","16":"Good","17":"Excellent","18":"Excellent","19":"Very Good","20":"Good","21":"Satisfactory","22":"Good","23":"Excellent","24":"Satisfactory","25":"Very Good","26":"Excellent","27":"Excellent","28":"Good","29":"Very Good","30":"Very Good","31":"Very Good","32":"Excellent","33":"Very Good","34":"Very Good","35":"Very Good","36":"Excellent","37":"Satisfactory","38":"Excellent","39":"Good","40":"Excellent","41":"Poor","42":"Very Good","43":"Very Good","44":"Poor","45":"Excellent","46":"Satisfactory","47":"Excellent","48":"Good","49":"Good"},"RateClean":{"0":"Excellent","1":"Excellent","2":"Satisfactory","3":"Good","4":"Excellent","5":"Very Good","6":"Very Good","7":"Excellent","8":"Excellent","9":"Satisfactory","10":"Satisfactory","11":"Excellent","12":"Good","13":"Good","14":"Excellent","15":"Excellent","16":"Good","17":"Excellent","18":"Excellent","19":"Excellent","20":"Good","21":"Very Good","22":"Poor","23":"Very Good","24":"Satisfactory","25":"Very 
Good","26":"Excellent","27":"Good","28":"Poor","29":"Good","30":"Excellent","31":"Good","32":"Good","33":"Very Good","34":"Satisfactory","35":"Good","36":"Excellent","37":"Satisfactory","38":"Excellent","39":"Good","40":"Very Good","41":"Satisfactory","42":"Excellent","43":"Excellent","44":"Very Good","45":"Excellent","46":"Good","47":"Excellent","48":"Good","49":"Excellent"},"RateServices":{"0":"Very Good","1":"Excellent","2":"Good","3":"Good","4":"Excellent","5":"Good","6":"Good","7":"Very Good","8":"Good","9":"Satisfactory","10":"Satisfactory","11":"Excellent","12":"Good","13":"Very Good","14":"Good","15":"Excellent","16":"Poor","17":"Excellent","18":"Excellent","19":"Excellent","20":"Good","21":"Good","22":"Very Good","23":"Excellent","24":"Satisfactory","25":"Very Good","26":"Excellent","27":"Very Good","28":"Good","29":"Excellent","30":"Very Good","31":"Excellent","32":"Good","33":"Excellent","34":"Very Good","35":"Very Good","36":"Excellent","37":"Satisfactory","38":"Excellent","39":"Good","40":"Very Good","41":"Satisfactory","42":"Excellent","43":"Excellent","44":"Good","45":"Excellent","46":"Very Good","47":"Excellent","48":"Good","49":"Very Good"}}'
The fact that it is multi-level in the other question is not really relevant. When you use a Pandas GroupBy as a data source for Bokeh, Bokeh uses the results of group.describe (which includes counts for each column per group) as the contents of the data source. Here is a complete example that shows Counts-per-Origin from the "cars" data set:from bokeh.io import show, output_filefrom bokeh.plotting import figurefrom bokeh.sampledata.autompg import autompg as dfoutput_file("groupby.html")df.origin = df.origin.astype(str)group = df.groupby('origin')p = figure(plot_height=350, x_range=group, title="Count by Origin", toolbar_location=None, tools="")# using yr_count, but count for any column would workp.vbar(x='origin', top='yr_count', width=0.8, source=group)p.y_range.start = 0p.xgrid.grid_line_color = Noneshow(p)
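Applied to the survey dataframe from the question, the same pattern might look like the sketch below. This is an assumption-laden sketch: DemoDF is taken to hold the sample data, RateGeneral is cast to plain strings (a categorical dtype is one common reason the bars never show up), and the count column name follows Bokeh's convention of appending the aggregate name, e.g. 'ResponseID_count'.

from bokeh.io import show
from bokeh.plotting import figure

# make sure the grouping column is str, not a pandas Categorical
DemoDF['RateGeneral'] = DemoDF['RateGeneral'].astype(str)
group = DemoDF.groupby('RateGeneral')

pTest = figure(title='Rating in General', plot_height=350, x_range=group)
# Bokeh exposes the groupby's describe() output, hence 'ResponseID_count'
pTest.vbar(x='RateGeneral', top='ResponseID_count', width=0.9, source=group)
pTest.y_range.start = 0
show(pTest)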
Defining a default argument after with None: what if it's an array? I'm passing an argument to a function such that I want to delay giving the default parameter, in the usual way:def f(x = None): if x == None: x = ...The only problem is that x is likely to be a numpy array. Then x == None returns a boolean array, which I can't condition on. The compiler suggests to use .any() or .all()But if I writedef f(x = None): if (x == None).any(): x = ...this won't work if x goes to its default value, because then None == None is a Boolean, which has no .any() or .all() methods. What's my move here?
When comparing against None, it is a good practice to use is as opposed to ==. Usually it doesn't make a difference, but since objects are free to implement equality any way they see fit, it is not always a reliable option.Unfortunately, this is one of those cases where == doesn't cut it, since comparing to numpy arrays returns a boolean mask based on the condition. Luckily, there is only a single instance of None in any given Python program, so we can actually check the identity of an object using the is operator to figure out if it is None or not.>>> None is NoneTrue >>> np.array([1,2,3]) is NoneFalseSo no need for any or all, you can update your function to something like:def f(x=None): if x is None: print('None') else: print('Not none')In action:>>> f()None>>> f(np.array([1,2,3]))Not none
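Putting it together, a minimal sketch of the usual default-argument pattern; the zero-vector default here is just a hypothetical placeholder.

import numpy as np

def f(x=None):
    if x is None:
        # hypothetical default; substitute whatever the real default array should be
        x = np.zeros(3)
    return x

print(f())                     # falls back to the default
print(f(np.array([1, 2, 3])))  # uses the array that was passed in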
Extract string if match the value in another list I want to get the value of the lookup list instead of a boolean. I have tried the following codes:val = pd.DataFrame(['An apple','a Banana','a cat','a dog'])lookup = ['banana','dog']# I tried the follow code:val.iloc[:,0].str.lower().str.contains('|'.join(lookup))# it returns:0 False1 True2 False3 TrueName: 0, dtype: boolWhat I want:0 False1 banana2 False3 dogAny help is appreciated.
You can use extract instead of contains, and fillna with False:import rep = rf'\b({"|".join(lookup)})\b'val[0].str.extract(p, expand=False, flags=re.I).fillna(False) 00 False1 banana2 False3 dog
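A self-contained version of the same idea, combining the question's setup with the extract call (note that filling with False puts booleans and strings in one object-dtype Series, which may or may not be convenient downstream):

import re
import pandas as pd

val = pd.DataFrame(['An apple', 'a Banana', 'a cat', 'a dog'])
lookup = ['banana', 'dog']

p = rf'\b({"|".join(lookup)})\b'
result = val[0].str.extract(p, expand=False, flags=re.I).fillna(False)
print(result)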
pandas multiindex set_labels I have a pandas multiindex like this oneresult.indexMultiIndex(levels=[[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [1, 6, 12, 17, 18, 21, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 64, 66, 67, 70, 71, 72, 73, 74]], labels=[[10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14], [17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27]], names=['ref', None])And I want to change the second label by this onenew_label[-0.9, -0.85, -0.8, -0.75, -0.7, -0.65, -0.6, -0.55, -0.5, -0.45, -0.4, -0.9, -0.85, -0.8, -0.75, -0.7, -0.65, -0.6, -0.55, -0.5, -0.45, -0.4, -0.9, -0.85, -0.8, -0.75, -0.7, -0.65, -0.6, -0.55, -0.5, -0.45, -0.4, -0.9, -0.85, -0.8, -0.75, -0.7, -0.65, -0.6, -0.55, -0.5, -0.45, -0.4, -0.9, -0.85, -0.8, -0.75, -0.7, -0.65, -0.6, -0.55, -0.5, -0.45, -0.4]so the result should be result.index MultiIndex(levels=[[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [1, 6, 12, 17, 18, 21, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 64, 66, 67, 70, 71, 72, 73, 74]], labels=[[10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14], [-0.9, -0.85, -0.8, -0.75, -0.7, -0.65, -0.6, -0.55, -0.5, -0.45, -0.4, -0.9, -0.85, -0.8, -0.75, -0.7, -0.65, -0.6, -0.55, -0.5, -0.45, -0.4, -0.9, -0.85, -0.8, -0.75, -0.7, -0.65, -0.6, -0.55, -0.5, -0.45, -0.4, -0.9, -0.85, -0.8, -0.75, -0.7, -0.65, -0.6, -0.55, -0.5, -0.45, -0.4, -0.9, -0.85, -0.8, -0.75, -0.7, -0.65, -0.6, -0.55, -0.5, -0.45, -0.4]], names=['ref', None])I tried withresult.index.set_labels(labels=new_label,level=1)But instead I get thisMultiIndex(levels=[[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [1, 6, 12, 17, 18, 21, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 64, 66, 67, 70, 71, 72, 73, 74]], labels=[[10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], names=['wnd dir ref', None])The labels are fulfilled with 0What is wrong or missing?
If want use set_label need same types, here integers (it seems bug):#test if working with integersmux1 = mux.set_labels((np.array(new_label) * 100).astype(int), level=1)print (mux1)MultiIndex(levels=[[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [1, 6, 12, 17, 18, 21, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 64, 66, 67, 70, 71, 72, 73, 74]], labels=[[10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14], [-90, -85, -80, -75, -70, -65, -60, -55, -50, -45, -40, -90, -85, -80, -75, -70, -65, -60, -55, -50, -45, -40, -90, -85, -80, -75, -70, -65, -60, -55, -50, -45, -40, -90, -85, -80, -75, -70, -65, -60, -55, -50, -45, -40, -90, -85, -80, -75, -70, -65, -60, -55, -50, -45, -40]], names=['ref', None])mux = pd.MultiIndex(levels=[[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [1, 6, 12, 17, 18, 21, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 64, 66, 67, 70, 71, 72, 73, 74]], labels=[[10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14], [17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27]], names=['ref', None])df = pd.DataFrame([0] * 55, index=mux, columns=['a'])Possible solution is set_index for new 3 level MultiIndex and remove second one by reset_index:df = df.set_index([new_label], append=True).reset_index(level=1, drop=True)Or create new MultiIndex:df.index = [df.index.get_level_values(0), new_label]print (df.head(10)) aref 10 -0.90 0 -0.85 0 -0.80 0 -0.75 0 -0.70 0 -0.65 0 -0.60 0 -0.55 0 -0.50 0 -0.45 0Also if need set MultiIndex names:df.index = pd.MultiIndex.from_arrays([df.index.get_level_values(0), new_label], names=('ref','new'))print (df.head(10)) aref new 10 -0.90 0 -0.85 0 -0.80 0 -0.75 0 -0.70 0 -0.65 0 -0.60 0 -0.55 0 -0.50 0 -0.45 0
Renaming columns in a Dataframe given that column contains data in a loop Scenario: I have a list of dataframes. I am trying to rename the columns and change their order, but the column names do not exactly match, for example: a column might be "iterationlist" or "iteration".I tried a loop inside a loop to read all the columns and if the name contains what I need, change the name of that column, but I get the error:TypeError: unhashable type: 'list'Code:import pandas as pdimport osfrom Tkinter import Tkfrom tkFileDialog import askdirectoryfrom os import listdirfrom os.path import isfile, joinimport glob# Get contentmypath = "//DGMS/Desktop/uploaded"all_files = glob.glob(os.path.join(mypath, "*.xls*"))contentdataframes = [pd.read_excel(f).assign(Datanumber=os.path.basename(f).split('.')[0].split('_')[0], ApplyOn='') for f in all_files]#get list of dates and put to dfsfor dfs in contentdataframes: dfs.rename(index=str, columns={[col for col in dfs.columns if 'iteration' in col]: "iterationlistfinal"})Question: What is the proper way to do this?
You can use str.contains to get the column names matching the substring and then reorder the columns by subsetting with both lists joined:contentdataframes = []for f in all_files: df = pd.read_excel(f) df['Datanumber'] = os.path.basename(f).split('.')[0].split('_')[0] df['ApplyOn']= '' mask = df.columns.str.contains('iteration') c1 = df.columns[mask].tolist() c2 = df.columns[~mask].tolist() df = df[c1 + c2] contentdataframes.append(df)
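If you also want the rename from the original attempt (rather than only reordering), a dictionary comprehension avoids the unhashable-list error, because rename expects a mapping of old name to new name rather than a list used as a key. A sketch, assuming every matching column should become 'iterationlistfinal':

contentdataframes = []
for f in all_files:
    df = pd.read_excel(f)
    # build {old_name: new_name} only for columns containing 'iteration'
    rename_map = {col: 'iterationlistfinal' for col in df.columns if 'iteration' in col}
    df = df.rename(columns=rename_map)
    contentdataframes.append(df)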
Pandas: shifting columns depending on if NaN or not I have a dataframe like so:phone_number_1_clean phone_number_2_clean phone_number_3_clean NaN NaN 8546987 8316589 8751369 NaN 4569874 NaN 2645981I would like phone_number_1_clean to be as populated as possible. This will require shifting either phone_number_2_clean or phone_number_3_clean to phone_number_1_clean and vice versa meaning getting phone_number_2_clean as populated as possible if phone_number_1_clean is populated etc. The output should look something like:phone_number_1_clean phone_number_2_clean phone_number_3_clean 8546987 NaN NaN 8316589 8751369 NaN 4569874 2645981 NaNI might be able to do it np.wherestatements but could be messy.The approach would preferably be vectorised as will be applied to large-ish dataframes.
Use:#for each row remove NaNs and create new Series - rows in final df df1 = df.apply(lambda x: pd.Series(x.dropna().values), axis=1)#if possible different number of columns like original df is necessary reindexdf1 = df1.reindex(columns=range(len(df.columns)))#assign original columns namesdf1.columns = df.columnsprint (df1) phone_number_1_clean phone_number_2_clean phone_number_3_clean0 8546987 NaN NaN1 8316589 8751369 NaN2 4569874 2645981 NaNOr:s = df.stack()s.index = [s.index.get_level_values(0), s.groupby(level=0).cumcount()]df1 = s.unstack().reindex(columns=range(len(df.columns)))df1.columns = df.columnsprint (df1) phone_number_1_clean phone_number_2_clean phone_number_3_clean0 8546987 NaN NaN1 8316589 8751369 NaN2 4569874 2645981 NaNOr a bit changed justify function:def justify(a, invalid_val=0, axis=1, side='left'): """ Justifies a 2D array Parameters ---------- A : ndarray Input array to be justified axis : int Axis along which justification is to be made side : str Direction of justification. It could be 'left', 'right', 'up', 'down' It should be 'left' or 'right' for axis=1 and 'up' or 'down' for axis=0. """ if invalid_val is np.nan: mask = pd.notnull(a) #changed to pandas notnull else: mask = a!=invalid_val justified_mask = np.sort(mask,axis=axis) if (side=='up') | (side=='left'): justified_mask = np.flip(justified_mask,axis=axis) out = np.full(a.shape, invalid_val, dtype=object) if axis==1: out[justified_mask] = a[mask] else: out.T[justified_mask.T] = a.T[mask.T] return outdf = pd.DataFrame(justify(df.values, invalid_val=np.nan), index=df.index, columns=df.columns)print (df) phone_number_1_clean phone_number_2_clean phone_number_3_clean0 8546987 NaN NaN1 8316589 8751369 NaN2 4569874 2645981 NaNPerformance:#3k rowsdf = pd.concat([df] * 1000, ignore_index=True)In [442]: %%timeit ...: df1 = df.apply(lambda x: pd.Series(x.dropna().values), axis=1) ...: #if possible different number of columns like original df is necessary reindex ...: df1 = df1.reindex(columns=range(len(df.columns))) ...: #assign original columns names ...: df1.columns = df.columns ...: 1.17 s ± 10.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)In [443]: %%timeit ...: s = df.stack() ...: s.index = [s.index.get_level_values(0), s.groupby(level=0).cumcount()] ...: ...: df1 = s.unstack().reindex(columns=range(len(df.columns))) ...: df1.columns = df.columns ...: ...: 5.88 ms ± 74.6 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)In [444]: %%timeit ...: pd.DataFrame(justify(df.values, invalid_val=np.nan), index=df.index, columns=df.columns) ...: 941 µs ± 131 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
The truth value of a Series is ambiguous Pandas What's the problem with this code? I used many comparison lambda function on the dataframe,but this one returns ValueError: ('The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().', u'occurred at index 2') error.I searched about it and found many question asked before about it,but none of them fit my problem.My code:def Return(close,pClose): i = ((close - pClose) / close) * 100 if (i > 0): return 1 if (i < 0): return 0df['return'] = df.apply(lambda y:Return(close=df['Close'], pClose=df['pClose']),axis=1)
The problem with your code is that you pass the whole column of the dataframe to your function:df.apply(lambda y:Return(close=df['Close'], pClose=df['pClose']),axis=1)In the function you are calculating a new value i which is in fact a column:i = ((close - pClose) / close) * 100The comparison statement then cannot decide how to evaluate what you are trying to do because it gets a column as input:if (i > 0):So I think what you want is something like:df['return'] = df.apply(lambda y:Return(close=y['Close'], pClose=y['pClose']),axis=1)
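As a side note, the row-wise apply is not strictly needed here; a vectorized sketch with numpy.where computes the same flag for the whole column at once (leaving NaN where the percentage change is exactly zero, matching the original function's implicit None):

import numpy as np

ret = (df['Close'] - df['pClose']) / df['Close'] * 100
df['return'] = np.where(ret > 0, 1, np.where(ret < 0, 0, np.nan))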
TFrecords occupy more space than original JPEG images I'm trying to convert my Jpeg image set into to TFrecords. But TFrecord file is taking almost 5x more space than the image set. After a lot of googling, I learned that when JPEG are written into TFrecords, they aren't JPEG anymore. However I haven't come across an understandable code solution to this problem. Please tell me what changes ought to be made in the code below to write JPEG to Tfrecords.def print_progress(count, total): pct_complete = float(count) / total msg = "\r- Progress: {0:.1%}".format(pct_complete) sys.stdout.write(msg) sys.stdout.flush()def wrap_int64(value): return tf.train.Feature(int64_list=tf.train.Int64List(value=value))def wrap_bytes(value): return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))def convert(image_paths , labels, out_path): # Args: # image_paths List of file-paths for the images. # labels Class-labels for the images. # out_path File-path for the TFRecords output file. print("Converting: " + out_path) # Number of images. Used when printing the progress. num_images = len(image_paths) # Open a TFRecordWriter for the output-file. with tf.python_io.TFRecordWriter(out_path) as writer: # Iterate over all the image-paths and class-labels. for i, (path, label) in enumerate(zip(image_paths, labels)): # Print the percentage-progress. print_progress(count=i, total=num_images-1) # Load the image-file using matplotlib's imread function. img = imread(path) # Convert the image to raw bytes. img_bytes = img.tostring() # Create a dict with the data we want to save in the # TFRecords file. You can add more relevant data here. data = \ { 'image': wrap_bytes(img_bytes), 'label': wrap_int64(label) } # Wrap the data as TensorFlow Features. feature = tf.train.Features(feature=data) # Wrap again as a TensorFlow Example. example = tf.train.Example(features=feature) # Serialize the data. serialized = example.SerializeToString() # Write the serialized data to the TFRecords file. writer.write(serialized)Edit: Can someone please answer this ?!!
Instead of converting the image to an array and back to bytes, we can just use the built-in open function to get the bytes. That way, the compressed JPEG will be written into the TFRecord. Replace these two linesimg = imread(path)img_bytes = img.tostring()with img_bytes = open(path,'rb').read()Reference: https://github.com/tensorflow/tensorflow/issues/9675
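Keep in mind that the reading pipeline has to change accordingly: the record now stores encoded JPEG bytes instead of a raw array, so it should be decoded with tf.image.decode_jpeg rather than tf.decode_raw. A sketch of a parse function for the TF 1.x API used above, with feature names matching the dict in the convert function:

import tensorflow as tf

def parse_example(serialized):
    features = tf.parse_single_example(
        serialized,
        features={
            'image': tf.FixedLenFeature([], tf.string),
            'label': tf.FixedLenFeature([], tf.int64),
        })
    # decode the compressed JPEG bytes back into an image tensor
    image = tf.image.decode_jpeg(features['image'], channels=3)
    return image, features['label']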
API download data, recommendations? I am trying to decode data from an API, I just cannot think of a clean way to extract the value and time values. I been trying to do string manipulations, but ends up very complex. {"max_scale": "0", "min_scale": "0", "graph_label": "Light Level", "average": "1", "length_of_time": "3600", "upper_warn": "1000", "lower_warn": "30", "cached": false, "values": [{"value": 0.0, "time": 1531170219}, {"value": 0.0, "time": 1531170159}, {"value": 0.0, "time": 1531170099}, {"value": 0.0, "time": 1531170039}, {"value": 0.0, "time": 1531169979}, {"value": 0.0, "time": 1531169919}, {"value": 0.0, "time": 1531169859}, {"value": 0.0, "time": 1531169799}, {"value": 0.0, "time": 1531169739}, {"value": 0.0, "time": 1531169679}, {"value": 0.0, "time": 1531169619}, {"value": 0.0, "time": 1531166679}], "timestamp_to": "1531170222.798", "format_string": "%f Lux"}
This is in JSON format. Use the Python json encoder/decoder to load this data. It will turn it into a dictionary, and something like my_json_dict['values'] will return you that list.
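For example, a minimal sketch with the standard library, assuming the payload above sits in a string called response_text:

import json

data = json.loads(response_text)                 # parse the API response
values = [v['value'] for v in data['values']]    # the light-level readings
times = [v['time'] for v in data['values']]      # the matching unix timestamps
print(times[:3], values[:3])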
Pandas - min and max of a column up until each line I have a dataframe like this:pd.DataFrame({'group': {0: 1, 1: 1, 2: 1, 3: 1, 4: 2, 5: 2, 6: 2}, 'year': {0: 2007, 1: 2008, 2: 2009, 3: 2010, 4: 2006, 5: 2007, 6: 2008}, 'amount': {0: 2.0, 1: -4.0, 2: 5, 3: 7.0, 4: 8.0, 5: -10.0, 6: 12.0}}]) group year amount0 1 2007 21 1 2008 -42 1 2009 53 1 2010 74 2 2006 85 2 2007 -106 2 2008 12I want to add min, max, number of years that amount is negative,number of years that amount is positive for each group, up until each year (inclusive). My ideal dataframe looks like this group year amount min_utd max_utd no_n_utd no_p_utd0 1 2007 2 2 2 0 11 1 2008 -4 -4 2 1 12 1 2009 5 -4 5 1 23 1 2010 7 -4 7 1 34 2 2006 8 8 8 0 15 2 2007 -10 -10 8 1 1 6 2 2008 12 -10 12 1 2I am only aware of agg with which you can do for the whole group, or rolling when you do for a sliding window, but I dont know how to calculate from the beginning up to each line.
Use DataFrameGroupBy.cummax with DataFrameGroupBy.cummin and then DataFrameGroupBy.cumsum with comparing by lt (<) and ge (>=):df[['min_utd','max_utd']] = df.groupby('group')['amount'].agg(['cummin','cummax'])df['no_n_utd'] = df['amount'].lt(0).astype(int).groupby(df['group']).cumsum()df['no_p_utd'] = df['amount'].ge(0).astype(int).groupby(df['group']).cumsum()print (df) group year amount min_utd max_utd no_n_utd no_p_utd0 1 2007 2 2 2 0 11 1 2008 -4 -4 2 1 12 1 2009 5 -4 5 1 23 1 2010 7 -4 7 1 34 2 2006 8 8 8 0 15 2 2007 -10 -10 8 1 16 2 2008 12 -10 12 1 2Another solution with same principe but custom function:def f(x): a = x.cummin() b = x.cummax() c = x.lt(0).cumsum() d = x.ge(0).cumsum() return pd.DataFrame({'min_utd':a, 'max_utd':b, 'no_n_utd':c, 'no_p_utd':d})df = df.join(df.groupby('group')['amount'].apply(f))print (df) group year amount min_utd max_utd no_n_utd no_p_utd0 1 2007 2 2 2 0 11 1 2008 -4 -4 2 1 12 1 2009 5 -4 5 1 23 1 2010 7 -4 7 1 34 2 2006 8 8 8 0 15 2 2007 -10 -10 8 1 16 2 2008 12 -10 12 1 2
Pandas groupby function returns NaN values I have a list of people with fields unique_id, sex, born_at (birthday) and I’m trying to group by sex and age bins, and count the rows in each segment.Can’t figure out why I keep getting NaN or 0 as the output for each segment. Here’s the latest approach I've taken...Data sample:|---------------------|------------------|------------------|| unique_id | sex | born_at ||---------------------|------------------|------------------|| 1 | M | 1963-08-04 ||---------------------|------------------|------------------|| 2 | F | 1972-03-22 ||---------------------|------------------|------------------|| 3 | M | 1982-02-10 ||---------------------|------------------|------------------|| 4 | M | 1989-05-02 ||---------------------|------------------|------------------|| 5 | F | 1974-01-09 ||---------------------|------------------|------------------|Code:df[‘num_people’]=1breakpoints = [18,25,35,45,55,65]df[[‘sex’,’born_at’,’num_people’]].groupby([‘sex’,pd.cut(df.born_at.dt.year, bins=breakpoints)]).agg(‘count’)I’ve tried summing as the agg type, removing NaNs from the data series, pivot_table using the same pd.cut function but no luck. Guessing there’s also probably a better way to do this that doesn’t involve creating a column of 1s.Desired output would be something like this...The extra born_at column isn't necessary in the output and I'd also like the age bins to be 18 to 24, 25 to 34, etc. instead of 18 to 25, 25 to 35, etc. but I'm not sure how to specify that either.
I think you missed the calculation of the current age. The ranges you define for splitting the birthday years only make sense when you use them for calculating the current age (otherwise all grouped cells will be NaN or zero, because the lowest year in your sample is 1963 and the right-most bin edge is 65). So first of all you want to calculate the age:datetime.now().year-df.born_at.dt.yearThis information can then be used to group the data (grouped first by sex):df.groupby(['sex', pd.cut(datetime.now().year-df.born_at.dt.year, bins=breakpoints)]).agg('count')In order to get rid of the NaN cells you simply do a fillna(0) like this:df.groupby(['sex', pd.cut(datetime.now().year-df.born_at.dt.year, bins=breakpoints)]).agg('count').fillna(0).rename(columns={'born_at':'count'})
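If you also want the bins labelled '18 to 24', '25 to 34' and so on instead of the default interval notation, pd.cut accepts a labels argument. A sketch using the question's column names (the label strings are just illustrative, and right=False makes each bin include its lower edge):

from datetime import datetime
import pandas as pd

breakpoints = [18, 25, 35, 45, 55, 65]
labels = ['18 to 24', '25 to 34', '35 to 44', '45 to 54', '55 to 64']

age = datetime.now().year - df['born_at'].dt.year
out = df.groupby(['sex', pd.cut(age, bins=breakpoints, right=False, labels=labels)])['unique_id'].count()
print(out)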
TensorFlow FailedPreconditionError: iterator has not been initialized I want to display the values of tensors.Here is my code:#some code heredata = [data_tensor for data_tensor in data_dict.items()]for i in data: with tf.Session() as sess: sess.run(tf.global_variables_initializer()) print (sess.run(i[1])) print('_'*100)However, I got the error:FailedPreconditionError (see above for traceback): GetNext() failed because the iterator has not been initialized. Ensure that you have run the initializer operation for this iterator before getting the next element.How to solve the problem?Thank you very much.
It looks like you have a dataset iterator that has not been initialized. A dataset iterator is not a variable, hence does not get initialized with tf.global_variables_intializer(). You have to initialize it explicitly by calling sess.run(iterator.initializer) on whatever dataset iterator you created (e.g. with iterator = dataset.make_initializable_iterator(). Additionally, note that each dataset iteration (running the GetNext node) yields a complete element of the dataset, even if you only care about a subset of the element. If data_dict is the output of an iteration (created with data_dict = iterator.get_next()), doing print(sess.run(i[1])), while only giving you one of the k,v pairs in the dictionary, actually yields the whole data_dict. I expect that this pipeline would not give you the output you expect unless you reinitialize the iterator within the for loop.To make what I'm saying more concrete, if you had a dataset created as follows, you would expect the following iteration outputs:## dataset: [{'a':0, 'b':10}, {'a':1, 'b':11}, {'a':2, 'b':12}, ...]dataset = tf.data.Dataset.range(10).map(lambda x: {'a': x, 'b': x + 10})iterator = dataset.make_initializable_iterator()next_elem = iterator.get_next()with tf.Session() as sess: sess.run(iterator.initializer) print(sess.run(next_elem['a'])) # 0 print(sess.run(next_elem['a'])) # 1 print(sess.run(next_elem['b'])) # 12
Bar plot from dataframe I have a data frame that looks something like this. print (df) a b0 1 58961 1 40002 1 896473 2 544 2 35685 2 487616 3 58967 3 28008 3 5894And I want to make a bar plot. That looks like this. I tried with groupby.()but it only prints only one value of 1 one values of 2 etc... a = df_result.groupby(['column1'])['column2'].mean()a.plot.bar()plt.show()Would appreciate some guidance how to solve the problem, so I would have all of the values in a chart.
You need cumcount with set_index and unstack first to reshape the data:a = df.set_index(['a',df.groupby('a').cumcount()])['b'].unstack()print (a) 0 1 2a 1 5896 4000 896472 54 3568 487613 5896 2800 5894a.plot.bar()
Perform a 'join' on two numpy arrays I have two numpy array's that look like the following:a = np.array([[1, 10], [2, 12], [3, 5]])b = np.array([[1, 0.78], [3, 0.23]])The first number in the list is the id parameter, and the second one is a value. I'm looking to combine them. The expected output to be equal to this:np.array([1, 10, 0.78], [2, 12, 0], [3, 5, 0.23])Is there a function (or combination of functions that can do this for me? Any help is greatly appreciated.If an object is not found, a 0 is put in it's place.
You are using the first element like a key of a dictionary or an index of a Pandas series. So I used those tools which are better suited for the combination you are looking to do. I then convert back to the array you are looking for.import pandas as pdimport numpy as npa = np.array([[1, 10], [2, 12], [3, 5]])b = np.array([[1, 0.78], [3, 0.23]])pd.concat( map(pd.Series, map(dict, (a, b))), axis=1).fillna(0).reset_index().valuesarray([[ 1. , 10. , 0.78], [ 2. , 12. , 0. ], [ 3. , 5. , 0.23]])Notes:I map dict and pd.Series on the iterable (a, b)I pass those to pd.concat which produces a Pandas DataFrameFill in missing values with 0Reset the index to get back those keys of yoursGet at just the valuesIf you have another arraya = np.array([[1, 10], [2, 12], [3, 5]])b = np.array([[1, 0.78], [3, 0.23]])c = np.array([[1, 3.14], [2, 3.14]])pd.concat( map(pd.Series, map(dict, (a, b, c))), axis=1).fillna(0).reset_index().valuesarray([[ 1. , 10. , 0.78, 3.14], [ 2. , 12. , 0. , 3.14], [ 3. , 5. , 0.23, 0. ]])If you want to quickly convert your arrays to the Pandas seriesNotice that I wrote to new names a_, b_, and c_ to avoid overwriting your other namesa_, b_, c_ = map(pd.Series, map(dict, (a, b, c)))To get a DataFramedf = pd.concat(map(pd.Series, map(dict, (a, b, c))), axis=1).fillna(0)df 0 1 21 10 0.78 3.142 12 0.00 3.143 5 0.23 0.00
set_printoptions for numpy array doesn't work for numpy ndarray? I'm trying to use set_printoptions from the answer to the question How to pretty-printing a numpy.array without scientific notation and with given precision?But I get this error:Traceback (most recent call last): File "neural_network.py", line 57, in <module> output.set_printoptions(precision=3)AttributeError: 'numpy.ndarray' object has no attribute 'set_printoptions'Apparently, not all numpy arrays are created equal, and what works for a regular numpy.array doesn't work for a numpy.ndarray.How can I format a numpy.ndarray for priting such as to remove scientific notation?UPDATEChanging the call to numpy.set_printoptions() removes the error, but has no effect on the print format of the ndarray contents.
Try numpy.array2string, which takes an ndarray as input and lets you set the precision and suppress scientific notation for small values. Scroll down in the documentation link below for examples.https://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.array2string.html
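A small sketch of both options: array2string for a one-off formatted string, or set_printoptions with suppress=True if the global call seemed to have no effect because scientific notation was still enabled.

import numpy as np

output = np.array([[1.5e-04, 2.0], [3.0, 4.56789]])

# per-array formatting
print(np.array2string(output, precision=3, suppress_small=True))

# or globally, affecting every subsequent print of an array
np.set_printoptions(precision=3, suppress=True)
print(output)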
Debug Pytorch Optimizer When I run optimizer.step on my code, I get this errorRuntimeError: sqrt not implemented for 'torch.LongTensor'C:\Program Files\Anaconda3\lib\site-packages\IPython\core\magic.py in <lambda>(f, *a, **k) 186 # but it's overkill for just that one bit of state. 187 def magic_deco(arg):--> 188 call = lambda f, *a, **k: f(*a, **k) 189 190 if callable(arg):C:\Program Files\Anaconda3\lib\site-packages\IPython\core\magics\execution.py in time(self, line, cell, local_ns) 1178 else: 1179 st = clock2()-> 1180 exec(code, glob, local_ns) 1181 end = clock2() 1182 out = None<timed exec> in <module>()C:\Program Files\Anaconda3\lib\site-packages\torch\optim\adam.py in step(self, closure) 98 denom = max_exp_avg_sq.sqrt().add_(group['eps']) 99 else:--> 100 denom = exp_avg_sq.sqrt().add_(group['eps']) 101 102 bias_correction1 = 1 - beta1 ** state['step']RuntimeError: sqrt not implemented for 'torch.LongTensor'I am using my own loss function. My question is how will I debug this error? Is there a quick way to see the type of all my variables? I am manually doing it and all of them are type float (including the output of my custom loss). I can't figure out why we are even getting an error related to a LongTensor. How does the optimizer.step function work in PyTorch?Just in case, below is most of the code.This is the model:class LSTM(nn.Module): def __init__(self, mel_channels=40, frames=81, hidden_dim=768, proj_dim=256): super(LSTM, self).__init__() self.hidden_dim = hidden_dim self.mel_channels = mel_channels self.frames = frames self.proj_dims = proj_dim weight = torch.tensor([10]) bias = torch.tensor([-5]) self.w = nn.Parameter(weight) self.b = nn.Parameter(bias) # The LSTM takes word embeddings as inputs, and outputs hidden states # with dimensionality hidden_dim. self.lstm1 = nn.LSTM(mel_channels, hidden_dim, batch_first=False) print("here1") self.lstm2 = nn.LSTM(proj_dim, hidden_dim, batch_first=False) self.lstm3 = nn.LSTM(proj_dim, hidden_dim, batch_first=False) self.lstms = [self.lstm1, self.lstm2, self.lstm3] self.proj1 = nn.Linear(hidden_dim, proj_dim) self.proj2 = nn.Linear(hidden_dim, proj_dim) self.proj3 = nn.Linear(hidden_dim, proj_dim) self.projs = [self.proj1, self.proj2, self.proj3] def init_states(self, batchsize): # Before we've done anything, we dont have any hidden state. # Refer to the Pytorch documentation to see exactly # why they have this dimensionality. 
# The axes semantics are (num_layers, minibatch_size, hidden_dim) return [(torch.zeros(1, batchsize, self.hidden_dim), torch.zeros(1, batchsize, self.hidden_dim)), (torch.zeros(1, batchsize, self.hidden_dim), torch.zeros(1, batchsize, self.hidden_dim)), (torch.zeros(1, batchsize, self.hidden_dim), torch.zeros(1, batchsize, self.hidden_dim)), ] def forward(self, inputs, states=None): time, batchsize, inputdim = list(inputs.shape) if states is None: states = self.init_states(batchsize) output = inputs print(output.type()) for i in range(3): print(output.type()) output, state = self.lstms[i](output, states[i]) output = self.projs[i](output) # perform normalization on this output here output = output[-1] print(output.type()) output = F.normalize(output, p=2, dim=-1) print(output.type()) self.state = state print(output.type()) return output def get_w(self): print(get_w.type()) return(self.w) def get_b(self): print(get_b.type()) return(self.b) def get_state(self): print(get_state()) return(self.state)This is the custom loss:class CustomLoss(_Loss): def __init__(self, size_average=True, reduce=True): super(CustomLoss, self).__init__(size_average, reduce) def forward(self, S, N, M, type='softmax',): return self.loss_cal(S, N, M, type) def loss_cal(self, S, N, M, type="softmax",): self.A = torch.cat([S[i * M:(i + 1) * M, i:(i + 1)] for i in range(N)], dim=0) if type == "softmax": self.B = torch.log(torch.sum(torch.exp(S.float()), dim=1, keepdim=True) + 1e-8) total = torch.abs(torch.sum(self.A - self.B)) else: raise AssertionError("loss type should be softmax or contrast !") return totalFinally, this is the main filemodel=LSTM()optimizer = optim.Adam(list(model.parameters()), lr=LEARNING_RATE)model = model.to(device)best_loss = 100.generator = SpeakerVerificationDataset()dataloader = DataLoader(generator, batch_size=4, shuffle=True, num_workers=0)loss_history = []update_counter = 1for epoch in range(NUM_EPOCHS): print("Epoch # : ", epoch + 1) for step in range(STEPS_PER_EPOCH): # get batch dataset for i_batch, sample_batched in enumerate(dataloader): print(sample_batched['MelData'].size()) inputs = sample_batched['MelData'].float() inputs=sample_batched['MelData'].view(180, M*N, 40).float() print((inputs.size())) inputs = inputs #print(here) # remove previous gradients optimizer.zero_grad() # get gradients and loss at this iteration #predictions,state,w,b = model(inputs) predictions = model(inputs) w = model.w b = model.b predictions = similarity(output=predictions,w=w,b=b) #loss = CustomLoss() S = predictions loss_func = CustomLoss() loss = loss_func.loss_cal(S=S,N=N,M=M) loss.backward() # update the weights print("start optimizing") optimizer.step() loss_history.append(loss.item()) print(update_counter, ":", loss_history[-1]) update_counter += 1 print() # save the weights torch.save(model.state_dict(), CHECKPOINT_PATH) print("Saving weights") print()print()
The error comes from here:weight = torch.tensor([10])bias = torch.tensor([-5])self.w = nn.Parameter(weight)self.b = nn.Parameter(bias)torch.tensor([10]) infers an integer dtype (a LongTensor), so these parameters, and the Adam optimizer state built from them, are integer tensors, for which sqrt is not implemented. It had to be changed toweight = torch.tensor([10.0])bias = torch.tensor([-5.0])self.w = nn.Parameter(weight)self.b = nn.Parameter(bias)
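To answer the debugging part of the question, a quick way to see the dtype of every parameter (and spot a stray LongTensor) is to iterate over named_parameters; a minimal sketch:

for name, param in model.named_parameters():
    # integer parameters show up as torch.int64 instead of torch.float32
    print(name, param.dtype)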
Create a line graph per bin in Python 3 I have a dataframe called 'games':Game_id Goals P_value 1 2 0.4 2 3 0.321 45 0 0.64I need to split the P value to 0.05 steps, bin the rows per P value and than create a line graph that shows the sum per p value.What I currently have:games.set_index('p value', inplace=True)games.sort_index()np.cumsum(games['goals']).plot()But I get this:No matter what I tried, I couldn't group the P values and show the sum of goals per P value.. I also tried to use matplotlib.pyplot but than I couldn't use the cumsum function..
If I understood you correctly, you want to have discrete steps in the p-value of width 0.05 and show the cumulative sum?import pandas as pdimport numpy as npimport matplotlib.pyplot as plt# create some random example datadf = pd.DataFrame({ 'goals': np.random.poisson(3, size=1000), 'p_value': np.random.uniform(0, 1, size=1000)})# define binning in p-valuebin_edges = np.arange(0, 1.025, 0.05)bin_center = 0.5 * (bin_edges[:-1] + bin_edges[1:])bin_width = np.diff(bin_edges)# find the p_value bin, each row belongs to# 0 is underflow, len(edges) is overflow bindf['bin'] = np.digitize(df['p_value'], bins=bin_edges)# get the number of goals per p_value bingoals_per_bin = df.groupby('bin')['goals'].sum()print(goals_per_bin)# not every bin might be filled, so we will use pandas index# matching tbinned = pd.DataFrame({ 'center': bin_center, 'width': bin_width, 'goals': np.zeros(len(bin_center))}, index=np.arange(1, len(bin_edges)))binned['goals'] = goals_per_binplt.step( binned['center'], binned['goals'], where='mid',)plt.xlabel('p-value')plt.ylabel('goals')plt.show()
Regression plot is wrong (python) So my program reads MPG vs weight relationship and draws a graph of what it is suppose to look like but as you can see the graph is not looking right. import numpy as npimport pandas as pdimport matplotlib.pyplot as plt#read txt filedataframe= pd.read_table('auto_data71.txt',delim_whitespace=True,names=['MPG','Cylinder','Displacement','Horsepower','Weight','acceleration','Model year','Origin','Car Name'])dataframe.dropna(inplace=True)#filter the un-necessary columnsX = dataframe.iloc[:,4:5].valuesY = dataframe.iloc[:,0:1].values#scale datafrom sklearn.preprocessing import StandardScalersc_X = StandardScaler()sc_Y= StandardScaler()X = sc_X.fit_transform(X)Y = sc_Y.fit_transform(Y)#split data into train and test setfrom sklearn.model_selection import train_test_splitx_train,x_test,y_train,y_test = train_test_split(X,Y,test_size=0.2)#create modelfrom sklearn.preprocessing import PolynomialFeaturesfrom sklearn.linear_model import LinearRegressionpoly_reg = PolynomialFeatures(degree=2)poly_X = poly_reg.fit_transform(x_train)poly_reg.fit(poly_X,y_train)regressor2= LinearRegression()regressor2.fit(poly_X,y_train)#graphresult = regressor2.predict(poly_X)plt.scatter(x_train,y_train,color='red')plt.plot(x_train, result,color='blue')plt.show()the output is this:As you can see the regression line does not look right. Any help will be much appreciated.#auto_data.txt(part of data...)****NOTE:i am only using weight and mpg column for this codefile(mpg,cylinder,distance,horsepower,weight,acceleration,year,origin,name)27.0 4. 97.00 88.00 2130. 14.5 71. 3. "datsun pl510"28.0 4. 140.0 90.00 2264. 15.5 71. 1. "chevrolet vega 2300"25.0 4. 113.0 95.00 2228. 14.0 71. 3. "toyota corona"25.0 4. 98.00 NA 2046. 19.0 71. 1. "ford pinto"NA 4. 97.00 48.00 1978. 20.0 71. 2. "volkswagen super beetle 117"19.0 6. 232.0 100.0 2634. 13.0 71. 1. "amc gremlin"16.0 6. 225.0 105.0 3439. 15.5 71. 1. "plymouth satellite custom"17.0 6. 250.0 100.0 3329. 15.5 71. 1. "chevrolet chevelle malibu"19.0 6. 250.0 88.00 3302. 15.5 71. 1. "ford torino 500"18.0 6. 232.0 100.0 3288. 15.5 71. 1. "amc matador"14.0 8. 350.0 165.0 4209. 12.0 71. 1. "chevrolet impala"14.0 8. 400.0 175.0 4464. 11.5 71. 1. "pontiac catalina brougham"14.0 8. 351.0 153.0 4154. 13.5 71. 1. "ford galaxie 500"14.0 8. 318.0 150.0 4096. 13.0 71. 1. "plymouth fury iii"12.0 8. 383.0 180.0 4955. 11.5 71. 1. "dodge monaco (sw)"13.0 8. 400.0 170.0 4746. 12.0 71. 1. "ford country squire (sw)"13.0 8. 400.0 175.0 5140. 12.0 71. 1. "pontiac safari (sw)"18.0 6. 258.0 110.0 2962. 13.5 71. 1. "amc hornet sportabout (sw)"
You need to sort the values before plotting.DATA: https://files.fm/u/2g5dxyb4Use this:import numpy as npimport pandas as pdimport matplotlib.pyplot as pltfrom sklearn.preprocessing import StandardScalerfrom sklearn.preprocessing import PolynomialFeaturesfrom sklearn.linear_model import LinearRegressionfrom sklearn.model_selection import train_test_splitdata = pd.read_csv('data.txt', delim_whitespace=True)data.dropna(inplace=True)X = data['weight'].valuesY = data['mpg'].valuesX = X.reshape(-1, 1)Y = Y.reshape(-1, 1)#scale datafrom sklearn.preprocessing import StandardScalersc_X = StandardScaler()sc_Y= StandardScaler()X = sc_X.fit_transform(X)Y = sc_Y.fit_transform(Y)#split data into train and test setfrom sklearn.model_selection import train_test_splitx_train,x_test,y_train,y_test = train_test_split(X,Y,test_size=0.2)#create modelfrom sklearn.preprocessing import PolynomialFeaturesfrom sklearn.linear_model import LinearRegressionpoly_reg = PolynomialFeatures(degree=2)poly_X = poly_reg.fit_transform(x_train)poly_reg.fit(poly_X,y_train)regressor2= LinearRegression()regressor2.fit(poly_X,y_train)#graphresult = regressor2.predict(np.sort(poly_X,axis=0))plt.scatter(x_train,y_train,color='red')plt.plot(np.sort(x_train, axis = 0), result,color='blue')plt.show()
Keras delayed data augmentation I am trying to apply a custom image augmentation technique in Keras. I am using fit_generator and a generator to yield images. I would like to start applying the image augmentation only after say 20 epochs (So the first 20 epochs would not have any data augmentation). Unfortunately the generator does not have a notion of epochs. Any idea how to do this?
The easiest way to do this is train for 20 epochs with no realtime augmentation (use the Keras ImageDataGenerator with no args) and save your models using a ModelCheckpoint callback. Then reload the model and continue training with RA (use an ImageDataGenerator with the transforms of your choice).If you want that behavior in one step, you can make your own version of ImageDataGenerator. You just need to make the following changes:def __init__(self, batch_counter=0, # count the batches elapsed steps_per_epoch=0, # pass steps per epoch into the custom ImageDataGenerator on init n_epoch = 0, # count the epochs elapsedThen, just modify the NumpyArrayIterator in your ImageDataGenerator to increment these variables and only call random_transform after your n_epochs have elapsed. E.g. self.image_data_generator.batch_counter += 1
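A minimal sketch of the two-stage approach with the standard Keras API; x_train, y_train and the augmentation arguments are placeholders to adjust to your data and transforms:

from keras.preprocessing.image import ImageDataGenerator

batch_size = 32

# stage 1: first 20 epochs without augmentation
plain_gen = ImageDataGenerator().flow(x_train, y_train, batch_size=batch_size)
model.fit_generator(plain_gen,
                    steps_per_epoch=len(x_train) // batch_size,
                    epochs=20)

# stage 2: continue from epoch 20 with augmentation switched on
aug_gen = ImageDataGenerator(rotation_range=15,
                             width_shift_range=0.1,
                             horizontal_flip=True).flow(x_train, y_train, batch_size=batch_size)
model.fit_generator(aug_gen,
                    steps_per_epoch=len(x_train) // batch_size,
                    epochs=40,
                    initial_epoch=20)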
How can I multiply column of the int numpy array to the float digit and stays in int? I have a numpy array: >>> b array([[ 2, 2], [ 6, 4], [10, 6]])I want to multiply first column by float number, and as result I need int number, because when I doing:>>> b[:,0] *= 2.1It says:TypeError: Cannot cast ufunc multiply output from dtype('float64') to dtype('int64') with casting rule 'same_kind'I need the array that looks like:array([[ 4, 2], [12, 4], [21, 6]])
@Umang Gupta gave a solution to your problem. I was curious myself as to why this worked, so I'm posting what I found as additional context. FWIW this question has already been asked and answered here, but that answer also doesn't really walk through what's happening as much as I would have liked, so here's my attempt:Using the *= operator calls the __imul__() special method for in-place multiplication of Numpy ndarrays, which in turn calls the universal function (ufunc) multiply(). There are two arguments in multiply() which are relevant here: out and casting. The out argument specifies the output (along with its type). In the in-place multiplication operator, out is set to self, i.e. the ndarray object which called the multiplication operation. In particular, the exact call for *= looks like this:ufunc(self, other, out=(self,))^ where ufunc = multiply, self = b (ndarray, type int64, and other = 2.1 (scalar, type float)The casting argument, however, determines the rules for what kind of data type casting is permitted as a result of an operation. As of Numpy 1.10, the default value for casting is same_kind, which means: only safe casts or casts within a kind, like float64 to float32, are allowed Since our ufunc call didn't specify a value for the casting argument, the default (same_kind) is used - but this causes problems because we have specified out as having an int64 dtype, which is not the same kind as the output of the int-by-float multiplication. With same_kind casting, the float result of the operation can't be converted to int. That's why we see this error. We can replicate this error using multiply() explicitly:np.multiply(b, 2.1, out=b)TypeError: Cannot cast ufunc multiply output from dtype('float64') to dtype('int64') with casting rule 'same_kind' It is possible to relax the casting requirement of multiply(), by setting the argument value to "unsafe". Then, when out is also set, the output is coerced to the type of out, regardless of whether it's the same kind or not (if possible):np.multiply(b, 2.1, out=b, casting="unsafe")# specifying int output and allowing casting to be "unsafe" allows re-conversion to intarray([[ 4, 4], [12, 8], [21, 12]])Using the normal assignment operator to update b[:,0], on the other hand, is ok. That's what @Umang Gupta's solution does.With: b[:,0] = b[:,0]* 2.1* calls the multiply ufunc, just like with *=. But since it isn't calling the inplace version of the operation, there's no out argument specified, and so no set type for the output. Then, standard typecasting allows ints to upcast to floats:np.multiply(b, 2.1)# float outputarray([[ 4.2, 4.2], [ 12.6, 8.4], [ 21. , 12.6]])Then the normal assignment operator = takes the output of the multiplication and stores it in b[:,0]. Per the Numpy docs on assigning values to indexed arrays: Note that assignments may result in changes if assigning higher types to lower types (like floats to ints)So the problem lies in *= operator's automatic setting of the out argument without changing the casting argument from same_kind to unsafe. (Not that this is a bug, just that this is why you are getting an error.) And the accepted solution gets around that by leveraging automatic "downcasting" properties of assignment in Numpy. Hope that helps! (Also, Numpy pros, please feel free to correct any misunderstandings on my part.)
Keras model params are all "NaN"s after reloading I use transfer learning with Resnet50. I create a new model out of the pretrained model provided by Keras (the 'imagenet').After training my new model, I save it as following:# Save the Siamese Network architecturesiamese_model_json = siamese_network.to_json()with open("saved_model/siamese_network_arch.json", "w") as json_file: json_file.write(siamese_model_json)# save the Siamese Network model weightssiamese_network.save_weights('saved_model/siamese_model_weights.h5')And later, I reload it as following to make some predictions:json_file = open('saved_model/siamese_network_arch.json', 'r')loaded_model_json = json_file.read()json_file.close()siamese_network = model_from_json(loaded_model_json)# load weights into new modelsiamese_network.load_weights('saved_model/siamese_model_weights.h5')Then I check if the weights look reasonable as following (from 1 of the layers):print("bn3d_branch2c:\n", siamese_network.get_layer('model_1').get_layer('bn3d_branch2c').get_weights())If I train my network for 1 epoch only, I see reasonable values there..But if I train my model for 18 epochs (which takes 5-6 hours as I have a very slow computer), I just see NaN values as following:bn3d_branch2c: [array([nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, ...What is the trick here? ADDENDUM 1:Here is how I create my model. Here, I have a triplet_loss function that I will need later on.def triplet_loss(inputs, dist='euclidean', margin='maxplus'): anchor, positive, negative = inputs positive_distance = K.square(anchor - positive) negative_distance = K.square(anchor - negative) if dist == 'euclidean': positive_distance = K.sqrt(K.sum(positive_distance, axis=-1, keepdims=True)) negative_distance = K.sqrt(K.sum(negative_distance, axis=-1, keepdims=True)) elif dist == 'sqeuclidean': positive_distance = K.sum(positive_distance, axis=-1, keepdims=True) negative_distance = K.sum(negative_distance, axis=-1, keepdims=True) loss = positive_distance - negative_distance if margin == 'maxplus': loss = K.maximum(0.0, 2 + loss) elif margin == 'softplus': loss = K.log(1 + K.exp(loss)) returned_loss = K.mean(loss) return returned_lossAnd here is how I construct my model from start to end. I give the complete code to give the exact picture.model = ResNet50(weights='imagenet')# Remove the last layer (Needed to later be able to create the Siamese Network model)model.layers.pop()# First freeze all layers of ResNet50. 
Transfer Learning to be applied.for layer in model.layers: layer.trainable = False# All Batch Normalization layers still need to be trainable so that the "mean"# and "standard deviation (std)" params can be updated with the new training datamodel.get_layer('bn_conv1').trainable = Truemodel.get_layer('bn2a_branch2a').trainable = Truemodel.get_layer('bn2a_branch2b').trainable = Truemodel.get_layer('bn2a_branch2c').trainable = Truemodel.get_layer('bn2a_branch1').trainable = Truemodel.get_layer('bn2b_branch2a').trainable = Truemodel.get_layer('bn2b_branch2b').trainable = Truemodel.get_layer('bn2b_branch2c').trainable = Truemodel.get_layer('bn2c_branch2a').trainable = Truemodel.get_layer('bn2c_branch2b').trainable = Truemodel.get_layer('bn2c_branch2c').trainable = Truemodel.get_layer('bn3a_branch2a').trainable = Truemodel.get_layer('bn3a_branch2b').trainable = Truemodel.get_layer('bn3a_branch2c').trainable = Truemodel.get_layer('bn3a_branch1').trainable = Truemodel.get_layer('bn3b_branch2a').trainable = Truemodel.get_layer('bn3b_branch2b').trainable = Truemodel.get_layer('bn3b_branch2c').trainable = Truemodel.get_layer('bn3c_branch2a').trainable = Truemodel.get_layer('bn3c_branch2b').trainable = Truemodel.get_layer('bn3c_branch2c').trainable = Truemodel.get_layer('bn3d_branch2a').trainable = Truemodel.get_layer('bn3d_branch2b').trainable = Truemodel.get_layer('bn3d_branch2c').trainable = Truemodel.get_layer('bn4a_branch2a').trainable = Truemodel.get_layer('bn4a_branch2b').trainable = Truemodel.get_layer('bn4a_branch2c').trainable = Truemodel.get_layer('bn4a_branch1').trainable = Truemodel.get_layer('bn4b_branch2a').trainable = Truemodel.get_layer('bn4b_branch2b').trainable = Truemodel.get_layer('bn4b_branch2c').trainable = Truemodel.get_layer('bn4c_branch2a').trainable = Truemodel.get_layer('bn4c_branch2b').trainable = Truemodel.get_layer('bn4c_branch2c').trainable = Truemodel.get_layer('bn4d_branch2a').trainable = Truemodel.get_layer('bn4d_branch2b').trainable = Truemodel.get_layer('bn4d_branch2c').trainable = Truemodel.get_layer('bn4e_branch2a').trainable = Truemodel.get_layer('bn4e_branch2b').trainable = Truemodel.get_layer('bn4e_branch2c').trainable = Truemodel.get_layer('bn4f_branch2a').trainable = Truemodel.get_layer('bn4f_branch2b').trainable = Truemodel.get_layer('bn4f_branch2c').trainable = Truemodel.get_layer('bn5a_branch2a').trainable = Truemodel.get_layer('bn5a_branch2b').trainable = Truemodel.get_layer('bn5a_branch2c').trainable = Truemodel.get_layer('bn5a_branch1').trainable = Truemodel.get_layer('bn5b_branch2a').trainable = Truemodel.get_layer('bn5b_branch2b').trainable = Truemodel.get_layer('bn5b_branch2c').trainable = Truemodel.get_layer('bn5c_branch2a').trainable = Truemodel.get_layer('bn5c_branch2b').trainable = Truemodel.get_layer('bn5c_branch2c').trainable = True# Used when compiling the siamese networkdef identity_loss(y_true, y_pred): return K.mean(y_pred - 0 * y_true) # Create the siamese networkx = model.get_layer('flatten_1').output # layer 'flatten_1' is the last layer of the modelmodel_out = Dense(128, activation='relu', name='model_out')(x)model_out = Lambda(lambda x: K.l2_normalize(x,axis=-1))(model_out)new_model = Model(inputs=model.input, outputs=model_out)anchor_input = Input(shape=(224, 224, 3), name='anchor_input')pos_input = Input(shape=(224, 224, 3), name='pos_input')neg_input = Input(shape=(224, 224, 3), name='neg_input')encoding_anchor = new_model(anchor_input)encoding_pos = new_model(pos_input)encoding_neg = new_model(neg_input)loss = 
Lambda(triplet_loss)([encoding_anchor, encoding_pos, encoding_neg])siamese_network = Model(inputs = [anchor_input, pos_input, neg_input], outputs = loss) # Note that the output of the model is the # return value from the triplet_loss function abovesiamese_network.compile(optimizer=Adam(lr=.0001), loss=identity_loss)One thing to notice is that I make all batch normalization layers "trainable" so that BN related params can be updated with my training data. This creates a lot of lines but I could not find a shorter solution.
The solution is inspired by @Gurmeet Singh's recommendation above. Seemingly, the weights of the trainable layers became so big after a while during training that they were all set to NaN, which made me think that I was saving and reloading my models in the wrong way, but the problem was exploding gradients.I saw a similar issue in github discussions too, which can be checked out here: github.com/keras-team/keras/issues/2378 At the bottom of that thread in github, it is recommended to use lower learning rates to avoid the problem.In this link (Keras ML library: how to do weight clipping after gradient updates? TensorFlow backend), 2 solutions are discussed:- using the clipvalue parameter in the optimizer, which simply cuts the calculated gradient values as configured. But this is not the recommended solution to go for. (Explained in the other thread.)- and the second is to use the clipnorm parameter, which simply clips the calculated gradient values when their L2 norm exceeds the value given by the user.I also thought about using input normalization (to avoid exploding gradients) but then figured out that it is already done in the preprocess_input(..) function. (Check this link for details: https://www.tensorflow.org/api_docs/python/tf/keras/applications/resnet50/preprocess_input) It is though possible to set the mode parameter to "tf" (set to "caffe" by default otherwise), which could further help (because mode="tf" scales pixels between -1 and 1), but I did not try it.In summary, I changed 2 things when compiling the model that will be trained. The changed line is the following:Before the change:siamese_network.compile(optimizer=Adam(lr=.0001), loss=identity_loss)After the change:siamese_network.compile(optimizer=Adam(lr=.00004, clipnorm=1.), loss=identity_loss)1) Used a smaller learning rate to make gradient updates a bit smaller2) Used the clipnorm parameter to normalize the calculated gradients and clip them.And I trained my network again for 10 epochs. The loss decreases as desired, but more slowly now. And I do not experience any problems when saving and reloading my model. (At least after 10 epochs; it takes time on my computer.)Note that I set the value of clipnorm to 1. This means that the L2 norm of the gradients is calculated first and, if it exceeds 1, the gradient is clipped. I assume this is a hyperparameter that can be tuned, affecting the time needed to train the model while helping to avoid the exploding gradients problem.