pandas - how can I replace rows in a dataframe

I am new to Python and am trying to replace rows. I have a dataframe such as:

    X  Y
    1  a
    2  d
    3  c
    4  a
    5  b
    6  e
    7  a
    8  b

I have two questions:

1- How can I swap the 2nd row with the 5th, such as:

    X  Y
    1  a
    5  b
    3  c
    4  a
    2  d
    6  e
    7  a
    8  b

2- How can I put the 6th row above the 3rd row, such as:

    X  Y
    1  a
    2  d
    6  e
    3  c
    4  a
    5  b
    7  a
    8  b
First use DataFrame.iloc; Python counts from 0, so to select the second row use 1 and for the fifth use 4:

    df.iloc[[1, 4]] = df.iloc[[4, 1]]
    print(df)
       X  Y
    0  1  a
    1  5  b
    2  3  c
    3  4  a
    4  2  d
    5  6  e
    6  7  a
    7  8  b

For the second question, rename the index of the row to move to the index of the row it should follow (here rename 5 to 1) and sort the index with the stable sort mergesort:

    df = df.rename({5: 1}).sort_index(kind='mergesort', ignore_index=True)
    print(df)
       X  Y
    0  1  a
    1  2  d
    2  6  e
    3  3  c
    4  4  a
    5  5  b
    6  7  a
    7  8  b
What do the coordinates of verts from Marching Cubes mean?

I have a 3D generated voxel model of a vehicle and the coordinates of the voxels are in the vehicle reference frame. The origin is at the center of the floor. It looks like this:

    array([[-2.88783681, -0.79596956,  0.        ],
           [-2.8752784 , -0.79596956,  0.        ],
           [-2.86271998, -0.79596956,  0.        ],
           ...,
           [ 2.83880176,  0.89941685,  1.98423003],
           [ 2.85136017,  0.89941685,  1.98423003],
           [ 2.86391859,  0.89941685,  1.98423003]])

Then I create a meshgrid of 0s and 1s:

    ux = np.unique(voxels[:, 0])
    uy = np.unique(voxels[:, 1])
    uz = np.unique(voxels[:, 2])
    X, Y, Z = np.meshgrid(ux, uy, uz)
    V = np.zeros(X.shape)
    N = voxels.shape[0]
    for ii in range(N):
        ix = ux == voxels[ii, 0]
        iy = uy == voxels[ii, 1]
        iz = uz == voxels[ii, 2]
        V[iy, ix, iz] = 1

Then I call the marching cubes algorithm to generate a mesh of the voxel model:

    marching_cubes = measure.marching_cubes_lewiner(V, 0, spacing=(voxel_size, voxel_size, voxel_size))
    verts = marching_cubes[0]
    faces = marching_cubes[1]
    normals = marching_cubes[2]

When I print out the vertices, the coordinates look like this:

    array([[2.78852894e-18, 4.39544627e-01, 3.39077284e-01],
           [1.25584179e-02, 4.39544627e-01, 3.26518866e-01],
           [1.25584179e-02, 4.26986209e-01, 3.39077284e-01],
           [1.72050325e+00, 1.26840021e+00, 2.76285194e-01],
           [1.72050325e+00, 1.26840021e+00, 2.88843612e-01],
           [1.72050325e+00, 1.26840021e+00, 3.01402030e-01]])

In the documentation it says that verts is "Spatial coordinates for V unique mesh vertices". But what do the coordinates mean? In what coordinate system are they? I plan on projecting the mesh onto the image of the vehicle I generated the voxel model from. How do I do the coordinate transformation in that case? (I've already successfully projected the voxels onto the image.)
verts are just points in space. Essentially each vert is a corner of some triangle (usually of more than one). To know what the actual triangles are, look at faces, which will be something like:

    [(v1, v2, v3), (v1, v4, v5), ...]

Each tuple in the list holds 3 indices into verts. For example:

    verts[v1], verts[v2], verts[v3]

is a triangle in space.
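A rough, hedged sketch of how verts and faces could be used in the asker's setting (assumptions: verts/faces come from skimage's marching cubes as in the question, the volume V was built from the sorted unique voxel coordinates ux, uy, uz, and the voxel grid is evenly spaced); the exact offset and axis order depend on how V was filled, so treat this as an illustration rather than a drop-in transform:

    import numpy as np

    # Each row of faces holds 3 indices into verts, so fancy indexing
    # gives one (3, 3) array of corner points per triangle.
    triangles = verts[faces]          # shape (n_faces, 3, 3)

    # verts are measured from the corner of the volume (in units of `spacing`),
    # so adding the smallest voxel coordinate along each axis is one way to move
    # them back toward the vehicle reference frame.
    origin = np.array([ux.min(), uy.min(), uz.min()])
    verts_vehicle = verts + origin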
Python Pandas - How do I subset time intervals into smaller ones?

Let's imagine I have a time-series dataframe of temperature sensor data that goes by 30 min intervals. How do I subset each 30 min interval into smaller 5 min intervals while accounting for the temperature drop between each interval? I imagine that something like this could work:

    30 min intervals:
      interval 1: temp = 30
      interval 2: temp = 25

    5 min intervals:
      interval 1: temp = 30
      interval 2: temp = 29
      interval 3: temp = 28
      interval 4: temp = 27
      interval 5: temp = 26
      interval 6: temp = 25
I would do it with a resample of the data frame to a finer time resolution ("6T" in this case, with T meaning minutes). This creates new rows, filled with NaN, for the missing time steps, which you can then fill somehow; for what you describe I think a linear interpolation is enough. Here is a simple example that I think matches the data you describe:

    import pandas as pd

    df = pd.DataFrame({"temp": [30, 25, 20, 18]},
                      index=pd.date_range("2021-12-01 12:00:00", "2021-12-01 13:59:00", freq="30T"))

    # This resample preserves your values at their original time indexes and creates
    # new rows, full of NaN, for the intermediate datetimes.
    # .last() just selects the value for each time step; you could also use mean, max
    # or min, as there is only one value per time step, so the result is the same.
    df = df.resample("6T").last()

    # It depends on how you want to model the change over time, but as you described
    # a linear variation, a simple linear interpolation with interpolate() works:
    df.interpolate()
Convert HTTP text response to pandas dataframe

I want to convert the text below into a pandas dataframe. Is there a way I can use a pre-built or built-in pandas parser to convert it? I can write a custom parsing function, but want to know if there is a pre-built and/or fast solution. In this example, the dataframe should end up with two rows, one each for ABC and PQR:

    {
      "data": [
        { "ID": "ABC", "Col1": "ABC_C1", "Col2": "ABC_C2" },
        { "ID": "PQR", "Col1": "PQR_C1", "Col2": "PQR_C2" }
      ]
    }
You've listed everything you need as tags. Use json.loads to get a dict from the string:

    import json
    import pandas as pd

    d = json.loads('''{
      "data": [
        { "ID": "ABC", "Col1": "ABC_C1", "Col2": "ABC_C2" },
        { "ID": "PQR", "Col1": "PQR_C1", "Col2": "PQR_C2" }
      ]
    }''')
    df = pd.DataFrame(d['data'])
    print(df)

Output:

        ID    Col1    Col2
    0  ABC  ABC_C1  ABC_C2
    1  PQR  PQR_C1  PQR_C2
numpy to spark error: TypeError: Can not infer schema for type

While trying to convert a numpy array into a Spark DataFrame, I receive a Can not infer schema for type: <class 'numpy.float64'> error. The same thing happens with numpy.int64 arrays. Example:

    df = spark.createDataFrame(numpy.arange(10.))
    TypeError: Can not infer schema for type: <class 'numpy.float64'>
Or without using pandas:

    df = spark.createDataFrame([(float(i),) for i in numpy.arange(10.)])
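For completeness, a sketch of the pandas route this answer alludes to (assuming an active SparkSession named spark, as in the question): wrapping the numpy array in a pandas DataFrame gives Spark a schema it can infer.

    import numpy as np
    import pandas as pd

    arr = np.arange(10.)
    # Spark's createDataFrame accepts a pandas DataFrame and infers the schema from it.
    df = spark.createDataFrame(pd.DataFrame(arr, columns=["value"]))
    df.show()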
Compare file names in a dataframe to files present in a directory and then fetch the row value of a different column

I have a pandas dataframe with 2 columns: id and the corresponding filename. I want to run a loop and check whether each filename in this dataframe is present in a specific directory. If it is present, I want to fetch the id and filename. I am trying the following code:

    x = df[["id", "filename_dir"]]
    directory = 'C:/users/'
    for filename in os.listdir(directory):
        if filename in x["filename_dir"]:
            id_1 = x["id"]
            print(id_1)

However, this gives me the complete list of ids and not the one corresponding to the filename present in the directory. I am new to Python, so apologies for this basic query.
id_1 = x["id"] assigns the whole x["id"] column to id_1, so the print statement prints the whole column every time it finds a matching file. Try instead:

    id_1 = x.id[x.filename_dir == filename]

(using the filename_dir column from your dataframe).
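Putting that fix into a complete loop, a minimal sketch (assuming df has the columns id and filename_dir from the question, and that directory is the folder to check):

    import os

    directory = 'C:/users/'
    x = df[["id", "filename_dir"]]

    present = set(os.listdir(directory))            # filenames actually on disk
    matches = x[x["filename_dir"].isin(present)]    # rows whose filename exists in the directory

    for _, row in matches.iterrows():
        print(row["id"], row["filename_dir"])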
Sparse Categorical CrossEntropy causing NAN loss So, I've been trying to implement a few custom losses, and so thought I'd start off with implementing SCE loss, without using the built in TF object. Here's the function I wrote for it.def custom_loss(y_true, y_pred): print(y_true, y_pred) return tf.cast(tf.math.multiply(tf.experimental.numpy.log2(y_pred[y_true[0]]), -1), dtype=tf.float32)y_pred is the set of probabilties, and y_true is the index of the correct one. This setup should work according to all that I've read, but it returns NAN loss.I checked if there's a problem with the training loop, but it works prefectly with the builtin losses.Could someone tell me what the problem is with this code?
You can replicate the SparseCategoricalCrossentropy() loss function as followsimport tensorflow as tfdef sparse_categorical_crossentropy(y_true, y_pred, clip=True): y_true = tf.convert_to_tensor(y_true, dtype=tf.int32) y_pred = tf.convert_to_tensor(y_pred, dtype=tf.float32) y_true = tf.one_hot(y_true, depth=y_pred.shape[1]) if clip == True: y_pred = tf.clip_by_value(y_pred, 1e-7, 1 - 1e-7) return - tf.reduce_mean(tf.math.log(y_pred[y_true == 1]))Note that the SparseCategoricalCrossentropy() loss function applies a small offset (1e-7) to the predicted probabilities in order to make sure that the loss values are always finite, see also this question.y_true = [1, 2]y_pred = [[0.05, 0.95, 0.0], [0.1, 0.8, 0.1]]print(tf.keras.losses.SparseCategoricalCrossentropy()(y_true, y_pred).numpy())print(sparse_categorical_crossentropy(y_true, y_pred, clip=True).numpy())print(sparse_categorical_crossentropy(y_true, y_pred, clip=False).numpy())# 1.1769392# 1.1769392# 1.1769392y_true = [1, 2]y_pred = [[0.0, 1.0, 0.0], [0.0, 1.0, 0.0]]print(tf.keras.losses.SparseCategoricalCrossentropy()(y_true, y_pred).numpy())print(sparse_categorical_crossentropy(y_true, y_pred, clip=True).numpy())print(sparse_categorical_crossentropy(y_true, y_pred, clip=False).numpy())# 8.059048# 8.059048# inf
Apply condition on a column after groupby in pandas and then aggregate to get 2 max value data field bcorr0 A cs1 0.81 A cs2 0.92 A cs3 0.73 A pq1 0.44 A pq2 0.65 A pq3 0.56 B cs1 0.87 B cs2 0.98 B cs3 0.79 B pq1 0.410 B pq2 0.611 B pq3 0.5For every data A and B in data column, segregate the cs & pq fields from field column, and then aggregate to get 2 max value of bcorr.Sample result would be like:data field bcorr0 A cs1 0.81 A cs2 0.94 A pq2 0.65 A pq3 0.56 B cs1 0.87 B cs2 0.910 B pq2 0.611 B pq3 0.5For this, one of option is to do this while creating the list of records, which obviously will have high complexity.second, i want to do this with pandas dataframe, where i used groupby on data column, then applying startswith to get the source field and then apply max
First, extract common part of each field (first letters) then sort values (highest values go bottom). Finally group by data column and field series then keep the two last values (the highest):field = df['field'].str.extract('([^\d]+)', expand=False)out = df.sort_values('bcorr').groupby(['data', field]).tail(2).sort_index()print(out)# Output data field bcorr0 A cs1 0.81 A cs2 0.94 A pq2 0.65 A pq3 0.56 B cs1 0.87 B cs2 0.910 B pq2 0.611 B pq3 0.5If you field have only two fixed letters to determine the field, you can use df['field'].str[:2] instead of df['field'].str.extract(...).
Best way to remove specific words from column in pandas dataframe? I'm working with a huge set of data that I can't work with in excel so I'm using Pandas/Python, but I'm relatively new to it. I have this column of book titles that also include genres, both before and after the title. I only want the column to contain book titles, so what would be the easiest way to remove the genres?Here is an example of what the column contains:Book LabelsScience Fiction | Drama | DuneThriller | Mystery | The Day I DiedThriller | Razorblade Tears | Family | DramaComedy | How To Marry Keanu Reeves In 90 Days | Drama...So above, the book titles would be Dune, The Day I Died, Razorblade Tears, and How To Marry Keanu Reeves In 90 Days, but as you can see the genres precede as well as succeed the titles.I was thinking I could create a list of all the genres (as there are only so many) and remove those from the column along with the "|" characters, but if anyone has suggestions on a simpler way to remove the genres and "|" key, please help me out.
It is an enhancement to @tdy Regex solution. The original regex Family|Drama will match the words "Family" and "Drama" in the string. If the book title contains the words in gernes, the words will be removed as well.Supposed that the labels are separated by " | ", there are three match conditions we want to remove.Gerne at start of string. e.g. Drama | ...Gerne in the middle. e.g. ... | Drama | ...Gerne at end of string. e.g. ... | DramaUse regex (^|\| )(?:Family|Drama)(?=( \||$)) to match one of three conditions. Note that | Drama | Family has 2 overlapped matches, here I use ?=( \||$) to avoid matching once only. See this problem [Use regular expressions to replace overlapping subpatterns] for more details.>>> genres = ["Family", "Drama"]>>> df# Book Labels# 0 Drama | Drama 123 | Family# 1 Drama 123 | Drama | Family# 2 Drama | Family | Drama 123# 3 123 Drama 123 | Family | Drama# 4 Drama | Family | 123 Drama>>> re_str = "(^|\| )(?:{})(?=( \||$))".format("|".join(genres))>>> df['Book Labels'] = df['Book Labels'].str.replace(re_str, "", regex=True)# 0 | Drama 123# 1 Drama 123# 2 | Drama 123# 3 123 Drama 123# 4 | 123 Drama>>> df["Book Labels"] = df["Book Labels"].str.strip("| ")# 0 Drama 123# 1 Drama 123# 2 Drama 123# 3 123 Drama 123# 4 123 Drama
Changing column label Python plotly? How to change column titles? First column title should say "4-Year" and 2nd column title "2-Year". I tried using label={} but kept getting an error.df = pd.read_csv('college_data.csv') df1 = df[df.years > 2] df2 = df[df.years < 3] #CUNY College Table fig = go.Figure(data=[go.Table( header=dict(values=list(df1[['college_name', 'college_name']]), fill_color='paleturquoise', font_color='gray', align='left', height = 50, font=dict(size=26), ), cells=dict(values=[df1.college_name, df2.college_name], height = 50, fill_color='lavender', align='left', font=dict(size=20), )) ]) fig.update_layout(title = "CUNY Colleges", width = 900, height = 1320, font_family='Palanquin', font=dict(size=30), showlegend = False) st.plotly_chart(fig)
Changevalues=list(df1[['college_name', 'college_name']]),tovalues=["4-year", "2-year"],e.g.fig = go.Figure(data=[go.Table( header=dict(values=["4-year", "2-year"], ...Calling list on a pandas DataFrame returns a list of the column names of that dataframe, so list(df1[['college_name', 'college_name']]) is essentially identical to ['college_name', 'college_name'].
Rename files in a folder using Python

I have various doc and pdf files in my folder (almost 1000 of them). I want to rename all the files. My folder structure looks like:

    nikita
    ----------abc.doc
    ----------des.doc
    ----------jj1.pdf

I want the names to start with NC<number>_. For example:

    nikita
    ----------NC1_abc.doc
    ----------NC2_des.doc
    ----------NC3_jj1.pdf

I have written the following code:

    import os
    import glob
    import pandas as pd

    os.chdir('C:\\Users\\EVM\\Nikita\\')
    print(os.getcwd())
    for count, f in enumerate(os.listdir()):
        f_name, f_ext = os.path.splitext(f)
        f_name = "NC" + str(count) + '_' + f_name
        new_name = f'{f_name}{f_ext}'
        os.rename(f, new_name)

But my output starts with NC0, not NC1:

    nikita
    ----------NC0_abc.doc
    ----------NC1_des.doc
    ----------NC2_jj1.pdf
The enumerate function accepts an optional start argument to declare the index you want to start with (see the documentation of enumerate for details). So if you add the argument like this:

    enumerate(os.listdir(), 1)

the output should start with NC1.
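For reference, a sketch of the question's loop with that start argument applied (same path as in the question; it is worth trying on a copy of the folder first, since renames are hard to undo):

    import os

    os.chdir('C:\\Users\\EVM\\Nikita\\')
    # Start counting at 1 so the first file becomes NC1_...
    for count, f in enumerate(os.listdir(), start=1):
        f_name, f_ext = os.path.splitext(f)
        os.rename(f, f"NC{count}_{f_name}{f_ext}")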
Creating categorical column based on multiple column values in groupby print(df.groupby(['Step1', 'Step2', 'Step3']).size().reset_index(name='Freq')) Step1 Step2 Step3 Freq0 6.0 17.6 28.60 1351 7.5 22.0 35.75 2552 10.5 30.8 50.05 1293 12.0 35.2 57.20 3694 13.5 39.6 64.35 2495 15.0 44.0 71.50 2466 16.5 48.4 78.65 2467 18.0 52.8 85.80 3698 21.0 61.6 100.10 3759 22.5 66.0 107.25 24910 25.5 74.8 121.55 123The 'Step1', 'Step2', 'Step3' columns are constant input values. There are 10 unique combinations of input values from these columns (shown in the groupby). I am looking to delete the individual 'Step1', 'Step2', 'Step3' columns and create a single column "Step Type" that has a letter that represents the unique combinations of input values from these columns.Desired output: Step Type Freq0 A 1351 B 2552 C 1293 D 3694 E 2495 F 2466 G 2467 H 3698 J 3759 L 24910 M 123Step Type A: Step1=6.0, Step2=17.6, Step3=28.60How do I do this?
As the combinations of the three steps are unique, I used each combination as a key of a dictionary for Step Type. Here I pre-defined the category values, but they can be auto-generated by scanning the df if needed (see the sketch below).

    # df
    #    Step1  Step2  Step3
    # 0    6.0   17.6  28.60
    # 1    7.5   22.0  35.75
    # 2   10.5   30.8  50.05
    # 3   12.0   35.2  57.20
    # 4   13.5   30.6  64.35

    category = {
        (6.0, 17.6, 28.60): 'A',
        (7.5, 22.0, 35.75): 'B',
        (10.5, 30.8, 50.05): 'C',
        (12, 35.2, 57.20): 'D',
        (13.5, 30.6, 64.35): 'E',
    }

    df['Step_Type'] = df.apply(lambda row: category[(row['Step1'], row['Step2'], row['Step3'])], axis=1)
    df = df[['Step_Type', 'Freq']]
    print(df)
    #   Step_Type  Freq
    # 0         A   135
    # 1         B   255
    # 2         C   129
    # 3         D   369
    # 4         E   249
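A sketch of the auto-generated mapping mentioned above (assumptions: the grouped frame has columns Step1, Step2, Step3 and Freq, consecutive uppercase letters are acceptable labels, and there are at most 26 unique combinations; the question's desired output skips some letters, which this does not reproduce):

    import string

    # Number each unique (Step1, Step2, Step3) combination in order of appearance,
    # then turn that group number into a letter.
    group_id = df.groupby(['Step1', 'Step2', 'Step3'], sort=False).ngroup()
    df['Step_Type'] = group_id.map(lambda i: string.ascii_uppercase[i])
    df = df[['Step_Type', 'Freq']]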
Replacing nan values in a Pandas data frame with lists How to replace nan or empty strings (e.g. "") with zero if it exists in any column. the values in any column can be a combination of lists and scalar values as followscol1 col2 col3 col4 nan Jhon [nan, 1, 2] ['k', 'j']1 nan [1, 1, 5] 32 "" nan nan3 Samy [1, 1, nan] ['b', '']
You have to handle the three cases (empty string, NaN, NaN in list) separately.For the NaN in list you need to loop over each occurrence and replace the elements one by one.NB. applymap is slow, so if you know in advance the columns to use you can subset themFor the empty string, replace them to NaN, then fillna.sub = 'X'(df.applymap(lambda x: [sub if (pd.isna(e) or e=='') else e for e in x] if isinstance(x, list) else x) .replace('', float('nan')) .fillna(sub) )Output: col1 col2 col3 col40 X Jhon [X, 1, 2] [k, j]1 1.0 X [1, 1, 5] 32 2.0 X X X3 3.0 Samy [1, 1, X] [b, X]Used input:from numpy import nandf = pd.DataFrame({'col1': {0: nan, 1: 1.0, 2: 2.0, 3: 3.0}, 'col2': {0: 'Jhon', 1: nan, 2: '', 3: 'Samy'}, 'col3': {0: [nan, 1, 2], 1: [1, 1, 5], 2: nan, 3: [1, 1, nan]}, 'col4': {0: ['k', 'j'], 1: '3', 2: nan, 3: ['b', '']}})
How to modify a dataset

I am working with a dataset like this, where the values of 'Country Name' are repeated several times, and 'Indicator Name' too. I want to create a new dataset whose columns look like this:

    Year   CountryName   IndicatorName1   IndicatorName2   ...   IndicatorNameX
    2000   USA           value1           value2                 valueX
    2000   Canada        value1           value2                 valueX
    2001   USA           value1           value2                 valueX
    2001   Canada        value1           value2                 valueX

Is it possible to do that? Thanks in advance!
You can use pivot as suggested by @Chris but you can also try:out = df.set_index(['Country Name', 'Indicator Name']).unstack('Country Name').T \ .rename_axis(index=['Year', 'Country'], columns=None).reset_index()print(out)# Output Year Country IndicatorName1 IndicatorName20 2000 France 1 31 2000 Italy 2 42 2001 France 5 73 2001 Italy 6 8Setup a Pandas / MRE:data = {'Country Name': ['France', 'Italy', 'France', 'Italy'], 'Indicator Name': ['IndicatorName1', 'IndicatorName1', 'IndicatorName2', 'IndicatorName2'], 2000: [1, 2, 3, 4], 2001: [5, 6, 7, 8]}df = pd.DataFrame(data)print(df)# Output Country Name Indicator Name 2000 20010 France IndicatorName1 1 51 Italy IndicatorName1 2 62 France IndicatorName2 3 73 Italy IndicatorName2 4 8
How to resave a csv file using pandas in Python? I have read a csv file using Pandas and I need to resave the csv file using code instead of opening the csv file and manually saving it.Is it possible?
There must be something I'm missing in the question. Why not simply:

    df = pd.read_csv('file.csv', ...)
    # any changes
    df.to_csv('file.csv')

?
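One detail worth adding (not part of the original answer): by default to_csv also writes the DataFrame index as an extra column, so a plain read/write round trip grows an unnamed column each time. A common variant is:

    import pandas as pd

    df = pd.read_csv('file.csv')
    # ... any changes ...
    df.to_csv('file.csv', index=False)   # don't write the index as an extra column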
Replace only replacing the 1st argument

I have the following code:

    df['Price'] = df['Price'].replace(regex={'$': 1, '$$': 2, '$$$': 3})
    df['Price'].fillna(0)

but even if a row has "$$" or "$$$", it still replaces it with a 1.0. How can I make it replace $ with 1, $$ with 2, and $$$ with 3?
df.Price.map({'$': 1, '$$': 2, '$$$': 3})
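To expand on this one-liner: with replace(regex=...), the pattern '$' is a regex anchor matching the end of every string, which is why every row became 1; map does exact whole-value lookups instead. A hedged usage sketch (assuming the Price column holds exactly the strings '$', '$$', '$$$' or missing values):

    # Values not found in the mapping become NaN, which fillna(0) then turns into 0.
    df['Price'] = df['Price'].map({'$': 1, '$$': 2, '$$$': 3}).fillna(0)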
How to match string and arrange dataframe accordingly? Got Input df1 and df2df1:Subcategory_Desc Segment_Desc Flow Side Row_noAPPLE APPLE LOOSE Apple Kanzi Front Row 1APPLE APPLE LOOSE Apple Jazz Front Row 1CITRUS ORANGES LOOSE Orange Navel Front Row 1PEAR PEARS LOOSE Lemon Right End Row 1AVOCADOS AVOCADOS LOOSE Avocado Back Row 1TROPICAL FRUIT KIWI FRUIT Kiwi Gold Back Row 1TROPICAL FRUIT KIWI FRUIT Kiwi Green Left End Row 1df2:Subcategory_Desc Segment_Desc FlowTROPICAL FRUIT KIWI FRUIT 5pk Kids KiwiAPPLE APPLE LOOSE Apple GoldenDelAVOCADOS AVOCADOS LOOSE Avocado TrayScenario:Dataframe df2 rows should be inserted to dataframe df1 considering below condition:Check for the similar Subcategory_Desc and Segment_Desc of df2 in df1 and insert that df2 row at the end of that particular Side(Front/Back). As given in expected Output.Need to consider Row_no column as well, because original dataset holds n number of Row_no, here have given Row 1 alone for sample data.Expected Output:Subcategory_Desc Segment_Desc Flow Side Row_noAPPLE APPLE LOOSE Apple Kanzi Front Row 1APPLE APPLE LOOSE Apple Jazz Front Row 1CITRUS ORANGES LOOSE Orange Navel Front Row 1APPLE APPLE LOOSE Apple GoldenDel Front Row 1PEAR PEARS LOOSE Lemon Right End Row 1AVOCADOS AVOCADOS LOOSE Avocado Back Row 1TROPICAL FRUIT KIWI FRUIT Kiwi Gold Back Row 1TROPICAL FRUIT KIWI FRUIT 5pk Kids Kiwi Back Row 1AVOCADOS AVOCADOS LOOSE Avocado Tray Back Row 1TROPICAL FRUIT KIWI FRUIT Kiwi Green Left End Row 1Not sure what simple logic can be used for this purpose.
So, given the following dataframes:import pandas as pddf1 = pd.DataFrame( { "Subcategory_Desc": { 0: "APPLE", 1: "APPLE", 2: "CITRUS", 3: "PEAR", 4: "AVOCADOS", 5: "TROPICAL FRUIT", 6: "TROPICAL FRUIT", }, "Segment_Desc": { 0: "APPLE LOOSE", 1: "APPLE LOOSE", 2: "ORANGES LOOSE", 3: "PEARS LOOSE", 4: "AVOCADOS LOOSE", 5: "KIWI FRUIT", 6: "KIWI FRUIT", }, "Flow": { 0: "Apple Kanzi", 1: "Apple Jazz", 2: "Orange Navel", 3: "Lemon", 4: "Avocado", 5: "Kiwi Gold", 6: "Kiwi Green", }, "Side": { 0: "Front", 1: "Front", 2: "Front", 3: "Right_End", 4: "Back", 5: "Back", 6: "Left_End", }, "Row_no": { 0: "Row 1", 1: "Row 1", 2: "Row 1", 3: "Row 1", 4: "Row 1", 5: "Row 1", 6: "Row 1", }, })df2 = pd.DataFrame( { "Subcategory_Desc": {0: "TROPICAL FRUIT", 1: "APPLE", 2: "AVOCADOS"}, "Segment_Desc": {0: "KIWI FRUIT", 1: "APPLE LOOSE", 2: "AVOCADOS LOOSE"}, "Flow": {0: "5pk Kids Kiwi", 1: "Apple GoldenDel", 2: "Avocado Tray"}, })You could try this:# Initialize new columndf2["idx"] = ""# Find indice of first match in df1for _, row2 in df2.iterrows(): for i, row1 in df1.iterrows(): if i + 1 >= df1.shape[0]: break if ( row1["Subcategory_Desc"] == row2["Subcategory_Desc"] and row1["Segment_Desc"] == row2["Segment_Desc"] ): row2["idx"] = idf2 = df2.sort_values(by="idx").reset_index(drop=True)# Starting from previous indice, find insertion indice in df1for i, idx in enumerate(df2["idx"]): side_of_idx = df1.loc[idx, "Side"] df2.loc[i, "pos"] = df1.index[df1["Side"] == side_of_idx].to_list()[-1] + 1positions = df2["pos"].astype("int").to_list()# Clean up df2df2 = df2.drop(columns=["idx", "pos"])df2["Side"] = df2["Row_no"] = ""# Iterate on df1 to insert new rowsfor i, pos in enumerate(positions): # Fill missing values df2.loc[i, "Side"] = df1.loc[pos - 1, "Side"] df2.loc[i, "Row_no"] = df1.loc[pos, "Row_no"] # Insert row df1 = pd.concat( [df1.iloc[:pos], pd.DataFrame([df2.iloc[i]]), df1.iloc[pos:]], ignore_index=True ).reset_index(drop=True) # Increment next position since df1 has changed if i < len(positions) - 1: positions[i + 1] += 1And so:print(df1)# Outputs Subcategory_Desc Segment_Desc Flow Side Row_no0 APPLE APPLE LOOSE Apple Kanzi Front Row 11 APPLE APPLE LOOSE Apple Jazz Front Row 12 CITRUS ORANGES LOOSE Orange Navel Front Row 13 APPLE APPLE LOOSE Apple GoldenDel Front Row 14 PEAR PEARS LOOSE Lemon Right_End Row 15 AVOCADOS AVOCADOS LOOSE Avocado Back Row 16 TROPICAL FRUIT KIWI FRUIT Kiwi Gold Back Row 17 TROPICAL FRUIT KIWI FRUIT 5pk Kids Kiwi Back Row 18 AVOCADOS AVOCADOS LOOSE Avocado Tray Back Row 19 TROPICAL FRUIT KIWI FRUIT Kiwi Green Left_End Row 1
Data cleaning, dictionary, inside dictionary,inside lists in CSV I'm a newbie learning data science, I've been trying to clean a data set, but I've had some hurdles on the way, the first issue I had was to explode a Dictionary inside a table into individual columns link below), thanks to user Parfait I could do it using literal_eval, then I had a problem trying to apply the same solution until I found literal_eval has issues with null values, I got rid of nulls and some bad uses of quotes.Now I got this, it seems that a column, which is a dictionary has not one but two values which are dictionaries themselves, I've tried to pop and del those values, but it seems the data is not considered a dictionary so I couldn't afford it.When running df['creator'].map(eval) I get the message appended below, look to the "avatar" and "api" columns, these two columns are not necessary for what I want, so I could drop them, but I have not find a way to do it.To be clear I just want to extract id and name columns as "cre_id" and "cre_name", add them to the main df with prefix and deleting the rest of the column, thank you for your help.df['creator'].map(eval) File "<string>", line 1 {"id":347819977,"name":Raul CJ Montes,"is_registered":None,"is_email_verified":None,"chosen_currency":None,"is_superbacker":None,"avatar":{"thumb":"https://ksr-ugc.imgix.net/assets/019/996/402/9de6ab427db7becb81711ce9b25e3645_original.jpg?ixlib=rb-4.0.2&w=40&h=40&fit=crop&v=1517101311&auto=format&frame=1&q=92&s=c41776ee80edfa63ba4dc916b24f6f00","small":"https://ksr-ugc.imgix.net/assets/019/996/402/9de6ab427db7becb81711ce9b25e3645_original.jpg?ixlib=rb-4.0.2&w=80&h=80&fit=crop&v=1517101311&auto=format&frame=1&q=92&s=6983b13a3c4e7a7a5f0b2d42f78f50dc","medium":"https://ksr-ugc.imgix.net/assets/019/996/402/9de6ab427db7becb81711ce9b25e3645_original.jpg?ixlib=rb-4.0.2&w=160&h=160&fit=crop&v=1517101311&auto=format&frame=1&q=92&s=bb04642f7264234e6c01c5b1b77d8c63"},"urls":{"web":{"user":"https://www.kickstarter.com/profile/347819977"},"api":{"user":"https://api.kickstarter.com/v1/users/347819977?signature=1631849457.e135d96dc2a9edbddb71deef896c78155ed13e8b"}}} ^SyntaxError: invalid syntaxEdit: Added first ten rows of the dataset:{0: '{"id":1379875462,"name":"Batton Lash","is_registered":None,"is_email_verified":None,"chosen_currency":None,"is_superbacker":None,"avatar":{"thumb":"https://ksr-ugc.imgix.net/assets/006/347/706/b3908a1a23f6b9e472edcf7c934e5b0e_original.jpg?ixlib=rb-4.0.2&w=40&h=40&fit=crop&v=1461382354&auto=format&frame=1&q=92&s=4d88bd2ed1e7098fcaf046321cc4be15","small":"https://ksr-ugc.imgix.net/assets/006/347/706/b3908a1a23f6b9e472edcf7c934e5b0e_original.jpg?ixlib=rb-4.0.2&w=80&h=80&fit=crop&v=1461382354&auto=format&frame=1&q=92&s=664f586cef17d83dc408a6a10b0f3c4a","medium":"https://ksr-ugc.imgix.net/assets/006/347/706/b3908a1a23f6b9e472edcf7c934e5b0e_original.jpg?ixlib=rb-4.0.2&w=160&h=160&fit=crop&v=1461382354&auto=format&frame=1&q=92&s=fe307263e32a2385e764e3923a13179e"},"urls":{"web":{"user":"https://www.kickstarter.com/profile/1379875462"},"api":{"user":"https://api.kickstarter.com/v1/users/1379875462?signature=1631849432.d50b79030e15111575554ecae171babad1f2925d"}}}', 1: 
'{"id":408247096,"name":"Scott(skoddii)","is_registered":None,"is_email_verified":None,"chosen_currency":None,"is_superbacker":None,"avatar":{"thumb":"https://ksr-ugc.imgix.net/assets/020/330/517/383423c1c19dfbd99534c6185eb09a6f_original.png?ixlib=rb-4.0.2&w=40&h=40&fit=crop&v=1519354368&auto=format&frame=1&q=92&s=74f83e0070b20db01d5180ba214d1b5e","small":"https://ksr-ugc.imgix.net/assets/020/330/517/383423c1c19dfbd99534c6185eb09a6f_original.png?ixlib=rb-4.0.2&w=80&h=80&fit=crop&v=1519354368&auto=format&frame=1&q=92&s=671b9100176dbfa63752a7a8e9cc63d0","medium":"https://ksr-ugc.imgix.net/assets/020/330/517/383423c1c19dfbd99534c6185eb09a6f_original.png?ixlib=rb-4.0.2&w=160&h=160&fit=crop&v=1519354368&auto=format&frame=1&q=92&s=956c6f85ffbc3fb179c260611254a2be"},"urls":{"web":{"user":"https://www.kickstarter.com/profile/408247096"},"api":{"user":"https://api.kickstarter.com/v1/users/408247096?signature=1631849432.6cc0456d4795aea0b32f861b050212afef4387ce"}}}', 2: '{"id":361953386,"name":"Luis G. Batista, CPM, C.P.S.M","is_registered":None,"is_email_verified":None,"chosen_currency":None,"is_superbacker":None,"avatar":{"thumb":"https://ksr-ugc.imgix.net/assets/015/751/771/b9a11e982831d2190d68e2ea0d3a4ff0_original.jpg?ixlib=rb-4.0.2&w=40&h=40&fit=crop&v=1488754184&auto=format&frame=1&q=92&s=f4dc0bbe5e7edbb35fb15c07bdb2c843","small":"https://ksr-ugc.imgix.net/assets/015/751/771/b9a11e982831d2190d68e2ea0d3a4ff0_original.jpg?ixlib=rb-4.0.2&w=80&h=80&fit=crop&v=1488754184&auto=format&frame=1&q=92&s=9c7e202bb6491516468ec69dff66bcdd","medium":"https://ksr-ugc.imgix.net/assets/015/751/771/b9a11e982831d2190d68e2ea0d3a4ff0_original.jpg?ixlib=rb-4.0.2&w=160&h=160&fit=crop&v=1488754184&auto=format&frame=1&q=92&s=ac05f1a9827cc321ea3e8f754f19be94"},"urls":{"web":{"user":"https://www.kickstarter.com/profile/361953386"},"api":{"user":"https://api.kickstarter.com/v1/users/361953386?signature=1631849432.7262fa85aec828a6b01ea70685ef22b0ada784ad"}}}', 3: '{"id":202579323,"name":"Brian Carmichael","is_registered":None,"is_email_verified":None,"chosen_currency":None,"is_superbacker":None,"avatar":{"thumb":"https://ksr-ugc.imgix.net/assets/010/482/911/12f9ff13c9a415e4e869b8036662f02c_original.jpg?ixlib=rb-4.0.2&w=40&h=40&fit=crop&v=1488680236&auto=format&frame=1&q=92&s=9433c133b6bf02a45dd8ba78a0b44a46","small":"https://ksr-ugc.imgix.net/assets/010/482/911/12f9ff13c9a415e4e869b8036662f02c_original.jpg?ixlib=rb-4.0.2&w=80&h=80&fit=crop&v=1488680236&auto=format&frame=1&q=92&s=900c300f2d425243c108ed4419c78793","medium":"https://ksr-ugc.imgix.net/assets/010/482/911/12f9ff13c9a415e4e869b8036662f02c_original.jpg?ixlib=rb-4.0.2&w=160&h=160&fit=crop&v=1488680236&auto=format&frame=1&q=92&s=55e58d426c7f41b92081ce735abac404"},"urls":{"web":{"user":"https://www.kickstarter.com/profile/202579323"},"api":{"user":"https://api.kickstarter.com/v1/users/202579323?signature=1631849432.fb88647e78bbe87ca2646330b0d84a0237c7cc46"}}}', 4: '{"id":1996450690,"name":"Dan 
Schmeidler","is_registered":None,"is_email_verified":None,"chosen_currency":None,"is_superbacker":None,"avatar":{"thumb":"https://ksr-ugc.imgix.net/assets/015/757/606/4f4d33cc942cdfe4b95af09e43a49255_original.JPG?ixlib=rb-4.0.2&w=40&h=40&fit=crop&v=1488802482&auto=format&frame=1&q=92&s=97f88d105a1bc21a72f008859b13055c","small":"https://ksr-ugc.imgix.net/assets/015/757/606/4f4d33cc942cdfe4b95af09e43a49255_original.JPG?ixlib=rb-4.0.2&w=80&h=80&fit=crop&v=1488802482&auto=format&frame=1&q=92&s=a423f3fbf75bdb32f1c895a1f0d76bca","medium":"https://ksr-ugc.imgix.net/assets/015/757/606/4f4d33cc942cdfe4b95af09e43a49255_original.JPG?ixlib=rb-4.0.2&w=160&h=160&fit=crop&v=1488802482&auto=format&frame=1&q=92&s=49f4a2d61132d1068d3f604b03a1f8e5"},"urls":{"web":{"user":"https://www.kickstarter.com/profile/1996450690"},"api":{"user":"https://api.kickstarter.com/v1/users/1996450690?signature=1631849432.3b51c0d212170f4228293d3133045d040c6a6285"}}}', 5: '{"id":903880044,"name":"Doug McQuilken","is_registered":None,"is_email_verified":None,"chosen_currency":None,"is_superbacker":None,"avatar":{"thumb":"https://ksr-ugc.imgix.net/assets/014/523/998/230d7cd9d27128f28366a7a1c4977273_original.jpg?ixlib=rb-4.0.2&w=40&h=40&fit=crop&v=1479214827&auto=format&frame=1&q=92&s=84c65c201bdb46e72afeef51ad261913","small":"https://ksr-ugc.imgix.net/assets/014/523/998/230d7cd9d27128f28366a7a1c4977273_original.jpg?ixlib=rb-4.0.2&w=80&h=80&fit=crop&v=1479214827&auto=format&frame=1&q=92&s=52beef6574a551f81be17acc750d4e2e","medium":"https://ksr-ugc.imgix.net/assets/014/523/998/230d7cd9d27128f28366a7a1c4977273_original.jpg?ixlib=rb-4.0.2&w=160&h=160&fit=crop&v=1479214827&auto=format&frame=1&q=92&s=b4bb14d2759e21e6c40d3ef9c86c1ed3"},"urls":{"web":{"user":"https://www.kickstarter.com/profile/903880044"},"api":{"user":"https://api.kickstarter.com/v1/users/903880044?signature=1631849432.6a7dcb45d0ca2a4c5922d51a0b3f36f7972b6ac0"}}}', 6: '{"id":1391487766,"name":"Karen Scott","is_registered":None,"is_email_verified":None,"chosen_currency":None,"is_superbacker":None,"avatar":{"thumb":"https://ksr-ugc.imgix.net/assets/015/612/365/b1ce5bfa90d24a767547b168e3efdbef_original.JPG?ixlib=rb-4.0.2&w=40&h=40&fit=crop&v=1487847709&auto=format&frame=1&q=92&s=e18d2c915b50e20cf27bb1255ad82ba9","small":"https://ksr-ugc.imgix.net/assets/015/612/365/b1ce5bfa90d24a767547b168e3efdbef_original.JPG?ixlib=rb-4.0.2&w=80&h=80&fit=crop&v=1487847709&auto=format&frame=1&q=92&s=bd7c22cafcec49e73bea6a106976043c","medium":"https://ksr-ugc.imgix.net/assets/015/612/365/b1ce5bfa90d24a767547b168e3efdbef_original.JPG?ixlib=rb-4.0.2&w=160&h=160&fit=crop&v=1487847709&auto=format&frame=1&q=92&s=d1d5327de95dac76d4cbed7a95007de1"},"urls":{"web":{"user":"https://www.kickstarter.com/profile/1391487766"},"api":{"user":"https://api.kickstarter.com/v1/users/1391487766?signature=1631849432.2720fa0d8a70ccfc33034287985b98c0c791a23d"}}}', 7: '{"id":1344116211,"name":"Sanjiv(Sam) 
Mall","is_registered":None,"is_email_verified":None,"chosen_currency":None,"is_superbacker":None,"avatar":{"thumb":"https://ksr-ugc.imgix.net/assets/015/648/502/206b8686072b528ea6fd1fe78adfcc25_original.JPG?ixlib=rb-4.0.2&w=40&h=40&fit=crop&v=1488128800&auto=format&frame=1&q=92&s=fd4520798d39b777e5814219c8fe4ad2","small":"https://ksr-ugc.imgix.net/assets/015/648/502/206b8686072b528ea6fd1fe78adfcc25_original.JPG?ixlib=rb-4.0.2&w=80&h=80&fit=crop&v=1488128800&auto=format&frame=1&q=92&s=67553420e14378664ae3555275a25d51","medium":"https://ksr-ugc.imgix.net/assets/015/648/502/206b8686072b528ea6fd1fe78adfcc25_original.JPG?ixlib=rb-4.0.2&w=160&h=160&fit=crop&v=1488128800&auto=format&frame=1&q=92&s=f08f3b4420e3ab37c4e07b4f98100dde"},"urls":{"web":{"user":"https://www.kickstarter.com/profile/1344116211"},"api":{"user":"https://api.kickstarter.com/v1/users/1344116211?signature=1631849432.6e307780f53a56c7a6dd5493ae59f26575d9fbcb"}}}', 8: '{"id":2071365832,"name":"Christoph Vogelbusch","is_registered":None,"is_email_verified":None,"chosen_currency":None,"is_superbacker":None,"avatar":{"thumb":"https://ksr-ugc.imgix.net/assets/012/912/270/d2f18c4ec6fcb2357ab073d0e6e0aa9e_original.png?ixlib=rb-4.0.2&w=40&h=40&fit=crop&v=1467291732&auto=format&frame=1&q=92&s=3b321faecc138d42f7aa249620fc342d","small":"https://ksr-ugc.imgix.net/assets/012/912/270/d2f18c4ec6fcb2357ab073d0e6e0aa9e_original.png?ixlib=rb-4.0.2&w=80&h=80&fit=crop&v=1467291732&auto=format&frame=1&q=92&s=967c607450ac03547632f0865270822f","medium":"https://ksr-ugc.imgix.net/assets/012/912/270/d2f18c4ec6fcb2357ab073d0e6e0aa9e_original.png?ixlib=rb-4.0.2&w=160&h=160&fit=crop&v=1467291732&auto=format&frame=1&q=92&s=507442b8d2a97678675ec7c19b049e4b"},"urls":{"web":{"user":"https://www.kickstarter.com/profile/2071365832"},"api":{"user":"https://api.kickstarter.com/v1/users/2071365832?signature=1631849432.0d05bc7a066a3748232100864f2d3a441186b289"}}}', 9: '{"id":850790011,"name":"Harun Sarac","is_registered":None,"is_email_verified":None,"chosen_currency":None,"is_superbacker":None,"avatar":{"thumb":"https://ksr-ugc.imgix.net/assets/015/673/759/79ee3faff36e0fb683f834c1f419a0fc_original.jpg?ixlib=rb-4.0.2&w=40&h=40&fit=crop&v=1488440832&auto=format&frame=1&q=92&s=ab34266c1a0ce2ec4ac5e4931a606b64","small":"https://ksr-ugc.imgix.net/assets/015/673/759/79ee3faff36e0fb683f834c1f419a0fc_original.jpg?ixlib=rb-4.0.2&w=80&h=80&fit=crop&v=1488440832&auto=format&frame=1&q=92&s=e1d62a787470490c4189bb9a72cfbacc","medium":"https://ksr-ugc.imgix.net/assets/015/673/759/79ee3faff36e0fb683f834c1f419a0fc_original.jpg?ixlib=rb-4.0.2&w=160&h=160&fit=crop&v=1488440832&auto=format&frame=1&q=92&s=28e1a25444c13592e5ccf2967ac8b8e3"},"urls":{"web":{"user":"https://www.kickstarter.com/profile/850790011"},"api":{"user":"https://api.kickstarter.com/v1/users/850790011?signature=1631849432.3ac62ea0ee180b660968be6227e29684c54286d6"}}}'}
You have the following dataframe given by your dictionary:data = {0: '{"id":1379875462,"name":"Batton Lash","is_registered":None,"is_email_verified":None,"chosen_currency":None,"is_superbacker":None,"avatar":{"thumb":"https://ksr-ugc.imgix.net/assets/006/347/706/b3908a1a23f6b9e472edcf7c934e5b0e_original.jpg?ixlib=rb-4.0.2&w=40&h=40&fit=crop&v=1461382354&auto=format&frame=1&q=92&s=4d88bd2ed1e7098fcaf046321cc4be15","small":"https://ksr-ugc.imgix.net/assets/006/347/706/b3908a1a23f6b9e472edcf7c934e5b0e_original.jpg?ixlib=rb-4.0.2&w=80&h=80&fit=crop&v=1461382354&auto=format&frame=1&q=92&s=664f586cef17d83dc408a6a10b0f3c4a","medium":"https://ksr-ugc.imgix.net/assets/006/347/706/b3908a1a23f6b9e472edcf7c934e5b0e_original.jpg?ixlib=rb-4.0.2&w=160&h=160&fit=crop&v=1461382354&auto=format&frame=1&q=92&s=fe307263e32a2385e764e3923a13179e"},"urls":{"web":{"user":"https://www.kickstarter.com/profile/1379875462"},"api":{"user":"https://api.kickstarter.com/v1/users/1379875462?signature=1631849432.d50b79030e15111575554ecae171babad1f2925d"}}}', 1: '{"id":408247096,"name":"Scott(skoddii)","is_registered":None,"is_email_verified":None,"chosen_currency":None,"is_superbacker":None,"avatar":{"thumb":"https://ksr-ugc.imgix.net/assets/020/330/517/383423c1c19dfbd99534c6185eb09a6f_original.png?ixlib=rb-4.0.2&w=40&h=40&fit=crop&v=1519354368&auto=format&frame=1&q=92&s=74f83e0070b20db01d5180ba214d1b5e","small":"https://ksr-ugc.imgix.net/assets/020/330/517/383423c1c19dfbd99534c6185eb09a6f_original.png?ixlib=rb-4.0.2&w=80&h=80&fit=crop&v=1519354368&auto=format&frame=1&q=92&s=671b9100176dbfa63752a7a8e9cc63d0","medium":"https://ksr-ugc.imgix.net/assets/020/330/517/383423c1c19dfbd99534c6185eb09a6f_original.png?ixlib=rb-4.0.2&w=160&h=160&fit=crop&v=1519354368&auto=format&frame=1&q=92&s=956c6f85ffbc3fb179c260611254a2be"},"urls":{"web":{"user":"https://www.kickstarter.com/profile/408247096"},"api":{"user":"https://api.kickstarter.com/v1/users/408247096?signature=1631849432.6cc0456d4795aea0b32f861b050212afef4387ce"}}}', 2: '{"id":361953386,"name":"Luis G. 
Batista, CPM, C.P.S.M","is_registered":None,"is_email_verified":None,"chosen_currency":None,"is_superbacker":None,"avatar":{"thumb":"https://ksr-ugc.imgix.net/assets/015/751/771/b9a11e982831d2190d68e2ea0d3a4ff0_original.jpg?ixlib=rb-4.0.2&w=40&h=40&fit=crop&v=1488754184&auto=format&frame=1&q=92&s=f4dc0bbe5e7edbb35fb15c07bdb2c843","small":"https://ksr-ugc.imgix.net/assets/015/751/771/b9a11e982831d2190d68e2ea0d3a4ff0_original.jpg?ixlib=rb-4.0.2&w=80&h=80&fit=crop&v=1488754184&auto=format&frame=1&q=92&s=9c7e202bb6491516468ec69dff66bcdd","medium":"https://ksr-ugc.imgix.net/assets/015/751/771/b9a11e982831d2190d68e2ea0d3a4ff0_original.jpg?ixlib=rb-4.0.2&w=160&h=160&fit=crop&v=1488754184&auto=format&frame=1&q=92&s=ac05f1a9827cc321ea3e8f754f19be94"},"urls":{"web":{"user":"https://www.kickstarter.com/profile/361953386"},"api":{"user":"https://api.kickstarter.com/v1/users/361953386?signature=1631849432.7262fa85aec828a6b01ea70685ef22b0ada784ad"}}}', 3: '{"id":202579323,"name":"Brian Carmichael","is_registered":None,"is_email_verified":None,"chosen_currency":None,"is_superbacker":None,"avatar":{"thumb":"https://ksr-ugc.imgix.net/assets/010/482/911/12f9ff13c9a415e4e869b8036662f02c_original.jpg?ixlib=rb-4.0.2&w=40&h=40&fit=crop&v=1488680236&auto=format&frame=1&q=92&s=9433c133b6bf02a45dd8ba78a0b44a46","small":"https://ksr-ugc.imgix.net/assets/010/482/911/12f9ff13c9a415e4e869b8036662f02c_original.jpg?ixlib=rb-4.0.2&w=80&h=80&fit=crop&v=1488680236&auto=format&frame=1&q=92&s=900c300f2d425243c108ed4419c78793","medium":"https://ksr-ugc.imgix.net/assets/010/482/911/12f9ff13c9a415e4e869b8036662f02c_original.jpg?ixlib=rb-4.0.2&w=160&h=160&fit=crop&v=1488680236&auto=format&frame=1&q=92&s=55e58d426c7f41b92081ce735abac404"},"urls":{"web":{"user":"https://www.kickstarter.com/profile/202579323"},"api":{"user":"https://api.kickstarter.com/v1/users/202579323?signature=1631849432.fb88647e78bbe87ca2646330b0d84a0237c7cc46"}}}', 4: '{"id":1996450690,"name":"Dan Schmeidler","is_registered":None,"is_email_verified":None,"chosen_currency":None,"is_superbacker":None,"avatar":{"thumb":"https://ksr-ugc.imgix.net/assets/015/757/606/4f4d33cc942cdfe4b95af09e43a49255_original.JPG?ixlib=rb-4.0.2&w=40&h=40&fit=crop&v=1488802482&auto=format&frame=1&q=92&s=97f88d105a1bc21a72f008859b13055c","small":"https://ksr-ugc.imgix.net/assets/015/757/606/4f4d33cc942cdfe4b95af09e43a49255_original.JPG?ixlib=rb-4.0.2&w=80&h=80&fit=crop&v=1488802482&auto=format&frame=1&q=92&s=a423f3fbf75bdb32f1c895a1f0d76bca","medium":"https://ksr-ugc.imgix.net/assets/015/757/606/4f4d33cc942cdfe4b95af09e43a49255_original.JPG?ixlib=rb-4.0.2&w=160&h=160&fit=crop&v=1488802482&auto=format&frame=1&q=92&s=49f4a2d61132d1068d3f604b03a1f8e5"},"urls":{"web":{"user":"https://www.kickstarter.com/profile/1996450690"},"api":{"user":"https://api.kickstarter.com/v1/users/1996450690?signature=1631849432.3b51c0d212170f4228293d3133045d040c6a6285"}}}', 5: '{"id":903880044,"name":"Doug 
McQuilken","is_registered":None,"is_email_verified":None,"chosen_currency":None,"is_superbacker":None,"avatar":{"thumb":"https://ksr-ugc.imgix.net/assets/014/523/998/230d7cd9d27128f28366a7a1c4977273_original.jpg?ixlib=rb-4.0.2&w=40&h=40&fit=crop&v=1479214827&auto=format&frame=1&q=92&s=84c65c201bdb46e72afeef51ad261913","small":"https://ksr-ugc.imgix.net/assets/014/523/998/230d7cd9d27128f28366a7a1c4977273_original.jpg?ixlib=rb-4.0.2&w=80&h=80&fit=crop&v=1479214827&auto=format&frame=1&q=92&s=52beef6574a551f81be17acc750d4e2e","medium":"https://ksr-ugc.imgix.net/assets/014/523/998/230d7cd9d27128f28366a7a1c4977273_original.jpg?ixlib=rb-4.0.2&w=160&h=160&fit=crop&v=1479214827&auto=format&frame=1&q=92&s=b4bb14d2759e21e6c40d3ef9c86c1ed3"},"urls":{"web":{"user":"https://www.kickstarter.com/profile/903880044"},"api":{"user":"https://api.kickstarter.com/v1/users/903880044?signature=1631849432.6a7dcb45d0ca2a4c5922d51a0b3f36f7972b6ac0"}}}', 6: '{"id":1391487766,"name":"Karen Scott","is_registered":None,"is_email_verified":None,"chosen_currency":None,"is_superbacker":None,"avatar":{"thumb":"https://ksr-ugc.imgix.net/assets/015/612/365/b1ce5bfa90d24a767547b168e3efdbef_original.JPG?ixlib=rb-4.0.2&w=40&h=40&fit=crop&v=1487847709&auto=format&frame=1&q=92&s=e18d2c915b50e20cf27bb1255ad82ba9","small":"https://ksr-ugc.imgix.net/assets/015/612/365/b1ce5bfa90d24a767547b168e3efdbef_original.JPG?ixlib=rb-4.0.2&w=80&h=80&fit=crop&v=1487847709&auto=format&frame=1&q=92&s=bd7c22cafcec49e73bea6a106976043c","medium":"https://ksr-ugc.imgix.net/assets/015/612/365/b1ce5bfa90d24a767547b168e3efdbef_original.JPG?ixlib=rb-4.0.2&w=160&h=160&fit=crop&v=1487847709&auto=format&frame=1&q=92&s=d1d5327de95dac76d4cbed7a95007de1"},"urls":{"web":{"user":"https://www.kickstarter.com/profile/1391487766"},"api":{"user":"https://api.kickstarter.com/v1/users/1391487766?signature=1631849432.2720fa0d8a70ccfc33034287985b98c0c791a23d"}}}', 7: '{"id":1344116211,"name":"Sanjiv(Sam) Mall","is_registered":None,"is_email_verified":None,"chosen_currency":None,"is_superbacker":None,"avatar":{"thumb":"https://ksr-ugc.imgix.net/assets/015/648/502/206b8686072b528ea6fd1fe78adfcc25_original.JPG?ixlib=rb-4.0.2&w=40&h=40&fit=crop&v=1488128800&auto=format&frame=1&q=92&s=fd4520798d39b777e5814219c8fe4ad2","small":"https://ksr-ugc.imgix.net/assets/015/648/502/206b8686072b528ea6fd1fe78adfcc25_original.JPG?ixlib=rb-4.0.2&w=80&h=80&fit=crop&v=1488128800&auto=format&frame=1&q=92&s=67553420e14378664ae3555275a25d51","medium":"https://ksr-ugc.imgix.net/assets/015/648/502/206b8686072b528ea6fd1fe78adfcc25_original.JPG?ixlib=rb-4.0.2&w=160&h=160&fit=crop&v=1488128800&auto=format&frame=1&q=92&s=f08f3b4420e3ab37c4e07b4f98100dde"},"urls":{"web":{"user":"https://www.kickstarter.com/profile/1344116211"},"api":{"user":"https://api.kickstarter.com/v1/users/1344116211?signature=1631849432.6e307780f53a56c7a6dd5493ae59f26575d9fbcb"}}}', 8: '{"id":2071365832,"name":"Christoph 
Vogelbusch","is_registered":None,"is_email_verified":None,"chosen_currency":None,"is_superbacker":None,"avatar":{"thumb":"https://ksr-ugc.imgix.net/assets/012/912/270/d2f18c4ec6fcb2357ab073d0e6e0aa9e_original.png?ixlib=rb-4.0.2&w=40&h=40&fit=crop&v=1467291732&auto=format&frame=1&q=92&s=3b321faecc138d42f7aa249620fc342d","small":"https://ksr-ugc.imgix.net/assets/012/912/270/d2f18c4ec6fcb2357ab073d0e6e0aa9e_original.png?ixlib=rb-4.0.2&w=80&h=80&fit=crop&v=1467291732&auto=format&frame=1&q=92&s=967c607450ac03547632f0865270822f","medium":"https://ksr-ugc.imgix.net/assets/012/912/270/d2f18c4ec6fcb2357ab073d0e6e0aa9e_original.png?ixlib=rb-4.0.2&w=160&h=160&fit=crop&v=1467291732&auto=format&frame=1&q=92&s=507442b8d2a97678675ec7c19b049e4b"},"urls":{"web":{"user":"https://www.kickstarter.com/profile/2071365832"},"api":{"user":"https://api.kickstarter.com/v1/users/2071365832?signature=1631849432.0d05bc7a066a3748232100864f2d3a441186b289"}}}', 9: '{"id":850790011,"name":"Harun Sarac","is_registered":None,"is_email_verified":None,"chosen_currency":None,"is_superbacker":None,"avatar":{"thumb":"https://ksr-ugc.imgix.net/assets/015/673/759/79ee3faff36e0fb683f834c1f419a0fc_original.jpg?ixlib=rb-4.0.2&w=40&h=40&fit=crop&v=1488440832&auto=format&frame=1&q=92&s=ab34266c1a0ce2ec4ac5e4931a606b64","small":"https://ksr-ugc.imgix.net/assets/015/673/759/79ee3faff36e0fb683f834c1f419a0fc_original.jpg?ixlib=rb-4.0.2&w=80&h=80&fit=crop&v=1488440832&auto=format&frame=1&q=92&s=e1d62a787470490c4189bb9a72cfbacc","medium":"https://ksr-ugc.imgix.net/assets/015/673/759/79ee3faff36e0fb683f834c1f419a0fc_original.jpg?ixlib=rb-4.0.2&w=160&h=160&fit=crop&v=1488440832&auto=format&frame=1&q=92&s=28e1a25444c13592e5ccf2967ac8b8e3"},"urls":{"web":{"user":"https://www.kickstarter.com/profile/850790011"},"api":{"user":"https://api.kickstarter.com/v1/users/850790011?signature=1631849432.3ac62ea0ee180b660968be6227e29684c54286d6"}}}'}That is:0 {"id":1379875462,"name":"Batton Lash","is_regi...1 {"id":408247096,"name":"Scott(skoddii)","is_re...2 {"id":361953386,"name":"Luis G. Batista, CPM, ...3 {"id":202579323,"name":"Brian Carmichael","is_...4 {"id":1996450690,"name":"Dan Schmeidler","is_r...5 {"id":903880044,"name":"Doug McQuilken","is_re...6 {"id":1391487766,"name":"Karen Scott","is_regi...7 {"id":1344116211,"name":"Sanjiv(Sam) Mall","is...8 {"id":2071365832,"name":"Christoph Vogelbusch"...9 {"id":850790011,"name":"Harun Sarac","is_regis...What you can do is the follwing:df = pd.DataFrame(pd.Series(data))from ast import literal_evalimport numpy as npdf[0] = df[0].apply(literal_eval)df = df.join(pd.json_normalize(df[0]))which gives you0 {'id': 1379875462, 'name': 'Batton Lash', 'is_... 1379875462 1 {'id': 408247096, 'name': 'Scott(skoddii)', 'i... 408247096 2 {'id': 361953386, 'name': 'Luis G. Batista, CP... 361953386 3 {'id': 202579323, 'name': 'Brian Carmichael', ... 202579323 4 {'id': 1996450690, 'name': 'Dan Schmeidler', '... 1996450690 5 {'id': 903880044, 'name': 'Doug McQuilken', 'i... 903880044 6 {'id': 1391487766, 'name': 'Karen Scott', 'is_... 1391487766 7 {'id': 1344116211, 'name': 'Sanjiv(Sam) Mall',... 1344116211 8 {'id': 2071365832, 'name': 'Christoph Vogelbus... 2071365832 9 {'id': 850790011, 'name': 'Harun Sarac', 'is_r... 850790011 name is_registered is_email_verified \0 Batton Lash None None 1 Scott(skoddii) None None 2 Luis G. 
Batista, CPM, C.P.S.M None None 3 Brian Carmichael None None 4 Dan Schmeidler None None 5 Doug McQuilken None None 6 Karen Scott None None 7 Sanjiv(Sam) Mall None None 8 Christoph Vogelbusch None None 9 Harun Sarac None None chosen_currency is_superbacker \0 None None 1 None None 2 None None 3 None None 4 None None 5 None None 6 None None 7 None None 8 None None 9 None None avatar.thumb \0 https://ksr-ugc.imgix.net/assets/006/347/706/b... 1 https://ksr-ugc.imgix.net/assets/020/330/517/3... 2 https://ksr-ugc.imgix.net/assets/015/751/771/b... 3 https://ksr-ugc.imgix.net/assets/010/482/911/1... 4 https://ksr-ugc.imgix.net/assets/015/757/606/4... 5 https://ksr-ugc.imgix.net/assets/014/523/998/2... 6 https://ksr-ugc.imgix.net/assets/015/612/365/b... 7 https://ksr-ugc.imgix.net/assets/015/648/502/2... 8 https://ksr-ugc.imgix.net/assets/012/912/270/d... 9 https://ksr-ugc.imgix.net/assets/015/673/759/7... avatar.small \0 https://ksr-ugc.imgix.net/assets/006/347/706/b... 1 https://ksr-ugc.imgix.net/assets/020/330/517/3... 2 https://ksr-ugc.imgix.net/assets/015/751/771/b... 3 https://ksr-ugc.imgix.net/assets/010/482/911/1... 4 https://ksr-ugc.imgix.net/assets/015/757/606/4... 5 https://ksr-ugc.imgix.net/assets/014/523/998/2... 6 https://ksr-ugc.imgix.net/assets/015/612/365/b... 7 https://ksr-ugc.imgix.net/assets/015/648/502/2... 8 https://ksr-ugc.imgix.net/assets/012/912/270/d... 9 https://ksr-ugc.imgix.net/assets/015/673/759/7... avatar.medium \0 https://ksr-ugc.imgix.net/assets/006/347/706/b... 1 https://ksr-ugc.imgix.net/assets/020/330/517/3... 2 https://ksr-ugc.imgix.net/assets/015/751/771/b... 3 https://ksr-ugc.imgix.net/assets/010/482/911/1... 4 https://ksr-ugc.imgix.net/assets/015/757/606/4... 5 https://ksr-ugc.imgix.net/assets/014/523/998/2... 6 https://ksr-ugc.imgix.net/assets/015/612/365/b... 7 https://ksr-ugc.imgix.net/assets/015/648/502/2... 8 https://ksr-ugc.imgix.net/assets/012/912/270/d... 9 https://ksr-ugc.imgix.net/assets/015/673/759/7... urls.web.user \0 https://www.kickstarter.com/profile/1379875462 1 https://www.kickstarter.com/profile/408247096 2 https://www.kickstarter.com/profile/361953386 3 https://www.kickstarter.com/profile/202579323 4 https://www.kickstarter.com/profile/1996450690 5 https://www.kickstarter.com/profile/903880044 6 https://www.kickstarter.com/profile/1391487766 7 https://www.kickstarter.com/profile/1344116211 8 https://www.kickstarter.com/profile/2071365832 9 https://www.kickstarter.com/profile/850790011 urls.api.user 0 https://api.kickstarter.com/v1/users/137987546... 1 https://api.kickstarter.com/v1/users/408247096... 2 https://api.kickstarter.com/v1/users/361953386... 3 https://api.kickstarter.com/v1/users/202579323... 4 https://api.kickstarter.com/v1/users/199645069... 5 https://api.kickstarter.com/v1/users/903880044... 6 https://api.kickstarter.com/v1/users/139148776... 7 https://api.kickstarter.com/v1/users/134411621... 8 https://api.kickstarter.com/v1/users/207136583... 9 https://api.kickstarter.com/v1/users/850790011...
Display specific column through pandas

I have PortalMammals_species.csv, which contains the following columns:

    ['record_id', 'new_code', 'oldcode', 'scientificname', 'taxa', 'commonname', 'unknown', 'rodent', 'shrubland_affiliated']

I want to find out how many taxa are "Rodent" and display those records using pandas. I am trying this:

    Taxa = df["taxa"] == "Rodent"
    print(Taxa.value_counts())

but this code only gives me the value counts, True: 28 and False: 27. How can I display only those records that are True?
If you want a count of 'Rodent' only while still using value_counts(), you can try:

    df['taxa'][df['taxa'] == "Rodent"].value_counts()

Another option:

    df['taxa'][df['taxa'] == "Rodent"].count()

The PortalMammals_species.csv dataset I am seeing online also has a 'rodent' column, which is a 1/0 flag for rodent; if you have that column too, you could try:

    df['rodent'].sum()

EDIT in response to OP's comment: To display the 'taxa' column only, filtered for Rodent:

    df['taxa'][df['taxa'] == "Rodent"]

To display the entire df filtered for Rodent:

    df[df['taxa'] == "Rodent"]
Keras: Loss for image rotation and translation (target registration error)?

My model returns 3 coordinates [x, y, angle]. I want a TRE-like similarity between 2 images. My custom loss is:

    def loss(y_true, y_pred):
        s = tfa.image.rotate(images=y_true[0], angles=y_pred[0][0])
        s = tfa.image.translate(images=s, translations=y_pred[0][1:])
        s = tf.reduce_sum(tf.sqrt(tf.square(s - y_true[1])))

y_pred has shape (1, 3): a tensor with [angle, x, y]. y_true has shape (2, 128, 128): y_true[0] and y_true[1] each hold an image. The idea is to rotate and translate y_true[0] to get s, then compare s and y_true[1] with MSE. I can't use tfa.image.translate because it is not differentiable? How can I rotate an image in a custom loss function? Is there a problem with the gradient?
I believe this will or will not work depending on the frequency distribution in your data, but in FFT space this might be easier.
python pandas how to read csv file by block I'm trying to read a CSV file, block by block.CSV looks like:No.,time,00:00:00,00:00:01,00:00:02,00:00:03,00:00:04,00:00:05,00:00:06,00:00:07,00:00:08,00:00:09,00:00:0A,...1,2021/09/12 02:16,235,610,345,997,446,130,129,94,555,274,4,2,2021/09/12 02:17,364,210,371,341,294,87,179,106,425,262,3,1434,2021/09/12 02:28,269,135,372,262,307,73,86,93,512,283,4,1435,2021/09/12 02:29,281,207,688,322,233,75,69,85,663,276,2,No.,time,00:00:10,00:00:11,00:00:12,00:00:13,00:00:14,00:00:15,00:00:16,00:00:17,00:00:18,00:00:19,00:00:1A,...1,2021/09/12 02:16,255,619,200,100,453,456,4,19,56,23,4,2,2021/09/12 02:17,368,21,37,31,24,8,19,1006,4205,2062,30,1434,2021/09/12 02:28,2689,1835,3782,2682,307,743,256,741,52,23,6,1435,2021/09/12 02:29,2281,2047,6848,3522,2353,755,659,885,6863,26,36,Blocks start with No., and data rows follow.def run(sock, delay, zipobj): zf = zipfile.ZipFile(zipobj) for f in zf.namelist(): print(zf.filename) print("csv name: ", f) df = pd.read_csv(zf.open(f), skiprows=[0,1,2,3,4,5] #,"nrows=1435? (but for the next blocks?") print(df, '\n') date_pattern='%Y/%m/%d %H:%M' df['epoch'] = df.apply(lambda row: int(time.mktime(time.strptime(row.time,date_pattern))), axis=1) # create epoch as a column tuples=[] # data will be saved in a list formated_str='perf.type.serial.object.00.00.00.TOTAL_IOPS' for each_column in list(df.columns)[2:-1]: for e in zip(list(df['epoch']),list(df[each_column])): each_column=each_column.replace("X", '') #print(f"perf.type.serial.LDEV.{each_column}.TOTAL_IOPS",e) tuples.append((f"perf.type.serial.LDEV.{each_column}.TOTAL_IOPS",e)) package = pickle.dumps(tuples, 1) size = struct.pack('!L', len(package)) sock.sendall(size) sock.sendall(package) time.sleep(delay)Many thanks for help,
Load your file with pd.read_csv and create block at each time the row of your first column is No.. Use groupby to iterate over each block and create a new dataframe.data = pd.read_csv('data.csv', header=None)dfs = []for _, df in data.groupby(data[0].eq('No.').cumsum()): df = pd.DataFrame(df.iloc[1:].values, columns=df.iloc[0]) dfs.append(df.rename_axis(columns=None))Output:# First block>>> dfs[0] No. time 00:00:00 00:00:01 00:00:02 00:00:03 00:00:04 00:00:05 00:00:06 00:00:07 00:00:08 00:00:09 00:00:0A ...0 1 2021/09/12 02:16 235 610 345 997 446 130 129 94 555 274 4 NaN1 2 2021/09/12 02:17 364 210 371 341 294 87 179 106 425 262 3 NaN2 1434 2021/09/12 02:28 269 135 372 262 307 73 86 93 512 283 4 NaN3 1435 2021/09/12 02:29 281 207 688 322 233 75 69 85 663 276 2 NaN# Second block>>> dfs[1] No. time 00:00:10 00:00:11 00:00:12 00:00:13 00:00:14 00:00:15 00:00:16 00:00:17 00:00:18 00:00:19 00:00:1A ...0 1 2021/09/12 02:16 255 619 200 100 453 456 4 19 56 23 4 NaN1 2 2021/09/12 02:17 368 21 37 31 24 8 19 1006 4205 2062 30 NaN2 1434 2021/09/12 02:28 2689 1835 3782 2682 307 743 256 741 52 23 6 NaN3 1435 2021/09/12 02:29 2281 2047 6848 3522 2353 755 659 885 6863 26 36 NaNand so on.
Pandas - return value of column

I have a df with categories and thresholds:

    cat  t1  t2  t3  t4
    a     2   4   6   8
    b     3   5   7   0
    c     0   0   1   0

My end goal is to return the column name given a category and a score. I can select a row using a cat variable:

    df[df['cat'] == cat]

How do I now return the column name that is closest to the score (rounded down)? For example, (c, 3) -> t3.
You can compute the absolute difference to your value and get the index of the minimum with idxmin:value = 3cat = 'c'(df.set_index('cat') .loc[cat] .sub(value).abs() .idxmin() )Output: 't3'ensuring rounded downvalue = 1cat = 'a'out = ( df.set_index('cat') .loc[cat] .sub(value).abs() .idxmin() )x = df.set_index('cat').loc[cat,out]out = None if value < x else outprint(out)
Calculate standard deviation for groups of values using Python

My data looks similar to this:

    index  name  number  difference
    0      AAA     10       0
    1      AAA     20      10
    2      BBB      1       0
    3      BBB      2       1
    4      CCC      5       0
    5      CCC     10       5
    6      CCC     10.5     0.5

I need to calculate the standard deviation of the difference column based on groups of name. I tried

    data[['difference']].groupby(['name']).agg(['mean', 'std'])

and

    data["std"] = (data['difference'].groupby('name').std())

but both gave a KeyError for the variable that's passed to groupby(). I tried to resolve it with:

    data.columns = data.columns.str.strip()

but the error persists. Thanks in advance.
You can use groupby(['name']) on the full data frame first, and only apply the agg on the columns of interest:

    data = pd.DataFrame({'name': ['AAA', 'AAA', 'BBB', 'BBB', 'CCC', 'CCC', 'CCC'],
                         'number': [10, 20, 1, 2, 5, 10, 10.5],
                         'difference': [0, 10, 0, 1, 0, 5, 0.5]})

    data.groupby(['name'])['difference'].agg(['mean', 'std'])
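The KeyError in the question comes from grouping objects that no longer carry a name column: data[['difference']] keeps only difference, and data['difference'] is a plain Series, so the string 'name' cannot be resolved in either case. If the goal is a per-row std column rather than an aggregated table, a hedged sketch using transform (with the data frame defined above):

    # Broadcast each group's standard deviation back onto its rows.
    data['std'] = data.groupby('name')['difference'].transform('std')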
How to fix "pip is configured with locations that require TLS/SSL, however the ssl module in Python is not available" when installing Tensorflow? I was trying to install TensorFlow with Anaconda 3.9.9.I ran the commandpip install tensorflowand there was an error saying:WARNING: pip is configured with locations that require TLS/SSL, however the ssl module in Python is not available.WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /simple/tensorflow/Could not fetch URL https://pypi.org/simple/tensorflow/: There was a problem confirming the ssl certificate: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url: /simple/tensorflow/ (Caused by SSLError("Can't connect to HTTPS URL because the SSL module is not available.")) - skippingERROR: Could not find a version that satisfies the requirement tensorflow (from versions: none)ERROR: No matching distribution found for tensorflowI have tried adding /anaconda3, /anaconda3/Scripts and /anaconda3/library/bin to the Path variable. I have also tried running the command:pip install --trusted-host pypi.org --trusted-host pypi.python.org --trusted-host files.pythonhosted.org tensorflowbut nothing seems to be working.Did I miss anything and are there any other solution?
Try upgrading pip first: pip install --upgrade pip. (Note that ssl itself is part of the Python standard library, so it cannot be installed with pip; this warning usually means the interpreter cannot find Anaconda's OpenSSL libraries.) Please create a virtual environment to install TensorFlow in Anaconda. Follow the commands below to install TensorFlow in a virtual environment:conda create -n tf tensorflow #Create a Virtual environment(tf).conda activate tf #Activate the Virtual environmentpip install tensorflow #install TensorFlow in it.Note: You need to activate the virtual environment each time you want to use TensorFlow.
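As a quick diagnostic (just a check, not part of the fix), you can confirm whether the interpreter you are running actually has a working ssl module; in the broken environments that trigger this warning the import itself fails:
import ssl
print(ssl.OPENSSL_VERSION)  # prints the linked OpenSSL version when ssl is available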
Python Pandas How to get rid of groupings with only 1 row? In my dataset, I am trying to get the margin between two values. The code below runs perfectly if the fourth race was not included. After grouping based on a column, it seems that sometimes, there will be only 1 value, therefore, no other value to get a margin out of. I want to ignore these groupings in that case. Here is my current code:import pandas as pddata = {'Name':['A', 'B', 'B', 'C', 'A', 'C', 'A'], 'RaceNumber': [1, 1, 2, 2, 3, 3, 4], 'PlaceWon':['First', 'Second', 'First', 'Second', 'First', 'Second', 'First'], 'TimeRanInSec':[100, 98, 66, 60, 75, 70, 75]}df = pd.DataFrame(data)print(df)def winning_margin(times): times = list(times) winner = min(times) times.remove(winner) return min(times) - winnerwinning_margins = df[['RaceNumber', 'TimeRanInSec']] \ .groupby('RaceNumber').agg(winning_margin)winning_margins.columns = ['margin']winners = df.loc[df.PlaceWon == 'First', :]winners = winners.join(winning_margins, on='RaceNumber')avg_margins = winners[['Name', 'margin']].groupby('Name').mean()avg_margins
How about returning a NaN if times does not have enough elements:import numpy as npdef winning_margin(times): if len(times) <= 1: # New code return np.NaN # New code times = list(times) winner = min(times) times.remove(winner) return min(times) - winneryour code runs with this change and seem to produce sensible results. But you can furthermore remove NaNs later if you want eg in this linewinning_margins = df[['RaceNumber', 'TimeRanInSec']] \ .groupby('RaceNumber').agg(winning_margin).dropna() # note the addition of .dropna()
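An alternative that matches the title more directly is to drop the single-runner races before aggregating; a sketch using GroupBy.filter on the same df:
df_multi = df.groupby('RaceNumber').filter(lambda g: len(g) > 1)  # keep only races with at least two runners
winning_margins = df_multi[['RaceNumber', 'TimeRanInSec']].groupby('RaceNumber').agg(winning_margin)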
Perform unique row operation after a groupby I have been stuck to a problem where I have done all the groupby operation and got the resultant dataframe as shown below but the problem came in last operation of calculation of one additional columnCurrent dataframe:code industry category count duration2 Retail Mobile 4 73 Retail Tab 2 333 Health Mobile 5 1032 Food TV 1 88The question: Want an additional column operation which calculates the ratio of count of industry 'retail' for the specific code column entryfor example: code 2 has 2 industry entry retail and food so operation column should have value 4/(4+1) = 0.8 and similarly for code3 as well as shown belowO/P:code industry category count duration operation2 Retail Mobile 4 7 0.83 Retail Tab 2 33 -3 Health Mobile 5 103 2/7 = 0.2852 Food TV 1 88 -Help on here as well that if I do just groupby I will miss out the information of category and duration also what would be better way to represent the output df there can been multiple industry and operation is limited to just retail
I can't think of a single operation. But the way via a dictionary should work. Oh, and in advance for the other answerers the code to create the example dataframe.st_l = [[2,'Retail','Mobile', 4, 7], [3,'Retail', 'Tab', 2, 33], [3,'Health', 'Mobile', 5, 103], [2,'Food', 'TV', 1, 88]]df = pd.DataFrame(st_l, columns= ['code','industry','category','count','duration'])And now my attempt:sums = df[['code', 'count']].groupby('code').sum().to_dict()['count']df['operation'] = df.apply(lambda x: x['count']/sums[x['code']], axis=1)
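A more vectorized variant of the same idea (a sketch, assuming the df built above) uses a groupby transform for the per-code totals and then keeps the ratio only on the Retail rows:
totals = df.groupby('code')['count'].transform('sum')                        # total count per code
df['operation'] = (df['count'] / totals).where(df['industry'] == 'Retail')   # NaN for non-Retail rows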
Applying a condition for all similar values within a column in a Pandas dataframe I have the following dataset in a pandas dataframe:Patient_ID Image_Type ... P001 PairedP001 PairedP001 PairedP001 CBCTP002 CBCTP002 CBCTP002 CBCTP002 CBCTP002 CBCTP002 CBCTP003 CBCT... ...So what im trying to do is to find whether the number of datapoints for each patient (Patient_ID) is equal to the number CBCT images taken for that patient.For example for Patient P002, the number CBCT images taken is equal to the number of datapoints. And for Patient P001, the number of CBCT images taken does not equal to the total number of datapoints for that partient. I would like to assign this condition to a new column; where value = 'Yes' if it is true and 'No' where it false.Please let me know if you need clarificartions with my question. Thanks.
IIUC:df[df["Image_Type"] == "CBCT"].groupby("Patient_ID").size() == df.groupby("Patient_ID").size()#Patient_ID#P001 False#P002 True#P003 True#dtype: boolI'm using df as Patient_ID Image_Type0 P001 Paired1 P001 Paired2 P001 Paired3 P001 CBCT4 P002 CBCT5 P002 CBCT6 P002 CBCT7 P002 CBCT8 P002 CBCT9 P002 CBCT10 P003 CBCT
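If you need the result as a 'Yes'/'No' column on every row of the original dataframe, as described in the question, a sketch using transforms on the same df:
import numpy as np
cbct_per_patient = df['Image_Type'].eq('CBCT').groupby(df['Patient_ID']).transform('sum')  # CBCT rows per patient
rows_per_patient = df.groupby('Patient_ID')['Image_Type'].transform('count')               # all rows per patient
df['all_cbct'] = np.where(cbct_per_patient == rows_per_patient, 'Yes', 'No')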
Optimizing using only accuracy As I know we optimize our model with changing the weight parameters over the iterations.The aim is to minimize the loss and maximize the accuracy.I don't understand why we using loss as parameter as well if we have accuracy as parameter.Can we use only accuracy and drop loss from our model?With accuracy we can also change the model weights?
In short, training a neural network is all about minimizing the difference between the intended result and the result the network actually gives. That difference is known as the cost/loss. The smaller the cost/loss, the closer the output is to the intended value, and the higher the accuracy. You cannot drop the loss and optimize on accuracy alone: accuracy is a step-like, non-differentiable quantity, so it gives the optimizer no gradient to tell it how to adjust the weights, whereas the loss is differentiable and does. I suggest you watch 3Blue1Brown's video series on neural networks on YouTube.
Train a model with a task and test it with another task? I have a data-frame consists of 3000 samples, n numbers of features, and two targets columns as follow:mydata: id, f1, f2, ..., fn, target1, target2 01, 23, 32, ..., 44, 0 , 1 02, 10, 52, ..., 11, 1 , 2 03, 66, 15, ..., 65, 1 , 0 ... 2000, 76, 32, ..., 17, 0 , 1Here, I have a multi-task learning problem (I am quite new in this domain) and I want to train a model/network with target1 and test it with target2.If we consider target1 and target2 as tasks, they might be related tasks but we do not know how much. So, I want to see how much we can use the model trained by task1 (target1) to predict task2 (target2).It seems, it is not possible since target1 is a binary class (0 and 1), but target2 has more than two values (0,1 and 2). Is there any way to handle this issue?
This is not called Multi-Task Learning but Transfer Learning. It would be multi-task learning if you had trained your model to predict both the target1 and target2.Yes, there are ways to handle this issue. The final layer of the model is just the classifier head that computes the final label from the previous layer. You can consider the output from the previous layer as embeddings of the datapoint and use this representation to train/fine-tune another model. You have to plug in another head, though, since you now have three classes.so in pseudo-code, you need something likemodel = remove_last_layer(model)model.add(<your new classification head outputting 3 classes>)model.train()you can then compare this approach to the baseline, where you train from scratch on target2 to analyze the transfer learning between these two tasks.
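A minimal Keras sketch of that pseudo-code, assuming the target1 model is an ordinary feed-forward network you have already trained; model_target1, X and the layer index are placeholders, not names from the question:
import tensorflow as tf
# everything except the old binary head becomes a feature extractor
feature_extractor = tf.keras.Model(inputs=model_target1.input, outputs=model_target1.layers[-2].output)
feature_extractor.trainable = False                      # optional: freeze the transferred weights
new_head = tf.keras.layers.Dense(3, activation='softmax')(feature_extractor.output)  # 3 classes for target2
model_target2 = tf.keras.Model(inputs=feature_extractor.input, outputs=new_head)
model_target2.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model_target2.fit(X, df['target2'], epochs=10)           # X = the feature columns f1..fn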
how to create a stacked bar chart indicating time spent on nest per day I have some data of an owl being present in the nest box. In a previous question you helped me visualize when the owl is in the box:In addition I created a plot of the hours per day spent in the box with the code below (probably this can be done more efficiently):import pandas as pdimport matplotlib.pyplot as plt# raw data indicating time spent in box (each row represents start and end time)time = pd.DatetimeIndex(["2021-12-01 18:08","2021-12-01 18:11", "2021-12-02 05:27","2021-12-02 05:29", "2021-12-02 22:40","2021-12-02 22:43", "2021-12-03 19:24","2021-12-03 19:27", "2021-12-06 18:04","2021-12-06 18:06", "2021-12-07 05:28","2021-12-07 05:30", "2021-12-10 03:05","2021-12-10 03:10", "2021-12-10 07:11","2021-12-10 07:13", "2021-12-10 20:40","2021-12-10 20:41", "2021-12-12 19:42","2021-12-12 19:45", "2021-12-13 04:13","2021-12-13 04:17", "2021-12-15 04:28","2021-12-15 04:30", "2021-12-15 05:21","2021-12-15 05:25", "2021-12-15 17:40","2021-12-15 17:44", "2021-12-15 22:31","2021-12-15 22:37", "2021-12-16 04:24","2021-12-16 04:28", "2021-12-16 19:58","2021-12-16 20:09", "2021-12-17 17:42","2021-12-17 18:04", "2021-12-17 22:19","2021-12-17 22:26", "2021-12-18 05:41","2021-12-18 05:44", "2021-12-19 07:40","2021-12-19 16:55", "2021-12-19 20:39","2021-12-19 20:52", "2021-12-19 21:56","2021-12-19 23:17", "2021-12-21 04:53","2021-12-21 04:59", "2021-12-21 05:37","2021-12-21 05:39", "2021-12-22 08:06","2021-12-22 17:22", "2021-12-22 20:04","2021-12-22 21:24", "2021-12-22 21:44","2021-12-22 22:47", "2021-12-23 02:20","2021-12-23 06:17", "2021-12-23 08:07","2021-12-23 16:54", "2021-12-23 19:36","2021-12-23 23:59:59", "2021-12-24 00:00","2021-12-24 00:28", "2021-12-24 07:53","2021-12-24 17:00", ])# create dataframe with column indicating presence (1) or absence (0)time_df = pd.DataFrame(data={'present':[1,0]*int(len(time)/2)}, index=time)# calculate interval length and add to time_dftime_df['interval'] = time_df.index.to_series().diff().astype('timedelta64[m]')# add column with day to time_dftime_df['day'] = time.day#select only intervals where owl is present timeinbox = time_df.iloc[1::2, :]interval = timeinbox.intervalday = timeinbox.day# sum multiple intervals per dayinterval_tot = [interval[0]]day_tot = [day[0]]for i in range(1, len(day)): if day[i] == day[i-1]: interval_tot[-1] +=interval[i] else: day_tot.append(day[i]) interval_tot.append(interval[i])# recalculate to hours for i in range(len(interval_tot)): interval_tot[i] = interval_tot[i]/(60)plt.figure(figsize=(15, 5)) plt.grid(zorder=0)plt.bar(day_tot, interval_tot, color='g', zorder=3) plt.xlim([1,31])plt.xlabel('day in December')plt.ylabel('hours per day in nest box')plt.xticks(np.arange(1,31,1))plt.ylim([0, 24])Now I would like to combine all data in one plot by making a stacked bar chart, where each day is represented by a bar and each bar indicating for each of the 24*60 minutes whether the owl is present or not. Is this possible from the current data structure?
The data seems to have been created manually, so I have changed the format of the data presented. The approach I took was to create the time spent and the time not spent, with a continuous index of 1 minute intervals with the start and end time as the difference time and a flag of 1. Now to create non-stay time, I will create a time series index of start and end date + 1 at 1 minute intervals. Update the original data frame with the newly created index. This is the data for the graph. In the graph, based on the data frame extracted in days, create a color list with red for stay and green for non-stay. Then, in a bar graph, stack the height one. It may be necessary to consider grouping the data into hourly units.import pandas as pdimport numpy as npimport matplotlib.pyplot as pltfrom datetime import timedeltaimport iodata = '''start_time,end_time"2021-12-01 18:08","2021-12-01 18:11""2021-12-02 05:27","2021-12-02 05:29""2021-12-02 22:40","2021-12-02 22:43""2021-12-03 19:24","2021-12-03 19:27""2021-12-06 18:04","2021-12-06 18:06""2021-12-07 05:28","2021-12-07 05:30""2021-12-10 03:05","2021-12-10 03:10""2021-12-10 07:11","2021-12-10 07:13""2021-12-10 20:40","2021-12-10 20:41""2021-12-12 19:42","2021-12-12 19:45""2021-12-13 04:13","2021-12-13 04:17""2021-12-15 04:28","2021-12-15 04:30""2021-12-15 05:21","2021-12-15 05:25""2021-12-15 17:40","2021-12-15 17:44""2021-12-15 22:31","2021-12-15 22:37""2021-12-16 04:24","2021-12-16 04:28""2021-12-16 19:58","2021-12-16 20:09""2021-12-17 17:42","2021-12-17 18:04""2021-12-17 22:19","2021-12-17 22:26""2021-12-18 05:41","2021-12-18 05:44""2021-12-19 07:40","2021-12-19 16:55""2021-12-19 20:39","2021-12-19 20:52""2021-12-19 21:56","2021-12-19 23:17""2021-12-21 04:53","2021-12-21 04:59""2021-12-21 05:37","2021-12-21 05:39""2021-12-22 08:06","2021-12-22 17:22""2021-12-22 20:04","2021-12-22 21:24""2021-12-22 21:44","2021-12-22 22:47""2021-12-23 02:20","2021-12-23 06:17""2021-12-23 08:07","2021-12-23 16:54""2021-12-23 19:36","2021-12-24 00:00""2021-12-24 00:00","2021-12-24 00:28""2021-12-24 07:53","2021-12-24 17:00"'''df = pd.read_csv(io.StringIO(data), sep=',')df['start_time'] = pd.to_datetime(df['start_time'])df['end_time'] = pd.to_datetime(df['end_time'])time_df = pd.DataFrame()for idx, row in df.iterrows(): rng = pd.date_range(row['start_time'], row['end_time']-timedelta(minutes=1), freq='1min') tmp = pd.DataFrame({'present':[1]*len(rng)}, index=rng) time_df = time_df.append(tmp)date_add = pd.date_range(time_df.index[0].date(), time_df.index[-1].date()+timedelta(days=1), freq='1min')time_df = time_df.reindex(date_add, fill_value=0)time_df['day'] = time_df.index.dayimport matplotlib.pyplot as pltfig, ax = plt.subplots(figsize=(8,15))ax.set_yticks(np.arange(0,1500,60))ax.set_ylim(0,1440)ax.set_xticks(np.arange(1,25,1))days = time_df['day'].unique()for d in days: #if d == 1: day_df = time_df.query('day == @d') colors = [ 'r' if p == 1 else 'g' for p in day_df['present']] for i in range(len(day_df)): ax.bar(d, height=1, width=0.5, bottom=i+1, color=colors[i])plt.show()
pandas rolling on specific column I'm trying something very simple, seemingly at least which is to do a rolling sum on a column of a dataframe. See minimal example below :df = pd.DataFrame({"Col1": [10, 20, 15, 30, 45], "Col2": [13, 23, 18, 33, 48], "Col3": [17, 27, 22, 37, 52]})df['dt'] = pd.date_range("2020-01-01", "2020-01-05")indexCol1Col2Col3dt.01013172020-01-0112023272020-01-0221518222020-01-0333033372020-01-0444548522020-01-05If I rundf['sum2']=df['Col1'].rolling(window="3d", min_periods=2, on=df['dt']).sum()then instead of getting what I'm hoping which is a rolling sum on column 1, I get this traceback. If I switch the index to the dt field value it works if I removed the on=df['dt'] param. I've tried on='dt' also with no luck.This is the error message I get :...ValueError: invalid on specified as 0 2020-01-011 2020-01-022 2020-01-033 2020-01-044 2020-01-05Name: dt, dtype: datetime64[ns], must be a column (of DataFrame), an Index or NoneAnything I'm overlooking?thanks!
The correct syntax is:df['sum2'] = df.rolling(window="3d", min_periods=2, on='dt')['Col1'].sum()print(df)# Output: Col1 Col2 Col3 dt sum20 10 13 17 2020-01-01 NaN1 20 23 27 2020-01-02 30.02 15 18 22 2020-01-03 45.03 30 33 37 2020-01-04 65.04 45 48 52 2020-01-05 90.0Your error is to extract the columns Col1 at first so the column dt does not exist when rolling.>>> df['Col1'] # the column 'dt' does not exist anymore.0 101 202 153 304 45Name: Col1, dtype: int64
how add new column with column names based on conditioned values? I have a table that contains active cases of covid per country for period of time. The columns are country name and dates.I need to find the max value of active cases per country and the corresponding date of the max values. I have created a list of max values but cant manage to create a column with the corresponding date.I have written the following loop, but it returns only one date (the last one - [5/2/20]):for row in active_cases_data[column]: if row in max_cases: active_cases_data['date'] = columnscreenshot of df and resulting columntable looks like this:country4/29/204/30/205/1/205/2/20Italy67105250240I need extra column of date for the largest number for the row(in Italy case its the 5/1/20 for value = 250) like this:country4/29/204/30/205/1/205/2/20dateItaly671052502405/1/20
In pandas we are trying not to use python loops, unless we REALLY need them.I suppose that your dataset looks something like that:df = pd.DataFrame({"Country": ["Poland", "Ukraine", "Czechia", "Russia"], "2021.12.30": [12, 23, 43, 43], "2021.12.31": [15, 25, 40, 50], "2022.01.01": [18, 27, 41, 70], "2022.01.02": [21, 22, 42, 90]})# Country 2021.12.30 2021.12.31 2022.01.01 2022.01.02#0 Poland 12 15 18 21#1 Ukraine 23 25 27 22#2 Czechia 43 40 41 42#3 Russia 43 50 70 90Short way:You use idxmax(), after excluding column with name:df['Date'] = df.loc[:, df.columns != "Country"].idxmax(axis=1)# Country 2021.12.30 2021.12.31 2022.01.01 2022.01.02 Date#0 Poland 12 15 18 21 2022.01.02#1 Ukraine 23 25 27 22 2022.01.01#2 Czechia 43 40 41 42 2021.12.30#3 Russia 43 50 70 90 2022.01.02You just have to be aware of running this line multiple times - it tooks every column (except of excluded one - "Country").Long way:First, I would transform the data from wide to long table:df2 = df.melt(id_vars="Country", var_name = "Date", value_name = "Cases")# Country Date Cases#0 Poland 2021.12.30 12#1 Ukraine 2021.12.30 23#2 Czechia 2021.12.30 43#3 Russia 2021.12.30 43#4 Poland 2021.12.31 15#...#15 Russia 2022.01.02 90With the long table we can in many different ways find the needed rows, for example:df2 = df2.sort_values(by=["Country", "Cases", "Date"], ascending=[True, False, False])df2.groupby("Country").first().reset_index()# Country Date Cases#0 Czechia 2021.12.30 43#1 Poland 2022.01.02 21#2 Russia 2022.01.02 90#3 Ukraine 2022.01.01 27By setting the last position in ascending parameter you could manipulate which date should be used in case of a tie.
Multiple plots from function Matplotlib (Adjusted to suggestions)I already have a function that performs some plot:def plot_i(Y, ax = None): if ax == None: ax = plt.gca() fig = plt.figure() ax.plot(Y) plt.close(fig) return figAnd I wish to use this to plot in a grid for n arrays. Let's assume the grid is (n // 2, 2) for simplicity and that n is even. At the moment, I came up with this:def multi_plot(Y_arr, function): n = len(Y_arr) fig, ax = plt.subplots(n // 2, 2) for i in range(n): # assign to one axis a call of the function = plot_i that draws a plot plt.close(fig) return figUnfortunately, what I get if I do something like:# inside the loopplot_i(Y[:, i], ax = ax[k,j])Is correct but I need to close figures each time at the end, otherwise I keep on adding figures to plt.Is there any way I can avoid calling each time plt.close(fig)?
If I understand correctly, you are looking for something like this:import numpy as npimport matplotlib.pyplot as pltdef plot_i(Y, ax=None): if ax == None: ax = plt.gca() ax.plot(Y) returndef multi_plot(Y_arr, function, n_cols=2): n = Y_arr.shape[1] fig, ax = plt.subplots(n // n_cols + (1 if n % n_cols else 0), n_cols) for i in range(n): # assign to one axis a call of the function = plot_i that draws a plot function(Y_arr[:, i], ax = ax[i//n_cols, i%n_cols]) return figif __name__ == '__main__': x = np.linspace(0,12.6, 100) # let's create some fake data data = np.exp(-np.linspace(0,.5, 14)[np.newaxis, :] * x[:, np.newaxis]) * np.sin(x[:, np.newaxis]) fig = multi_plot(data, plot_i, 3)Be careful when using gca(): it will create a new figure if there is no figure active.
Is it possible in numpy array to add rows with different length and then add elements to that rows in python? Python Version: 3.7.11numpy Version: 1.21.2I want to have a numpy array, something like below:[ ["Hi", "Anne"], ["How", "are", "you"], ["fine"]]But the process of creating this numpy array is not simple and it's as follows:# code block 1 At the beginning we have an empty numpy array.First loop:# code block 2 row is added in this first loop orin this loop we understand that we need a new row.A loop inside of the first loop:# code block 3 elements of that row will be added in this inner loop.Assume that:the number of iterations is not specified, I mean:the number of columns of each row is different andwe don't know the number of rows that we want to add to numpy array.Maybe bellow code example will help me get my point across:a = [["Hi", "Anne"], ["How", "are", "you"], ["fine"]]# code block 1: code for creating empty numpy arrayfor row in a: # code block 2: code for creating empty row for element in row: # code block 3: code for appending element to that row or last row Question:Is it possible to create a numpy array with these steps (code block #1, #2, #3)?If yes, how?
Numpy arrays are not optimised for rows of inconsistent length, so using them this way is not good practice. You can only do it by making the array's elements generic Python objects (each row is stored as a list object) rather than a regular 2-D array of strings. But like I said, numpy is not the way to go for this.a = numpy.array([["Hi", "Anne"], ["How", "are", "you"], ["fine"]], dtype=object)
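For the three code blocks described in the question, a plain list of lists handles ragged rows naturally; a minimal sketch:
a = [["Hi", "Anne"], ["How", "are", "you"], ["fine"]]
result = []                      # code block 1: start with an empty container
for row in a:
    new_row = []                 # code block 2: a new, initially empty row
    for element in row:
        new_row.append(element)  # code block 3: append each element to that row
    result.append(new_row)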
Why numpy argmax() not getting the index? so I'm really new with data analysis and numpy library, and just playing around with the builtin function.I have this on top of my file import numpy as npnew_arr = np.arange(25)print new_arr.argmax()which should print out the index of the maximum value, not the value it self. But it keeps on giving me 24.As what I understand max() gives you the maximum value, while argmax() gives you the index of the maximum value.
np.arange starts from zero (unless you give it a different start), and indexing is also 0-based. So in np.arange(25), the 0th element is zero, the 1th element is 1, etc. So every element is the same number as its index. So the maximum value is 24 and its index is also 24.
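A quick example where the value and its index differ makes the distinction visible:
import numpy as np
arr = np.array([5, 1, 9, 3])
print(arr.max())     # 9, the maximum value
print(arr.argmax())  # 2, the index at which that maximum occurs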
Python - how to correctly index numpy array with other numpy arrays, similarly to MATLAB I'm trying to learn python after years of using MATLAB and this is something I'm really stuck with. I have an array, say 10 by 8. I want to find rows that have value 3 in the first column and take columns "2:" in that row. What I do is:newArray = oldArray[np.asarray(np.where(oldArray[:,0] == 3)), 2:]But that creates a 3-dimensional array with first dimension 1, instead of 2-dimensional array. I'm trying to achieve MATLAB equivalent of newArray = oldArray(find(oldArray(:,1)==3),3:end);Anyone have any thoughts on how to do that? Thank you!
Slice the first column and compare against 3 to give us a mask for selecting rows. After selecting rows by indexing into the first axis/rows of a 2D array of the input array, we need to select the columns (second axis of array). On your MATLAB code, you have 3:end, which would translate to 2: on NumPy. In MATLAB, you need to specify the end index, in NumPy you don't. So, it simplifies to 2:, as compared to 3:end on MATLAB.Thus, the code would be -oldArray[oldArray[:,0]==3,2:]Sample run -In [352]: aOut[352]: |===============>|array([[1, 0, 4, 2, 0, 1, 3, 2], [1, 0, 0, 3, 2, 3, 4, 4], [1, 2, 1, 4, 4, 0, 4, 2], [0, 2, 0, 3, 2, 2, 1, 2], [1, 2, 3, 3, 1, 0, 0, 1], [3, 4, 2, 4, 2, 0, 3, 4], <== [3, 1, 1, 0, 0, 1, 2, 0], <== [2, 0, 4, 3, 1, 3, 1, 1], [4, 3, 1, 3, 1, 3, 4, 4], [2, 0, 2, 0, 3, 1, 1, 1]])In [353]: a[a[:,0]==3,2:]Out[353]: array([[2, 4, 2, 0, 3, 4], [1, 0, 0, 1, 2, 0]])Reviewing your code -Your code was -In [359]: a[np.asarray(np.where(a[:,0] == 3)), 2:]Out[359]: array([[[2, 4, 2, 0, 3, 4], [1, 0, 0, 1, 2, 0]]])That works too, but creates a 3D array as listed in the question.Dissecting into it -In [361]: np.where(a[:,0] == 3)Out[361]: (array([5, 6]),)We see np.where is a tuple of arrays, which are the row and column indices. For a slice of 1D, you won't have both rows and columns, but just one array of indices.In MATLAB, find gives you an array of indices, so there's less confusion ->> aa = 3 4 3 3 2 5 5 2 2 2 2 3 5 3 4 4 4 3 4 2 3 2 4 2>> find(a(:,1)==3)ans = 1 6So, to get those indices, get the first array out of it -In [362]: np.where(a[:,0] == 3)[0]Out[362]: array([5, 6])Use it to index into the first axis and then slice the column from 2 onwards -In [363]: a[np.where(a[:,0] == 3)[0]]Out[363]: array([[3, 4, 2, 4, 2, 0, 3, 4], [3, 1, 1, 0, 0, 1, 2, 0]])In [364]: a[np.where(a[:,0] == 3)[0],2:]Out[364]: array([[2, 4, 2, 0, 3, 4], [1, 0, 0, 1, 2, 0]])That gives you the expected output.Word of cautionOne needs to be careful while indexing into axes with masks or integers.In theory, the column-indexing there should be equivalent of indexing with [2,3,4,5,6,7] for a of 8 columns.Let's try that -In [370]: a[a[:,0]==3,[2,3,4,5,6,7]]....IndexError: shape mismatch: indexing arrays could ... not be broadcast together with shapes (2,) (6,) We are triggering broadcastable indexing there. The elements for indexing into the two axes are of different lengths and are not broadcastable.Let's verify that. The array for indexing into rows -In [374]: a[:,0]==3Out[374]: array([False, False, False, False, False, True, True, False, False, False], dtype=bool)Essentially that's an array of two elements, as there are two True elems -In [375]: np.where(a[:,0]==3)[0]Out[375]: array([5, 6])The array for indexing into columns was [2,3,4,5,6,7], which was of length 6 and thus are not broadcastable against the row indices.To get to our desired target of selecting row IDs : 5,6 and for each of those rows select column IDs 2,3,4,5,6,7, we could create open meshes with np._ix that are broadcastable, like so -In [376]: np.ix_(a[:,0]==3, [2,3,4,5,6,7])Out[376]: (array([[5], [6]]), array([[2, 3, 4, 5, 6, 7]]))Finally, index into input array with those for the desired o/p -In [377]: a[np.ix_(a[:,0]==3, [2,3,4,5,6,7])]Out[377]: array([[2, 4, 2, 0, 3, 4], [1, 0, 0, 1, 2, 0]])
Using apply on pandas dataframe with strings without looping over series I have a pandas DataFrame filled with strings. I would like to apply a string operation to all entries, for example capitalize(). I know that for a series we can use series.str.capitlize(). I also know that I can loop over the column of the Dataframe and do this for each of the columns. But I want something more efficient and elegant, without looping. Thanks
use stack + unstackstack makes a dataframe with a single level column index into a series. You can then perform your str.capitalize() and unstack to get back your original form.df.stack().str.capitalize().unstack()
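An equivalent alternative without reshaping is to apply the vectorized string method column by column (a sketch, assuming every column holds strings):
df = df.apply(lambda col: col.str.capitalize())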
Tensorflow - About mnist.train.next_batch() When I search about mnist.train.next_batch() I found thishttps://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/learn/python/learn/datasets/mnist.pyIn this code def next_batch(self, batch_size, fake_data=False, shuffle=True): """Return the next `batch_size` examples from this data set.""" if fake_data: fake_image = [1] * 784 if self.one_hot: fake_label = [1] + [0] * 9 else: fake_label = 0 return [fake_image for _ in xrange(batch_size)], [ fake_label for _ in xrange(batch_size) ]start = self._index_in_epoch# Shuffle for the first epochif self._epochs_completed == 0 and start == 0 and shuffle: perm0 = numpy.arange(self._num_examples) numpy.random.shuffle(perm0) self._images = self.images[perm0] self._labels = self.labels[perm0]# Go to the next epochif start + batch_size > self._num_examples: # Finished epoch self._epochs_completed += 1 # Get the rest examples in this epoch rest_num_examples = self._num_examples - start images_rest_part = self._images[start:self._num_examples] labels_rest_part = self._labels[start:self._num_examples] # Shuffle the data if shuffle: perm = numpy.arange(self._num_examples) numpy.random.shuffle(perm) self._images = self.images[perm] self._labels = self.labels[perm] # Start next epoch start = 0 self._index_in_epoch = batch_size - rest_num_examples end = self._index_in_epoch images_new_part = self._images[start:end] labels_new_part = self._labels[start:end] return numpy.concatenate((images_rest_part, images_new_part), axis=0) , numpy.concatenate((labels_rest_part, labels_new_part), axis=0)else: self._index_in_epoch += batch_size end = self._index_in_epoch return self._images[start:end], self._labels[start:end]I know that mnist.train.next_batch(batch_size=100) means it randomly pick 100 data from MNIST dataset. Now, Here's my questionWhat is shuffle=true means?If I set next_batch(batch_size=100,fake_data=False, shuffle=False) then it picks 100 data from the start to the end of MNIST dataset sequentially? Not randomly?
Re 1, when shuffle=True the order of examples in the data is randomized. Re 2, yes, it should respect whatever order the examples have in the numpy arrays.
Can my numba code be faster than numpy I am new to Numba and am trying to speed up some calculations that have proved too unwieldy for numpy. The example I've given below compares a function containing a subset of my calculations using a vectorized/numpy and numba versions of the function the latter of which was also tested as pure python by commenting out the @autojit decorator. I find that the numba and numpy versions give similar speed ups relative to the pure python, both of which are about a factor of 10 speed improvement.The numpy version was actually slightly faster than my numba function but because of the 4D nature of this calculation I quickly run out of memory when the arrays in the numpy function are sized much larger than this toy example. This speed up is nice but I have often seen speed ups of >100x on the web when moving from pure python to numba.I would like to know if there is a general expected speed increase when moving to numba in nopython mode. I would also like to know if there are any components of my numba-ized function that would be limiting further speed increases.import numpy as np from timeit import default_timer as timer from numba import autojit import math def vecRadCalcs(slope, skyz, solz, skya, sola): nloc = len(slope) ntime = len(solz) [lenz, lena] = skyz.shape asolz = np.tile(np.reshape(solz,[ntime,1,1,1]),[1,nloc,lenz,lena]) asola = np.tile(np.reshape(sola,[ntime,1,1,1]),[1,nloc,lenz,lena]) askyz = np.tile(np.reshape(skyz,[1,1,lenz,lena]),[ntime,nloc,1,1]) askya = np.tile(np.reshape(skya,[1,1,lenz,lena]),[ntime,nloc,1,1]) phi1 = np.cos(asolz)*np.cos(askyz) phi2 = np.sin(asolz)*np.sin(askyz)*np.cos(askya- asola) phi12 = phi1 + phi2 phi12[phi12> 1.0] = 1.0 phi = np.arccos(phi12) return(phi) @autojit def RadCalcs(slope, skyz, solz, skya, sola, phi): nloc = len(slope) ntime = len(solz) pop = 0.0 [lenz, lena] = skyz.shape for iiT in range(ntime): asolz = solz[iiT] asola = sola[iiT] for iL in range(nloc): for iz in range(lenz): for ia in range(lena): askyz = skyz[iz,ia] askya = skya[iz,ia] phi1 = math.cos(asolz)*math.cos(askyz) phi2 = math.sin(asolz)*math.sin(askyz)*math.cos(askya- asola) phi12 = phi1 + phi2 if phi12 > 1.0: phi12 = 1.0 phi[iz,ia] = math.acos(phi12) pop = pop + 1 return(pop) zenith_cells = 90 azim_cells = 360 nloc = 10 # nominallly ~ 700 ntim = 10 # nominallly ~ 200000 slope = np.random.rand(nloc) * 10.0 solz = np.random.rand(ntim) *np.pi/2.0 sola = np.random.rand(ntim) * 1.0*np.pi base = np.ones([zenith_cells,azim_cells]) skya = np.deg2rad(np.cumsum(base,axis=1)) skyz = np.deg2rad(np.cumsum(base,axis=0)*90/zenith_cells) phi = np.zeros(skyz.shape) start = timer() outcalc = RadCalcs(slope, skyz, solz, skya, sola, phi) stop = timer() outcalc2 = vecRadCalcs(slope, skyz, solz, skya, sola) stopvec = timer() print(outcalc) print(stop-start) print(stopvec-stop)
On my machine running numba 0.31.0, the Numba version is 2x faster than the vectorized solution. When timing numba functions, you need to run the function more than one time because the first time you're seeing the time of jitting the code + the run time. Subsequent runs will not include the overhead of jitting the functions time since Numba caches the jitted code in memory. Also, please note that your functions are not calculating the same thing -- you want to be careful that you're comparing the same things using something like np.allclose on the results.
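Concretely, for the timing code in the question that means calling the jitted function once before starting the timer (a sketch reusing the question's variables):
RadCalcs(slope, skyz, solz, skya, sola, phi)              # first call: compilation + run time
start = timer()
outcalc = RadCalcs(slope, skyz, solz, skya, sola, phi)    # second call: run time only
stop = timer()
print(stop - start)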
return streams for multiple securities in pandas Suppose I have a table which looks like this: Ticker Date ClosingPrice0 A 01-02-2010 11.41 A 01-03-2010 11.5 ...1000 AAPL 01-02-2010 6341001 AAPL 01-02-2010 635So, in other words, we have a sequence of timeseries spliced together one per ticker symbol. Now, I would like to generate a column of daily returns. If I had only one symbol, that would be very easy with the pandas pct_change() function, but how do I do it for multiple time series as above (I can do a sequence of groupbys, make each a dataframe, do the return computation, then splice them all together with pd.concat() but that does not seem optimal.
use groupbydf.set_index(['Ticker', 'Date']).ClosingPrice.groupby(level=0).pct_change()Ticker Date A 01-02-2010 NaN 01-03-2010 0.008772AAPL 01-02-2010 NaN 01-02-2010 0.001577Name: ClosingPrice, dtype: float64
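If you prefer to keep the flat index and assign the result straight back as a column, a sketch (SeriesGroupBy.pct_change needs a reasonably recent pandas version):
df['ret'] = df.groupby('Ticker')['ClosingPrice'].pct_change()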
How can I slice 2D array in python without Numpy module? What I wanna process is slice 2D array partially without numpy module like following example with numpy.and I want to know Time Complexity of Slicing Lists in python basic functionimport numpy as np A = np.array([ [1,2,3,4,5,6,7,8] for i in range(8)])n = len(A[0])x = int(n/2)TEMP = [[None]*2 for i in range(2)]for w in range(2): for q in range(2): TEMP[w][q] = A[w*x:w*x+x,q*x:q*x+x]for w in range(2): for q in range(2): print(TEMP[w][q])here is the result that i wanna get[[1 2 3 4] [1 2 3 4] [1 2 3 4] [1 2 3 4]][[5 6 7 8] [5 6 7 8] [5 6 7 8] [5 6 7 8]][[1 2 3 4] [1 2 3 4] [1 2 3 4] [1 2 3 4]][[5 6 7 8] [5 6 7 8] [5 6 7 8] [5 6 7 8]]Process finished with exit code 0
For the first question:A = [ [1,2,3,4,5,6,7,8] for i in range(8)]n = len(A[0])x = int(n/2)TEMP = [[None]*2 for i in range(2)]for w in range(2): for q in range(2): TEMP[w][q] = [item[q * x:(q * x) + x] for item in A[w * x:(w * x) + x]]for w in range(2): for q in range(2): print("{i}, {j}: {item}".format(i=w, j=q, item=repr(TEMP[w][q])))
Countvectorizer having words not in data I am new to sklearn and countvectorizer. Some weird behaviour is happening to me. Initializing the count vectorizerfrom sklearn.feature_extraction.text import CountVectorizercount_vect = CountVectorizer()document_mtrx = count_vect.fit_transform(df['description'])count_vect.vocabulary_count_vect.vocabulary_Out[28]:{u'viewscity': 36216, u'sizeexposed': 31584, u'rentalcontact': 29104, u'villagebldg': 36323,Getting the rows which contains the word rentalcontactdf[df['description'].str.contains('rentalcontact')]The number of rows returned is 0. Why is this the case ?
CountVectorizer has a parameter lowercase which defaults to True - most probably that's why you can't find those values.So try this:df[df['description'].str.lower().str.contains('rentalcontact')]# ^^^^^^^UPDATE:vocabulary_ : dictA mapping of terms to feature indices.u'rentalcontact': 29104 - means that 'rentalcontact' has an index 29104 in the list of features.I.e. vectorizer.get_feature_names()[29104] should return 'rentalcontact'
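Alternatively, if you want the vocabulary to keep the original casing, turn the lowercasing off when building the vectorizer:
count_vect = CountVectorizer(lowercase=False)
document_mtrx = count_vect.fit_transform(df['description'])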
Pandas DataFrame to drop rows in the groupby I have a DataFrame with three columns Date, Advertiser and ID. I grouped the data firsts to see if volumns of some Advertisers are too small (For example when count() less than 500). And then I want to drop those rows in the group table.df.groupby(['Date','Advertiser']).ID.count()The result likes this: Date Advertiser 2016-01 A 50000 B 50 C 4000 D 24000 2016-02 A 6800 B 7800 C 123 2016-03 B 1111 E 8600 F 500I want a result to be this: Date Advertiser 2016-01 A 50000 C 4000 D 24000 2016-02 A 6800 B 7800 2016-03 B 1111 E 8600Followed up question: How about if I want to filter out the rows in groupby in term of the total count() in date category. For example, I want to count() for a date larger than 15000. The table I want likes this: Date Advertiser 2016-01 A 50000 B 50 C 4000 D 24000 2016-02 A 6800 B 7800 C 123
You have a Series object after the groupby, which can be filtered based on value with a chained lambda filter:df.groupby(['Date','Advertiser']).ID.count()[lambda x: x >= 500]#Date Advertiser#2016-01 A 50000# C 4000# D 24000#2016-02 A 6800# B 7800#2016-03 B 1111# E 8600# F 500
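For the follow-up (keep every advertiser, but only for dates whose total count exceeds 15000), you can group the counts again by the Date level; a sketch:
counts = df.groupby(['Date', 'Advertiser']).ID.count()
date_totals = counts.groupby(level='Date').transform('sum')  # total count per date, broadcast to each row
counts[date_totals > 15000]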
Back propagation algorithm gets stuck on training AND function Here is an implementation of AND function with single neuron using tensorflow:def tf_sigmoid(x): return 1 / (1 + tf.exp(-x))data = [ (0, 0), (0, 1), (1, 0), (1, 1),]labels = [ 0, 0, 0, 1,]n_steps = 1000learning_rate = .1x = tf.placeholder(dtype=tf.float32, shape=[2])y = tf.placeholder(dtype=tf.float32, shape=None)w = tf.get_variable('W', shape=[2], initializer=tf.random_normal_initializer(), dtype=tf.float32)b = tf.get_variable('b', shape=[], initializer=tf.random_normal_initializer(), dtype=tf.float32)h = tf.reduce_sum(x * w) + boutput = tf_sigmoid(h)error = tf.abs(output - y)optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(error)sess.run(tf.initialize_all_variables())for step in range(n_steps): for i in np.random.permutation(range(len(data))): sess.run(optimizer, feed_dict={x: data[i], y: labels[i]})Sometimes it works perfectly, but on some parameters it gets stuck and doesn't want to learn. For example with these initial parameters:w = tf.Variable(initial_value=[-0.31199348, -0.46391705], dtype=tf.float32)b = tf.Variable(initial_value=-1.94877, dtype=tf.float32)it will hardly make any improvement in cost function. What am I doing wrong, maybe I should somehow adjust initialization of parameters?
Aren't you missing a mean(error) ?Your problem is the particular combination of the sigmoid, the cost function, and the optimizer.Don't feel bad, AFAIK this exact problem stalled the entire field for a few years.Sigmoid is flat when you're far from the middle, and You're initializing it with relatively large numbers, try /1000.So your abs-error (or square-error) is flat too, and the GradientDescent optimizer takes steps proportional to the slope.Either of these should fix it:Use cross-entropy for the error - it's convex.Use a better Optimizer, like Adam, who's step size is much less dependent on the slope. More on the consistency of the slope.Bonus: Don't roll your own sigmoid, use tf.nn.sigmoid, you'll get a lot fewer NaN's that way.Have fun!
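A minimal sketch of both suggestions wired into the question's TF1-style graph (same placeholders h, y and learning_rate as in the question; this is one possible variant, not the only fix):
error = tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=h)  # convex in the logit, no hand-rolled sigmoid
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(error)    # step size far less dependent on the local slope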
How to Pivot a table in csv horizontally in Python using Pandas df? I have data in this format - MonthYear HPI Div State_fips1-1993 105.45 7 52-1993 105.58 7 53-1993 106.23 7 54-1993 106.63 7 5Required Pivot Table as: Stafips 1-1993 2-1993 3-1993 4-19935 105.45 105.58 106.23 106.63(pretty new to pandas)
Use unstack or pivot:df1 = df.set_index(['State_fips', 'MonthYear'])['HPI'].unstack()MonthYear 1-1993 2-1993 3-1993 4-1993State_fips 5 105.45 105.58 106.23 106.63df1 = df.pivot(index='State_fips', columns='MonthYear', values='HPI')MonthYear 1-1993 2-1993 3-1993 4-1993State_fips 5 105.45 105.58 106.23 106.63But if duplicates, need aggregate with groupby or pivot_table, mean can be changed to sum, median, ...:print (df) MonthYear HPI Div State_fips0 1-1993 105.45 7 51 2-1993 105.58 7 52 3-1993 106.23 7 53 4-1993 100.00 7 5 <-duplicates same 4-1993, 54 4-1993 200.00 7 5 <-duplicates same 4-1993, 5df1 = df.pivot_table(index='State_fips', columns='MonthYear', values='HPI', aggfunc='mean')MonthYear 1-1993 2-1993 3-1993 4-1993State_fips 5 105.45 105.58 106.23 150.0 <- (100+200/2) = 150df1 = df.groupby(['State_fips', 'MonthYear'])['HPI'].mean().unstack()MonthYear 1-1993 2-1993 3-1993 4-1993State_fips 5 105.45 105.58 106.23 150.0 <- (100+200/2) = 150Last if need create column from index and remove columns name:df1 = df1.reset_index().rename_axis(None, axis=1)print (df1) State_fips 1-1993 2-1993 3-1993 4-19930 5 105.45 105.58 106.23 150.0
Dimensionality Reduction in Python (defining the variance threshold) Afternoon. I'm having some trouble with my script. Specifically, I'd like to keep the singular values and their corresponding eigenvectors when the sum of a subset of the eigenvalues is greater than .9*the sum of all the eigenvalues. So far Iv'e been able to use a for loop and append function that creates a list of tuples that represent the singular values and eigenvectors. However, when I try to nest an if statement within the for loop to meet the condition i break it. here's my code.o = np.genfromtxt (r"C:\Users\Python\Desktop\PCADUMMYDATADUMP.csv", delimiter=",")o_m=np.matrix(o)#We define the covariance matrix of our data accordingly This is the mean centered data approx#of the covariance matrix. def covariance_matrix(x): #create the mean centered data matrix. this is the data matrix minus the matrix augmented from the vector that represents the column average m_c_d=x-np.repeat(np.mean(x, axis=0,), len(x), axis=0) #we compute the matrix operations here m_c_c=np.multiply(1/((len(m_c_d)-1)),np.transpose(m_c_d)*m_c_d) return m_c_c#Define the correlation matrix for our mean adjsuted data matrixdef correlation_matrix(x): C_M = covariance_matrix(x) #matrix operation is diagonal(covariance_matrix)^-1/2*(covaraince_matrix)*diagonal(covariance_matrix)^-1/2 c_m=fractional_matrix_power(np.diag(np.diag(C_M)),-1/2)*C_M*fractional_matrix_power(np.diag(np.diag(C_M)),-1/2) return c_mdef s_v_d(x): C_M=covariance_matrix(x) #create arrays that hold the left singular vectors(u), the right singular vectors(v), and the singular values (s) u,s,v=np.linalg.svd(C_M) #not sure if we should keep this here but this is how we can grab the eigenvalues which are the sqares of the singular values eigenvalues=np.square(s) singular_array=[] for i in range(0,len(s)-1): if np.sum(singular_array,axis=1) < (.9*np.sum(s)): singular_pairs=[s[i],v[:,i]] singular_array.append(singular_pairs) else: break return np.sum(s,axis=0)specifically, consider the for and if loop after singular[array]. Thanks!
I think your singular_array with its "mixed" scalar/vector elements is a bit more than np.sum can handle. I'm not 100% sure but aren't the variances the squares of the singular values? In other words shouldn't you be using your eigenvalues for the decision?Anyway, here is a non-looping approach:part_sums = np.cumsum(eigenvalues)cutoff = np.searchsorted(part_sums, 0.9 * part_sums[-1])singular_array = list(zip(s[:cutoff], v[:, :cutoff]))Change eigenvalues to s if you think it's more appropriate.How it works:cumsum computes the running sum over eigenvalues. Its last element is therefore the total sum and we need only find the place where part_sums crosses 90% of that. This is what searchsorted does for us.Once we have the cutoff all that remains is applying it to the singular values and vectors and to form the pairs using zip.
Python pandas read dataframe from custom file format Using Python 3 and pandas 0.19.2I have a log file formatted this way:[Header1][Header2][Header3][HeaderN][=======][=======][=======][=======][Value1][Value2][Value3][ValueN][AnotherValue1][ValuesCanBeEmpty][][]......which is very much like a CSV excepted that each value is surrounded by [ and ] and there is no real delimiter.What would be the most efficient way to load that content into a pandas DataFrame ?
You can use read_csv with separator ][ which has to be escaped by \. Then replace columns and values and remove rows with all NaN by dropna:import pandas as pdfrom pandas.compat import StringIOtemp=u"""[Header1][Header2][Header3][HeaderN][=======][=======][=======][=======][Value1][Value2][Value3][ValueN][AnotherValue1][ValuesCanBeEmpty][][]"""#after testing replace 'StringIO(temp)' with 'filename.csv'df = pd.read_csv(StringIO(temp), sep="\]\[", engine='python')df.columns = df.columns.to_series().replace(['^\[', '\]$'],['',''], regex=True)df = df.replace(['^\[', '\]$', '=', ''], ['', '', np.nan, np.nan], regex=True)df = df.dropna(how='all')print (df) Header1 Header2 Header3 HeaderN1 Value1 Value2 Value3 ValueN2 AnotherValue1 ValuesCanBeEmpty NaN NaNprint (df.columns)Index(['Header1', 'Header2', 'Header3', 'HeaderN'], dtype='object')
Numpy C API - Using PyArray_Descr for array creation causes segfaults I'm trying to use the Numpy C API to create Numpy arrays in C++, wrapped in a utility class. Most things are working as expected, but whenever I try to create an array using one of the functions taking a PyArray_Descr*, the program instantly segfaults. What is the correct way to set up the PyArray_Descr for creation?An example of code which isn't working:PyMODINIT_FUNCPyInit_pysgm(){ import_array(); return PyModule_Create(&pysgmmodule);}// ....static PyAry zerosLike(PyAry const& array){ PyArray_Descr* descr = new PyArray_Descr; Py_INCREF(descr); // creation function steals a reference descr->type = 'H'; descr->type_num = NPY_UINT16; descr->kind = 'u'; descr->byteorder = '='; descr->alignment = alignof(std::uint16_t); descr->elsize = sizeof(std::uint16_t); std::vector<npy_intp> shape {array.shape().begin(), array.shape().end()}; // code segfaults after this line before entering PyAry constructor return PyAry(PyArray_Zeros(shape.size(), shape.data(), descr, 0));}(testing with uint16).I'm not setting the typeobj field, which may be the only problem, but I can't work out what the appropriate value of type PyTypeObject would be.Edit: This page lists the ScalarArray PyTypeObject instances for different types. Adding the linedescr->typeobj = &PyUShortArrType_Type;has not solved the problem.
Try using descr = PyArray_DescrFromType(NPY_UINT16);I've only recently been writing against the numpy C-API, but from what I gather the PyArray_Descr is basically the dtype from python-land. You should building these yourself and use the FromType macro if you can.
Frobenius normalization implementation in tensorflow I'm beginner in tensorflow and i want to apply Frobenius normalization on a tensor but when i searched i didn't find any function related to it in tensorflow and i couldn't implement it using tensorflow ops, i can implement it with numpy operations, but how can i do this using tensorflow ops only ??My implementation using numpy in pythondef Frobenius_Norm(tensor): x = np.power(tensor,2) x = np.sum(x) x = np.sqrt(x) return x
def frobenius_norm_tf(M): return tf.reduce_sum(M ** 2) ** 0.5
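If your TensorFlow version has it (tf.norm was added around TF 1.0), the built-in op gives the same value, because the Euclidean norm over all entries of a tensor equals the Frobenius norm:
x_norm = tf.norm(M)  # default: 2-norm over all elements, i.e. the Frobenius norm for a matrix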
using or to return two columns. Pandas def continent_af(): africa = df[df['cont'] == 'AF' or df['cont'] == 'af'] return africaprint(continent_af())So the first half of the second line returned what I wanted, but when I put the or function in, i am getting an error, which reads the truth value of a series is ambiguous. use a.empty(), a.bool(), a.any(), or a.all()any help would be much appreciated
Try:df[(df['cont'] == 'AF') | (df['cont'] == 'af')]
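Two equivalent alternatives that avoid the explicit | (sketches, assuming the cont column holds strings):
df[df['cont'].isin(['AF', 'af'])]       # explicit whitelist of accepted spellings
df[df['cont'].str.upper() == 'AF']      # case-insensitive comparison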
Trouble creating/manipulating Pandas DataFrame from given list of JSON records I have json records in the file json_data. I used pd.DataFrame(json_data) to make a new table, pd_json_data, using these records.pandas table pd_json_dataI want to manipulate pd_json_data to return a new table with primary key (url,hour), and then a column updated that contains a boolean value.hour is based on the number of checks. For example, if number of checks contains 378 at row 0, the new table should have the numbers 1 through 378 in hour, with True in updated if the number in hour is a number in positive checks. Any ideas for how I should approach this?
Updated AnswerMake fake datadf = pd.DataFrame({'number of checks': [5, 10, 300, 8], 'positive checks':[[1,3,10], [10,11], [9,200], [1,8,7]], 'url': ['a', 'b', 'c', 'd']})Output number of checks positive checks url0 5 [1, 3, 10] a1 10 [10, 11] b2 300 [9, 200] c3 8 [1, 8, 7] dIterate and create new dataframes, then concatenatedfs = []for i, row in df.iterrows(): hour = np.arange(1, row['number of checks'] + 1) df_cur = pd.DataFrame({'hour' : hour, 'url': row['url'], 'updated': np.in1d(hour, row['positive checks'])}) dfs.append(df_cur)df_final = pd.concat(dfs) hour updated url0 1 True a1 2 False a2 3 True a3 4 False a4 5 False a0 1 False b1 2 False b2 3 False b3 4 False b4 5 False b5 6 False b6 7 False b7 8 False b8 9 False b9 10 True b0 1 False c1 2 False cOld answerNow build new dataframedf1 = df[['url']].copy()df1['hour'] = df['number of checks'].map(lambda x: list(range(1, x + 1)))df1['updated'] = df.apply(lambda x: x['number of checks'] in x['positive checks'], axis=1)Output url hour updated0 a [1, 2, 3, 4, 5] False1 b [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] True2 c [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13,... False3 d [1, 2, 3, 4, 5, 6, 7, 8] True
Pandas groupby results on the same plot I am dealing with the following data frame (only for illustration, actual df is quite large): seq x1 y10 2 0.7725 0.21051 2 0.8098 0.34562 2 0.7457 0.54363 2 0.4168 0.76104 2 0.3181 0.87905 3 0.2092 0.54986 3 0.0591 0.63577 5 0.9937 0.53648 5 0.3756 0.76359 5 0.1661 0.8364Trying to plot multiple line graph for the above coordinates (x as "x1 against y as "y1").Rows with the same "seq" is one path, and has to be plotted as one separate line, like all the x, y coordinates corresponding the seq = 2 belongs to one line, and so on. I am able to plot them, but on a separate graphs, I want all the lines on the same graph, Using subplots, but not getting it right. import matplotlib as mplimport matplotlib.pyplot as plt%matplotlib notebookdf.groupby("seq").plot(kind = "line", x = "x1", y = "y1")This creates 100's of graphs (which is equal to the number of unique seq). Suggest me a way to obtain all the lines on the same graph.**UPDATE*To resolve the above problem, I implemented the following code: fig, ax = plt.subplots(figsize=(12,8)) df.groupby('seq').plot(kind='line', x = "x1", y = "y1", ax = ax) plt.title("abc") plt.show()Now, I want a way to plot the lines with specific colors. I am clustering path from seq = 2 and 5 in cluster 1; and path from seq = 3 in another cluster.So, there are two lines under cluster 1 which I want in red and 1 line under cluster 2 which can be green.How should I proceed with this?
You need to create the figure and axes before plotting and pass the same ax to every groupby plot, like in this example:import pandas as pdimport matplotlib.pylab as pltimport numpy as np# random dfdf = pd.DataFrame(np.random.randint(0,10,size=(25, 3)), columns=['ProjID','Xcoord','Ycoord'])# plot groupby results on the same canvas fig, ax = plt.subplots(figsize=(8,6))df.groupby('ProjID').plot(kind='line', x = "Xcoord", y = "Ycoord", ax=ax)plt.show()
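For the update about colouring lines by cluster, one option (a sketch, assuming the x1/y1/seq columns from the question) is to loop over the groups yourself and pass a colour per seq value:
colors = {2: 'red', 5: 'red', 3: 'green'}   # cluster 1 -> red, cluster 2 -> green
fig, ax = plt.subplots(figsize=(12, 8))
for seq, grp in df.groupby('seq'):
    grp.plot(kind='line', x='x1', y='y1', ax=ax, color=colors.get(seq, 'gray'), label='seq %s' % seq)
plt.title("abc")
plt.show()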
pandas diff() giving 0 value for first difference, I want the actual value instead I have df:Hour Energy Wh 1 4 2 6 3 94 15I would like to add a column that shows the per hour difference. I am using this:df['Energy Wh/h'] = df['Energy Wh'].diff().fillna(0)df1:Hour Energy Wh Energy Wh/h1 4 02 6 2 3 9 34 15 6However, the Hour 1 value is showing up as 0 in the Energy Wh/h column, whereas I would like it to show up as 4, like below:Hour Energy Wh Energy Wh/h1 4 42 6 2 3 9 34 15 6I have tried using np.where:df['Energy Wh/h'] = np.where(df['Hour'] == 1,df['Energy Wh'].diff().fillna(df['Energy Wh']),df['Energy Wh'].diff().fillna(0))but I am still getting a 0 value in the hour 1 row (df1), with no errors. How do I get the value in 'Energy Wh' for Hour 1 to be filled, instead of 0?
You can just fillna() with the original column, without using np.where:>>> df['Energy Wh/h'] = df['Energy Wh'].diff().fillna(df['Energy Wh'])>>> df Energy Wh Energy Wh/hHour 1 4 4.0 2 6 2.0 3 9 3.0 4 15 6.0
How do I read a bytearray from a CSV file using pandas? I have a csv file which has a column full of bytearrays. It looks like this:bytearray(b'\xf3\x90\x02\xff\xff\xff\xe0?')bytearray(b'\xf3\x90\x02\xff\xff\xff\xe0?')bytearray(b'\xf3\x00\x00\xff\xff\xff\xe0?')and so on. I tried to read this csv file using pandas.read_csv().df = pd.read_csv(filename, error_bad_lines=False)data = df.msgmsg is the name of the column with the bytearrays.But it doesn't look like this is a column full of bytearrays. When I pick out a column and try to print individual elements with print(data[1][1]), the output I get is y, which corresponds to the 1 position in bytearray.How can I import this particular column as a list of bytearrays?
You can pass a converter function to pandas.read_csv() to turn your bytearray into a bytearrayCode:from ast import literal_evaldef read_byte_arrays(bytearray_string): if bytearray_string.startswith('bytearray(') and \ bytearray_string.endswith(')'): return bytearray(literal_eval(bytearray_string[10:-1])) return bytearray_stringTest Code:from io import StringIOdata = StringIO(u'\n'.join([x.strip() for x in r""" data1,bytes,data2 1,bytearray(b'\xf3\x90\x02\xff\xff\xff\xe0?'),2 1,bytearray(b'\xf3\x90\x02\xff\xff\xff\xe0?'),2 1,bytearray(b'\xf3\x00\x00\xff\xff\xff\xe0?'),2""".split('\n')[1:-1]]))df = pd.read_csv(data, converters={'bytes': read_byte_arrays})print(df)Results: data1 bytes data20 1 [243, 144, 2, 255, 255, 255, 224, 63] 21 1 [243, 144, 2, 255, 255, 255, 224, 63] 22 1 [243, 0, 0, 255, 255, 255, 224, 63] 2
Cant iterate through multiple pandas series So I was attempting to iterate through two series that I obtained from a Pandas DF, and I found that I could not iterate through them to return numbers less than 280.000. I also realized that I could not iterate over lists either. Is there any way I can iterate over multiple lists, series, etc? thanks.Example below:two_series = df['GNP'], df['Population']def numb(): for i in two_series: if i < 280.000: print(i)
Currently, two_series is just a tuple with two elements, each of which is a Series. So when you loop through all the elements of two_series, i is the whole series, and you only loop twice. It doesn't make sense to ask if a Series is less than 280, so it throws an error.You could just concatenate the series, like this:two_series = df['GNP'].append(df['Population'])Or you could just add a second nested loop to go through each of the items in each series:for i in two_series: for entry in i: if entry < 280.000: print(entry)
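If the aim is simply to print every value below the threshold from both columns, a loop-free sketch (assuming pandas is imported as pd):
combined = pd.concat([df['GNP'], df['Population']])
print(combined[combined < 280.000])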
using python to project lat lon geometry to utm I have a dataframe with earthquake data called eq that has columns listing latitude and longitude. using geopandas I created a point column with the following:from geopandas import GeoSeries, GeoDataFramefrom shapely.geometry import Points = GeoSeries([Point(x,y) for x, y in zip(df['longitude'], df['latitude'])])eq['geometry'] = seq.crs = {'init': 'epsg:4326', 'no_defs': True}eqNow I have a geometry column with lat lon coordinates but I want to change the projection to UTM. Can anyone help with the transformation?
Latitude/longitude aren't really a projection, but sort of a default "unprojection". See this page for more details, but it probably means your data uses WGS84 or epsg:4326.Let's build a dataset and, before we do any reprojection, we'll define the crs as epsg:4326import geopandas as gpdimport pandas as pdfrom shapely.geometry import Pointdf = pd.DataFrame({'id': [1, 2, 3], 'population' : [2, 3, 10], 'longitude': [-80.2, -80.11, -81.0], 'latitude': [11.1, 11.1345, 11.2]})s = gpd.GeoSeries([Point(x,y) for x, y in zip(df['longitude'], df['latitude'])])geo_df = gpd.GeoDataFrame(df[['id', 'population']], geometry=s)# Define crs for our geodataframe:geo_df.crs = {'init': 'epsg:4326'} I'm not sure what you mean by "UTM projection". From the wikipedia page I see there are 60 different UTM projections depending on the area of the world. You can find the appropriate epsg code online, but I'll just give you an example with an arbitrary epsg code (epsg:3395, World Mercator) to show the mechanics; for UTM zone 33N, for example, you would use epsg:32633. How do you do the reprojection? You can easily get this info from the geopandas docs on projection. It's just one line:geo_df = geo_df.to_crs({'init': 'epsg:3395'})and the geometry isn't coded as latitude/longitude anymore: id population geometry0 1 2 POINT (-8927823.161620541 1235228.11420853)1 2 3 POINT (-8917804.407449147 1239116.84994171)2 3 10 POINT (-9016878.754255159 1246501.097746004)
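Applied to the earthquake frame from the question, a sketch (the correct UTM zone depends on the longitudes of your data; 32633 below is just zone 33N as an example):
eq_gdf = gpd.GeoDataFrame(eq, geometry='geometry', crs={'init': 'epsg:4326'})
eq_utm = eq_gdf.to_crs(epsg=32633)   # WGS 84 / UTM zone 33N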
Converting numpy array values into integers My values are currently showing as 1.00+e09 in an array (type float64). I would like them to show 1000000000 instead. Is this possible?
Make a sample arrayIn [206]: x=np.array([1e9, 2e10, 1e6])In [207]: xOut[207]: array([ 1.00000000e+09, 2.00000000e+10, 1.00000000e+06])We can convert to ints - except notice that the largest one is too large the default int32In [208]: x.astype(int)Out[208]: array([ 1000000000, -2147483648, 1000000])In [212]: x.astype(np.int64)Out[212]: array([ 1000000000, 20000000000, 1000000], dtype=int64)Writing a csv with the default format (float) (this is the default format regardless of the array dtype):In [213]: np.savetxt('text.txt',x)In [214]: cat text.txt1.000000000000000000e+092.000000000000000000e+101.000000000000000000e+06We can specify a format:In [215]: np.savetxt('text.txt',x, fmt='%d')In [216]: cat text.txt1000000000200000000001000000Potentially there are 3 issues:integer v float in the array itself, it's dtypedisplay or print of the arraywriting the array to a csv file
How to find minimum value every x values in an array? path = ("C:/Users/Calum/AppData/Local/Programs/Python/Python35-32/Python Programs/PV Data/Monthly Data/brunel-11-2016.csv")with open (path) as f: readCSV = csv.reader((islice(f, 0, 8352)), delimiter = ';') irrad_bru1 = [] for row in readCSV: irrad1 = row[1] irrad_bru1.append(irrad1)irrad_bru1 = ['0' if float(x)<0 else x for x in irrad_bru1]bru_arr1 = np.asarray(irrad_bru1).astype(np.float)rr_bru1 = -np.diff(bru_arr1)I want to find the minimum value in the array rr_bru1 every 200 entries how do I go about doing that?
You can use np.minimum.reduceat:np.minimum.reduceat(a, np.arange(0, len(a), 200))
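Applied to the array from the question (note that the final group covers fewer than 200 entries when the length is not a multiple of 200):
import numpy as np
block_mins = np.minimum.reduceat(rr_bru1, np.arange(0, len(rr_bru1), 200))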
Add multiple rows for each datetime in pandas dataframe name_coldatetime 2017-03-22 0.2I want to add multiple rows till the present date (2017-03-25) so that resulting dataframe looks like: name_coldatetime 2017-03-22 0.22017-03-23 0.02017-03-24 0.02017-03-25 0.0How do I add multiple rows for each datetime? I can get present date as from datetime import datetime, timedelta, datedate.today()
You can also use .resample() method:In [98]: dfOut[98]: name_coldatetime2017-03-22 0.2In [99]: df.loc[pd.to_datetime(pd.datetime.now().date())] = 0In [100]: dfOut[100]: name_coldatetime2017-03-22 0.22017-03-25 0.0In [101]: df.resample('D').bfill()Out[101]: name_coldatetime2017-03-22 0.22017-03-23 0.02017-03-24 0.02017-03-25 0.0
Pandas DataFrame to csv: Specifying decimal separator for mixed type I've found a somewhat strange behaviour when I create a Pandas DataFrame from lists and convert it to csv with a specific decimal separator.This works as expected:>>> import pandas as pd>>> a = pd.DataFrame([['a', 0.1], ['b', 0.2]])>>> a 0 10 a 0.11 b 0.2>>> a.to_csv(decimal=',', sep=' ')' 0 1\n0 a 0,1\n1 b 0,2\n'However, in this case the decimal separator is not set properly:>>> b = pd.DataFrame([['a', 'b'], [0.1, 0.2]])>>> b 0 10 a b1 0.1 0.2>>> b.to_csv(decimal=',', sep=' ')' 0 1\n0 a b\n1 0.1 0.2\n'When I transpose b in order to get a DataFrame like a the decimal separator is still not properly set:>>> b.T.to_csv(decimal=',', sep=' ')' 0 1\n0 a 0.1\n1 b 0.2\n'Why I am asking: In my program I have columns as individual lists (e.g. col1 = ['a', 'b'] and col2 = [0.1, 0.2], but the number and format of columns can vary) and I would like to convert them to csv with a specific decimal separator, so I'd like to have an output like' 0 1\n0 a 0,1\n1 b 0,2\n'
Use applymap and cast the float typed cells to str by checking explicitly for their type. Then, replace the decimal dot(.) with the comma (,) as each cell now constitutes a string and dump the contents to a csv file later.b.applymap(lambda x: str(x).replace(".", ",") if isinstance(x, float) else x).to_csv(sep=" ")# ' 0 1\n0 a b\n1 0,1 0,2\n'
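An alternative, assuming the columns really are available as separate lists as described: build the DataFrame column-wise so the numeric column keeps its float64 dtype, and then the decimal keyword of to_csv applies to it directly.

import pandas as pd
col1 = ['a', 'b']
col2 = [0.1, 0.2]
c = pd.DataFrame({0: col1, 1: col2})   # column 1 stays float64
c.to_csv(decimal=',', sep=' ')
# ' 0 1\n0 a 0,1\n1 b 0,2\n'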
DataFrame of soccer soccers into a league table so I'm going to re-write this question having spent some time trying to crack it today, and I think I'm doing okay so far.I have a soccer results database with this as the head(3) Date Season home visitor FT hgoal vgoal division tier totgoal goaldif result1993-04-12 1992 Arsenal Aston Villa 0-1 0 1 1 1 1 -1 A 1992-09-12 1992 Arsenal Blackburn Rovers 0-1 0 1 1 1 1 -1 A 1992-10-03 1992 Arsenal Chelsea 2-1 2 1 1 1 3 1 HI've written this code, which work:def my_table(season) : teams = season['home'].unique().tolist() table = [] for team in teams : home = season[season['home'] == team]['result'] hseq = dict(zip(*np.unique(home, return_counts=True))) away = season[season['visitor'] == team]['result'] aseq = dict(zip(*np.unique(away, return_counts=True))) team_dict = { "season" : season.iloc[0]['Season'], "team" : team, "home_pl" : sum(hseq.values()), "home_w" : hseq.get('H', 0), "home_d" : hseq.get('D', 0), "home_l" : hseq.get('A', 0), "home_gf" : season[season['home'] == team]['hgoal'].sum(), "home_ga" : season[season['home'] == team]['vgoal'].sum(), "home_gd" : season[season['home'] == team]['goaldif'].sum(), "home_pts" : hseq.get('H', 0) * 3 + hseq.get('D', 0), "away_pl" : sum(aseq.values()), "away_w" : aseq.get('A', 0), "away_d" : aseq.get('D', 0), "away_l" : aseq.get('H', 0), "away_gf" : season[season['visitor'] == team]['vgoal'].sum(), "away_ga" : season[season['visitor'] == team]['hgoal'].sum(), "away_gd" : (season[season['visitor'] == team]['goaldif'].sum() * -1), "away_pts" : aseq.get('A', 0) * 3 + hseq.get('D', 0) } team_dict["pl"] = team_dict["home_pl"] + team_dict['away_pl'] team_dict["w"] = team_dict["home_w"] + team_dict['away_w'] team_dict["d"] = team_dict["home_d"] + team_dict['away_d'] team_dict["l"] = team_dict["home_l"] + team_dict['away_l'] team_dict["gf"] = team_dict["home_gf"] + team_dict['away_gf'] team_dict["ga"] = team_dict["home_ga"] + team_dict['away_ga'] team_dict["gd"] = team_dict["home_gd"] + team_dict['away_gd'] team_dict["pts"] = team_dict["home_pts"] + team_dict['away_pts'] table.append(team_dict) return tableseasons = pl['Season'].unique().tolist()all_tables = []for season in seasons : table = my_table(pl[pl['Season'] == season]) all_tables += tabletbl = pd.DataFrame(all_tables) away = ['away_pl', 'away_w', 'away_d', 'away_l', 'away_gf', 'away_ga', 'away_gd', 'away_pts']home = ['home_pl', 'home_w', 'home_d', 'home_l', 'home_gf', 'home_ga', 'home_gd', 'home_pts']full = ['pl', 'w', 'd', 'l', 'gf', 'ga', 'gd', 'pts']team = ['team']tbl = tbl[['season', 'team']+home+away+full]So now 'tbl' is good, and I can index it by season. But I'm having trouble making it a multi-index which is by 'season' first and then by their points total (descending) which would be equivalent to their league finishing position. To be clear, I want the index to be 1-20 (or 1-22) but the index be driven by the points total. Also, if anyone has any thoughts on how I've gone about building the table itself, would love to hear it. I spent a long time trying to use various vectorized functions which I'm told are more efficient but couldn't get it to work and reverted to for loops.Thank you
Consider using GroupBy.rank or Series.rank to calculate teams by descending pts rank. Since I can not tell if your final dataframe is at season, team, or game level choose appropriate ranking:tbl['team_rank'] = tbl.groupby(['season', 'team'])['pts'].rank(ascending=False)tbl['team_rank'] = tbl['pts'].rank(ascending=False)Then use set_index on the pair of fields for the multindex with no need for prior sorting. tbl = tbl.set_index(['season', 'team_rank'])However, since you require multiple fields for ranking purposes, consider using a reset_index then retrieve the index.values to get the ordered number (+ 1 if you do not want to begin with zero):tbl = tbl.sort_values(['season', 'pts', 'gd', 'gf'], ascending=[True, False, False, False]).reset_index(drop=True)tbl['rank'] = tbl.index.values + 1tbl = tbl.set_index(['season', 'rank'])
numpy.all axis parameter misbehavior? I have a following array.a = np.array([[0, 5, 0, 5], [0, 9, 0, 9]])>>>a.shape Out[72]: (2, 4)>>>np.all(a,axis=0)Out[69]: array([False, True, False, True], dtype=bool)>>>np.all(a,axis=1)Out[70]: array([False, False], dtype=bool)Because axis 0 means the first axis(row-wise) in 2D array,I expected when np.all(a,axis=0) is given, it checks whether all element is True or not, per every row. But it seems like checking per column cause it gives output as 4 elements like array([False, True, False, True], dtype=bool).What am I misunderstanding about np.all functioning?
axis=0 means to AND the elements together along axis 0, so a[0, 0] gets ANDed with a[1, 0], a[0, 1] gets ANDed with a[1, 1], etc. The axis specified gets collapsed.You're probably thinking that it takes np.all(a[0]), np.all(a[1]), etc., selecting subarrays by indexing along axis 0 and performing np.all on each subarray. That's the opposite of how it works; that would collapse every axis but the one specified.With 2D arrays, there isn't much advantage for one convention over the other, but with 3D and higher, NumPy's chosen convention is much more useful.
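A small check makes the convention concrete — reducing along axis 0 is the same as ANDing the rows together elementwise, while axis=1 ANDs each row across its columns:

import numpy as np
a = np.array([[0, 5, 0, 5], [0, 9, 0, 9]])
np.all(a, axis=0)           # array([False,  True, False,  True])
np.logical_and(a[0], a[1])  # same result: a[0, j] AND a[1, j] for every column j
np.all(a, axis=1)           # array([False, False])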
Accessing years within a dataframe in Pandas I have a dataframe wherein there is a column of datetimes:rng = pd.date_range('1/1/2011', periods=4, freq='500D')print(rng)df = DataFrame(rng)which looks like this:I would like to find the mean year from this column, which would be 2012.75 (I would later round it).Towards this end, I can access an individual year usingdf[0].iloc[0].year which returns 2011...but to take a mean, I'd have to do this in a clumsy loop. Is there a way to do access these years, then take a mean, which is consistent with Pandas vectorized nature?
If you convert the column into a DatetimeIndex, then you can use its year attribute (which returns a NumPy array) and the array's mean method.In [104]: pd.DatetimeIndex(df[0]).year.mean()Out[104]: 2012.75Another way is to use the dt accessor (new in Pandas 0.15):In [132]: df[0].dt.year.mean()Out[132]: 2012.75Or, if you want to do some NumPy datetime64 wrangling:In [115]: (df[0].values.astype('<M8[Y]').astype('<i8')+1970).mean()Out[115]: 2012.75For all but small DataFrames, using pd.DatetimeIndex is fastest:In [144]: rng = pd.date_range('1/1/2011', periods=10**5, freq='500D')In [145]: df = pd.DataFrame(rng)In [147]: %timeit pd.DatetimeIndex(df[0]).year.mean()100 loops, best of 3: 4.5 ms per loopIn [146]: %timeit (df[0].values.astype('<M8[Y]').astype('<i8')+1970).mean()100 loops, best of 3: 5.14 ms per loopIn [148]: %timeit df[0].dt.year.mean()100 loops, best of 3: 5.18 ms per loop
Pandas groupby monthly + prorate I have a MultiIndex series:date xcs subdomain count2012-04-05 111-11 zero 102012-04-11 222-22 m 252012-04-11 111-11 zero 30Basically the first 3 columns form a unique index. I need to group by year-month+xcs+subdomain, but count needs to be summed-up, divided by the number of items in that group, and multiplied by 30. Thus for [2012-04, 111-11, zero] group from the above example, it would be (10 + 30)/2*30. I am guessing that this is identical to using average() function for each group, but would still need to multiply it by 30.Thanks!
One way is to do it like this:Setup your dummy dataframe:import pandas as pddata = """date xcs subdomain count2012-04-05 111-11 zero 102012-04-11 222-22 m 252012-04-11 111-11 zero 30"""df = pd.read_csv(pd.io.common.StringIO(data), sep="\s+")df['date'] = pd.to_datetime(df.date)df.set_index(['date', 'xcs', 'subdomain'], inplace=True)Groupby and apply .mean multiplying by 30:df['value'] = (df.groupby(level=['date', 'xcs', 'subdomain']).mean() * 30).dropna()dfYielding: count valuedate xcs subdomain 2012-04-05 111-11 zero 10 3002012-04-11 222-22 m 25 750 111-11 zero 30 900
Python pandas cumsum() reset after hitting max I have a pandas DataFrame with timedeltas as a cumulative sum of those deltas in a separate column expressed in milliseconds. An example is provided below:Transaction_ID Time TimeDelta CumSum[ms]1 00:00:04.500 00:00:00.000 0002 00:00:04.600 00:00:00.100 1003 00:00:04.762 00:00:00.162 2624 00:00:05.543 00:00:00.781 10435 00:00:09.567 00:00:04.024 50676 00:00:10.654 00:00:01.087 61547 00:00:14.300 00:00:03.646 98008 00:00:14.532 00:00:00.232 100329 00:00:16.500 00:00:01.968 1200010 00:00:17.543 00:00:01.043 13043I would like to be able to provide a maximum value for CumSum[ms] after which the cumulative sum would start over again at 0. For example, if the maximum value was 3000 in the above example, the results would look like so:Transaction_ID Time TimeDelta CumSum[ms]1 00:00:04.500 00:00:00.000 0002 00:00:04.600 00:00:00.100 1003 00:00:04.762 00:00:00.162 2624 00:00:05.543 00:00:00.781 10435 00:00:09.567 00:00:04.024 06 00:00:10.654 00:00:01.087 10877 00:00:14.300 00:00:03.646 08 00:00:14.532 00:00:00.232 2329 00:00:16.500 00:00:01.968 220010 00:00:17.543 00:00:01.043 0I have explored using the modulo operator, but am only successful in resetting back to zero when the resulting cumsum is equal to the limit provided (i.e. cumsum[ms] of 500 % 500 equals zero).Thanks in advance for any thoughts you may have, and please let me know if I can provide any more information.
Here's an example of how you might do this by iterating over each row in the dataframe. I created new data for the example for simplicity:df = pd.DataFrame({'TimeDelta': np.random.normal( 900, 60, size=100)})print df.head() TimeDelta0 971.0212951 734.3598612 867.0003973 992.1665394 853.281131So let's do an accumulator loop with your desired 3000 max:maxvalue = 3000lastvalue = 0newcum = []for row in df.iterrows(): thisvalue = row[1]['TimeDelta'] + lastvalue if thisvalue > maxvalue: thisvalue = 0 newcum.append( thisvalue ) lastvalue = thisvalueThen put the newcom list into the dataframe:df['newcum'] = newcumprint df.head() TimeDelta newcum0 801.977678 801.9776781 893.296429 1695.2741072 935.303566 2630.5776733 850.719497 0.0000004 951.554206 951.554206
numpy slice to return last two dimensions Basically I'm looking for a function or syntax that will allow me to get the first 'slice' of the last two dimensions of an n-dimensional numpy array with an arbitrary number of dimensions.I can do this but it's too ugly to live with, and what if someone sends a 6d array in? There must be a numpy function like the Ellipsis (...) that expands to 0,0,0,... instead of :,:,:,...data_2d = np.ones(5**2).reshape(5,5)data_3d = np.ones(5**3).reshape(5,5,5)data_4d = np.ones(5**4).reshape(5,5,5,5)def get_last2d(data): if data.ndim == 2: return data[:] if data.ndim == 3: return data[0, :] if data.ndim == 4: return data[0, 0, :]np.array_equal(get_last2d(data_3d), get_last2d(data_4d))Thanks,Colin
How about this,def get_last2d(data): if data.ndim <= 2: return data slc = [0] * (data.ndim - 2) slc += [slice(None), slice(None)] return data[tuple(slc)] # newer NumPy versions require a tuple rather than a list of indices here
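An equivalent trick (just a sketch, not from the original answer) is to index with a tuple of zeros for all the leading axes; NumPy then returns whatever dimensions are left, which here are the last two:

def get_last2d(data):
    if data.ndim <= 2:
        return data
    return data[(0,) * (data.ndim - 2)]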
replace integers in array Python They told me to post a new question to the second part of the question.Is there some way I can replace the first 8 integers in the multidimensional array with 8 integers of array that I created for example: import Image import numpy as np im = Image.open("C:\Users\Jones\Pictures\1.jpg") pix = im.load() array=[0, 3, 38, 13, 7, 18, 3, 715] r, g, b = np.array(im).T print r[0:8]
Try with this:r[0, :8] = arrayIt also looks like it would be worth reading the numpy docs on indexing.
pandas transform timeseries into multiple column DataFrame I have a timeseries of intraday day data looks like belowts =pd.Series(np.random.randn(60),index=pd.date_range('1/1/2000',periods=60, freq='2h'))I am hoping to transform the data into a DataFrame, with the columns as each date, and rows as the time in the date.I have tried these, key = lambda x:x.date()grouped = ts.groupby(key)But how do I transform the groups into date columned DataFrame? or is there any better way?
import pandas as pdimport numpy as npindex = pd.date_range('1/1/2000', periods=60, freq='2h')ts = pd.Series(np.random.randn(60), index = index)key = lambda x: x.time()groups = ts.groupby(key)print pd.DataFrame({k:g for k,g in groups}).resample('D').Tout: 2000-01-01 2000-01-02 2000-01-03 2000-01-04 2000-01-05 2000-01-06 \00:00:00 0.109959 -0.124291 -0.137365 0.054729 -1.305821 -1.928468 03:00:00 1.336467 0.874296 0.153490 -2.410259 0.906950 1.860385 06:00:00 -1.172638 -0.410272 -0.800962 0.568965 -0.270307 -2.046119 09:00:00 -0.707423 1.614732 0.779645 -0.571251 0.839890 0.435928 12:00:00 0.865577 -0.076702 -0.966020 0.589074 0.326276 -2.265566 15:00:00 1.845865 -1.421269 -0.141785 0.433011 -0.063286 0.129706 18:00:00 -0.054569 0.277901 0.383375 -0.546495 -0.644141 -0.207479 21:00:00 1.056536 0.031187 -1.667686 -0.270580 -0.678205 0.750386 2000-01-07 2000-01-08 00:00:00 -0.657398 -0.630487 03:00:00 2.205280 -0.371830 06:00:00 -0.073235 0.208831 09:00:00 1.720097 -0.312353 12:00:00 -0.774391 NaN 15:00:00 0.607250 NaN 18:00:00 1.379823 NaN 21:00:00 0.959811 NaN
Python double free error for huge datasets I have a very simple script in Python, but for some reason I get the following error when running a large amount of data:*** glibc detected *** python: double free or corruption (out): 0x00002af5a00cc010 ***I am used to these errors coming up in C or C++, when one tries to free memory that has already been freed. However, by my understanding of Python (and especially the way I've written the code), I really don't understand why this should happen. Here is the code:#!/usr/bin/python -tt import sys, commands, stringimport numpy as npimport scipy.io as iofrom time import clockW = io.loadmat(sys.argv[1])['W']size = W.shape[0]numlabels = int(sys.argv[2])Q = np.zeros((size, numlabels), dtype=np.double)P = np.zeros((size, numlabels), dtype=np.double)Q += 1.0 / Q.shape[1]nu = 0.001mu = 0.01start = clock()mat = -nu + mu*(W*(np.log(Q)-1))end = clock()print >> sys.stderr, "Time taken to compute matrix: %.2f seconds"%(end-start)One may ask, why declare a P and a Q numpy array? I simply do that to reflect the actual conditions (as this code is simply a segment of what I actually do, where I need a P matrix and declare it beforehand). I have access to a 192GB machine, and so I tested this out on a very large SciPy sparse matrix (2.2 million by 2.2 million, but very sparse, that's not the issue). The main memory is taken up by the Q, P, and mat matrices, as they are all 2.2 million by 2000 matrices (size = 2.2 million, numlabels = 2000). The peak memory goes up to 131GB, which comfortably fits in memory. While the mat matrix is being computed, I get the glibc error, and my process automatically goes into the sleep (S) state, without deallocating the 131GB it has taken up. Given the bizarre (for Python) error (I am not explicitly deallocating anything), and the fact that this works nicely for smaller matrix sizes (around 1.5 million by 2000), I am really not sure where to start to debug this. As a starting point, I have set "ulimit -s unlimited" before running, but to no avail.Any help or insight into numpy's behavior with really large amounts of data would be welcome. Note that this is NOT an out of memory error - I have 196GB, and my process reaches around 131GB and stays there for some time before giving the error below. Update: February 16, 2013 (1:10 PM PST):As per suggestions, I ran Python with GDB. 
Interestingly, on one GDB run I forgot to set the stack size limit to "unlimited", and got the following output:*** glibc detected *** /usr/bin/python: munmap_chunk(): invalid pointer: 0x00007fe7508a9010 ***======= Backtrace: =========/lib64/libc.so.6(+0x733b6)[0x7ffff6ec23b6]/usr/lib64/python2.7/site-packages/numpy/core/multiarray.so(+0x4a496)[0x7ffff69fc496]/usr/lib64/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x4e67)[0x7ffff7af48c7]/usr/lib64/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x309)[0x7ffff7af6c49]/usr/lib64/libpython2.7.so.1.0(PyEval_EvalCode+0x32)[0x7ffff7b25592]/usr/lib64/libpython2.7.so.1.0(+0xfcc61)[0x7ffff7b33c61]/usr/lib64/libpython2.7.so.1.0(PyRun_FileExFlags+0x84)[0x7ffff7b34074]/usr/lib64/libpython2.7.so.1.0(PyRun_SimpleFileExFlags+0x189)[0x7ffff7b347c9]/usr/lib64/libpython2.7.so.1.0(Py_Main+0x36c)[0x7ffff7b3e1bc]/lib64/libc.so.6(__libc_start_main+0xfd)[0x7ffff6e6dbfd]/usr/bin/python[0x4006e9]======= Memory map: ========00400000-00401000 r-xp 00000000 09:01 50336181 /usr/bin/python2.700600000-00601000 r--p 00000000 09:01 50336181 /usr/bin/python2.700601000-00602000 rw-p 00001000 09:01 50336181 /usr/bin/python2.700602000-00e5f000 rw-p 00000000 00:00 0 [heap]7fdf2584c000-7ffff0a66000 rw-p 00000000 00:00 0 7ffff0a66000-7ffff0a6b000 r-xp 00000000 09:01 50333916 /usr/lib64/python2.7/lib-dynload/mmap.so7ffff0a6b000-7ffff0c6a000 ---p 00005000 09:01 50333916 /usr/lib64/python2.7/lib-dynload/mmap.so7ffff0c6a000-7ffff0c6b000 r--p 00004000 09:01 50333916 /usr/lib64/python2.7/lib-dynload/mmap.so7ffff0c6b000-7ffff0c6c000 rw-p 00005000 09:01 50333916 /usr/lib64/python2.7/lib-dynload/mmap.so7ffff0c6c000-7ffff0c77000 r-xp 00000000 00:12 54138483 /home/avneesh/.local/lib/python2.7/site-packages/scipy/io/matlab/streams.so7ffff0c77000-7ffff0e76000 ---p 0000b000 00:12 54138483 /home/avneesh/.local/lib/python2.7/site-packages/scipy/io/matlab/streams.so7ffff0e76000-7ffff0e77000 r--p 0000a000 00:12 54138483 /home/avneesh/.local/lib/python2.7/site-packages/scipy/io/matlab/streams.so7ffff0e77000-7ffff0e78000 rw-p 0000b000 00:12 54138483 /home/avneesh/.local/lib/python2.7/site-packages/scipy/io/matlab/streams.so7ffff0e78000-7ffff0e79000 rw-p 00000000 00:00 0 7ffff0e79000-7ffff0e9b000 r-xp 00000000 00:12 54138481 /home/avneesh/.local/lib/python2.7/site-packages/scipy/io/matlab/mio5_utils.so7ffff0e9b000-7ffff109a000 ---p 00022000 00:12 54138481 /home/avneesh/.local/lib/python2.7/site-packages/scipy/io/matlab/mio5_utils.so7ffff109a000-7ffff109b000 r--p 00021000 00:12 54138481 /home/avneesh/.local/lib/python2.7/site-packages/scipy/io/matlab/mio5_utils.so7ffff109b000-7ffff109f000 rw-p 00022000 00:12 54138481 /home/avneesh/.local/lib/python2.7/site-packages/scipy/io/matlab/mio5_utils.so7ffff109f000-7ffff10a0000 rw-p 00000000 00:00 0 7ffff10a0000-7ffff10a5000 r-xp 00000000 09:01 50333895 /usr/lib64/python2.7/lib-dynload/zlib.so7ffff10a5000-7ffff12a4000 ---p 00005000 09:01 50333895 /usr/lib64/python2.7/lib-dynload/zlib.so7ffff12a4000-7ffff12a5000 r--p 00004000 09:01 50333895 /usr/lib64/python2.7/lib-dynload/zlib.so7ffff12a5000-7ffff12a7000 rw-p 00005000 09:01 50333895 /usr/lib64/python2.7/lib-dynload/zlib.so7ffff12a7000-7ffff12ad000 r-xp 00000000 00:12 54138491 /home/avneesh/.local/lib/python2.7/site-packages/scipy/io/matlab/mio_utils.so7ffff12ad000-7ffff14ac000 ---p 00006000 00:12 54138491 /home/avneesh/.local/lib/python2.7/site-packages/scipy/io/matlab/mio_utils.so7ffff14ac000-7ffff14ad000 r--p 00005000 00:12 54138491 
/home/avneesh/.local/lib/python2.7/site-packages/scipy/io/matlab/mio_utils.so7ffff14ad000-7ffff14ae000 rw-p 00006000 00:12 54138491 /home/avneesh/.local/lib/python2.7/site-packages/scipy/io/matlab/mio_utils.so7ffff14ae000-7ffff14b5000 r-xp 00000000 00:12 54138562 /home/avneesh/.local/lib/python2.7/site-packages/scipy/sparse/sparsetools/_csgraph.so7ffff14b5000-7ffff16b4000 ---p 00007000 00:12 54138562 /home/avneesh/.local/lib/python2.7/site-packages/scipy/sparse/sparsetools/_csgraph.so7ffff16b4000-7ffff16b5000 r--p 00006000 00:12 54138562 /home/avneesh/.local/lib/python2.7/site-packages/scipy/sparse/sparsetools/_csgraph.so7ffff16b5000-7ffff16b6000 rw-p 00007000 00:12 54138562 /home/avneesh/.local/lib/python2.7/site-packages/scipy/sparse/sparsetools/_csgraph.so7ffff16b6000-7ffff17c2000 r-xp 00000000 00:12 54138558 /home/avneesh/.local/lib/python2.7/site-packages/scipy/sparse/sparsetools/_bsr.so7ffff17c2000-7ffff19c2000 ---p 0010c000 00:12 54138558 /home/avneesh/.local/lib/python2.7/site-packages/scipy/sparse/sparsetools/_bsr.so7ffff19c2000-7ffff19c3000 r--p 0010c000 00:12 54138558 /home/avneesh/.local/lib/python2.7/site-packages/scipy/sparse/sparsetools/_bsr.so7ffff19c3000-7ffff19c6000 rw-p 0010d000 00:12 54138558 /home/avneesh/.local/lib/python2.7/site-packages/scipy/sparse/sparsetools/_bsr.so7ffff19c6000-7ffff19d5000 r-xp 00000000 00:12 54138561 /home/avneesh/.local/lib/python2.7/site-packages/scipy/sparse/sparsetools/_dia.so7ffff19d5000-7ffff1bd4000 ---p 0000f000 00:12 54138561 /home/avneesh/.local/lib/python2.7/site-packages/scipy/sparse/sparsetools/_dia.so7ffff1bd4000-7ffff1bd5000 r--p 0000e000 00:12 54138561 /home/avneesh/.local/lib/python2.7/site-packages/scipy/sparse/sparsetools/_dia.soProgram received signal SIGABRT, Aborted.0x00007ffff6e81ab5 in raise () from /lib64/libc.so.6(gdb) bt#0 0x00007ffff6e81ab5 in raise () from /lib64/libc.so.6#1 0x00007ffff6e82fb6 in abort () from /lib64/libc.so.6#2 0x00007ffff6ebcdd3 in __libc_message () from /lib64/libc.so.6#3 0x00007ffff6ec23b6 in malloc_printerr () from /lib64/libc.so.6#4 0x00007ffff69fc496 in ?? () from /usr/lib64/python2.7/site-packages/numpy/core/multiarray.so#5 0x00007ffff7af48c7 in PyEval_EvalFrameEx () from /usr/lib64/libpython2.7.so.1.0#6 0x00007ffff7af6c49 in PyEval_EvalCodeEx () from /usr/lib64/libpython2.7.so.1.0#7 0x00007ffff7b25592 in PyEval_EvalCode () from /usr/lib64/libpython2.7.so.1.0#8 0x00007ffff7b33c61 in ?? 
() from /usr/lib64/libpython2.7.so.1.0#9 0x00007ffff7b34074 in PyRun_FileExFlags () from /usr/lib64/libpython2.7.so.1.0#10 0x00007ffff7b347c9 in PyRun_SimpleFileExFlags () from /usr/lib64/libpython2.7.so.1.0#11 0x00007ffff7b3e1bc in Py_Main () from /usr/lib64/libpython2.7.so.1.0#12 0x00007ffff6e6dbfd in __libc_start_main () from /lib64/libc.so.6#13 0x00000000004006e9 in _start ()When I set the stack size limit to unlimited", I get the following:*** glibc detected *** /usr/bin/python: double free or corruption (out): 0x00002abb2732c010 ***^X^CProgram received signal SIGINT, Interrupt.0x00002aaaab9d08fe in __lll_lock_wait_private () from /lib64/libc.so.6(gdb) bt#0 0x00002aaaab9d08fe in __lll_lock_wait_private () from /lib64/libc.so.6#1 0x00002aaaab969f2e in _L_lock_9927 () from /lib64/libc.so.6#2 0x00002aaaab9682d1 in free () from /lib64/libc.so.6#3 0x00002aaaaaabbfe2 in _dl_scope_free () from /lib64/ld-linux-x86-64.so.2#4 0x00002aaaaaab70a4 in _dl_map_object_deps () from /lib64/ld-linux-x86-64.so.2#5 0x00002aaaaaabcaa0 in dl_open_worker () from /lib64/ld-linux-x86-64.so.2#6 0x00002aaaaaab85f6 in _dl_catch_error () from /lib64/ld-linux-x86-64.so.2#7 0x00002aaaaaabc5da in _dl_open () from /lib64/ld-linux-x86-64.so.2#8 0x00002aaaab9fb530 in do_dlopen () from /lib64/libc.so.6#9 0x00002aaaaaab85f6 in _dl_catch_error () from /lib64/ld-linux-x86-64.so.2#10 0x00002aaaab9fb5cf in dlerror_run () from /lib64/libc.so.6#11 0x00002aaaab9fb637 in __libc_dlopen_mode () from /lib64/libc.so.6#12 0x00002aaaab9d60c5 in init () from /lib64/libc.so.6#13 0x00002aaaab080933 in pthread_once () from /lib64/libpthread.so.0#14 0x00002aaaab9d61bc in backtrace () from /lib64/libc.so.6#15 0x00002aaaab95dde7 in __libc_message () from /lib64/libc.so.6#16 0x00002aaaab9633b6 in malloc_printerr () from /lib64/libc.so.6#17 0x00002aaaab9682dc in free () from /lib64/libc.so.6#18 0x00002aaaabef1496 in ?? () from /usr/lib64/python2.7/site-packages/numpy/core/multiarray.so#19 0x00002aaaaad888c7 in PyEval_EvalFrameEx () from /usr/lib64/libpython2.7.so.1.0#20 0x00002aaaaad8ac49 in PyEval_EvalCodeEx () from /usr/lib64/libpython2.7.so.1.0#21 0x00002aaaaadb9592 in PyEval_EvalCode () from /usr/lib64/libpython2.7.so.1.0#22 0x00002aaaaadc7c61 in ?? () from /usr/lib64/libpython2.7.so.1.0#23 0x00002aaaaadc8074 in PyRun_FileExFlags () from /usr/lib64/libpython2.7.so.1.0#24 0x00002aaaaadc87c9 in PyRun_SimpleFileExFlags () from /usr/lib64/libpython2.7.so.1.0#25 0x00002aaaaadd21bc in Py_Main () from /usr/lib64/libpython2.7.so.1.0#26 0x00002aaaab90ebfd in __libc_start_main () from /lib64/libc.so.6#27 0x00000000004006e9 in _start ()This makes me believe the basic issue is with the numpy multiarray core module (line #4 in the first output and line #18 in the second). I will bring it up as a bug report in both numpy and scipy just in case. Has anyone seen this before? Update: February 17, 2013 (4:45 PM PST)I found a machine that I could run the code on that had a more recent version of SciPy (0.11) and NumPy (1.7.0). Running the code straight up (without GDB) resulted in a seg fault without any output to stdout or stderr. Running again through GDB, I get the following:Program received signal SIGSEGV, Segmentation fault.0x00002aaaabead970 in ?? () from /lib/x86_64-linux-gnu/libc.so.6(gdb) bt#0 0x00002aaaabead970 in ?? 
() from /lib/x86_64-linux-gnu/libc.so.6#1 0x00002aaaac5fcd04 in PyDataMem_FREE (ptr=<optimized out>, $K8=<optimized out>) at numpy/core/src/multiarray/multiarraymodule.c:3510#2 array_dealloc (self=0xc00ab7edbfc228fe) at numpy/core/src/multiarray/arrayobject.c:416#3 0x0000000000498eac in PyEval_EvalFrameEx ()#4 0x000000000049f1c0 in PyEval_EvalCodeEx ()#5 0x00000000004a9081 in PyRun_FileExFlags ()#6 0x00000000004a9311 in PyRun_SimpleFileExFlags ()#7 0x00000000004aa8bd in Py_Main ()#8 0x00002aaaabe4f76d in __libc_start_main () from /lib/x86_64-linux-gnu/libc.so.6#9 0x000000000041b9b1 in _start ()I understand this is not as useful as a NumPy compiled with debugging symbols, I will try doing that and post the output later.
After discussions on the same issue on the Numpy Github page (https://github.com/numpy/numpy/issues/2995) it has been brought to my attention that Numpy/Scipy will not support such a large number of non-zeros in the resulting sparse matrix. Basically, W is a sparse matrix, and Q (or np.log(Q)-1) is a dense matrix. When multiplying a dense matrix with a sparse one, the resulting product will also be represented in sparse matrix form (which makes a lot of sense). However, note that since I have no zero rows in my W matrix, the resulting product W*(np.log(Q)-1) will have nnz > 2^31 (2.2 million multiplied by 2000) and this exceeds the maximum number of elements in a sparse matrix in current versions of Scipy. At this stage, I'm not sure how else to get this to work, barring a re-implementation in another language. Perhaps it can still be done in Python, but it might be better to just write up a C++ and Eigen implementation.A special thanks to pv. for helping out on this to pinpoint the exact issue, and thanks to everyone else for the brainstorming!
Time difference in seconds from numpy.timedelta64 How to get time difference in seconds from numpy.timedelta64 variable?time1 = '2012-10-05 04:45:18'time2 = '2012-10-05 04:44:13'dt = np.datetime64(time1) - np.datetime64(time2)print dt0:01:05I'd like to convert dt to number (int or float) representing time difference in seconds.
To get number of seconds from numpy.timedelta64() object using numpy 1.7 experimental datetime API:seconds = dt / np.timedelta64(1, 's')
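For example, reproducing the dt from the question gives a plain float, and an integer can be had via an astype round-trip:

import numpy as np
dt = np.datetime64('2012-10-05 04:45:18') - np.datetime64('2012-10-05 04:44:13')
dt / np.timedelta64(1, 's')               # 65.0
dt.astype('timedelta64[s]').astype(int)   # 65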
Calculating Percentile scores for each element with respect to its column So my NumPy array looks like thisnpfinal = [[1, 3, 5, 0, 0, 0], [5, 2, 4, 0, 0, 0], [7, 7, 2, 0, 0, 0], . . .Sample dataset I'm working with is 25k rows. The first 3 columns contain meaningful data, rest are placeholders for the percentiles.So I need the percentile of a[0][0] with respect to the entire first column in a[0][3]. So 1's percentile score wrt the column [1,5,7,...]My first attempt was:import scipy.stats as ss...numofcols = 3for row in npfinal: for i in range(0,numofcols): row[i+numofcols] = int(round(ss.percentileofscore(npfinal[:,i], row[i])))But this is taking way too much time; and on a full dataset it'll be impossible.I'm new to the world of computing on such large datasets so any sort of help will be appreciated.
I found a solution that I believe works better when there are repeated values in the array:import numpy as npfrom scipy import stats# some array with repeated values:M = np.array([[1, 7, 2], [5, 2, 2], [5, 7, 2]]) # calculate percentiles applying scipy rankdata to each column:percentile = np.apply_along_axis(stats.rankdata, 0, M, method='average')/len(M)The np.argsort solution has the problem that it gives different percentiles to repetitions of the same value. For example if you had:percentile_argsort = np.argsort(np.argsort(M, axis=0), axis=0) / float(len(M)) * 100percentile_rankdata = np.apply_along_axis(stats.rankdata, 0, M, method='average')/len(M)the two different approaches will output the results:Marray([[1, 7, 2], [5, 2, 2], [5, 7, 2]])percentile_argsortarray([[ 0. , 33.33333333, 0. ], [ 33.33333333, 0. , 33.33333333], [ 66.66666667, 66.66666667, 66.66666667]])percentile_rankdataarray([[ 0.33333333, 0.83333333, 0.66666667], [ 0.83333333, 0.33333333, 0.66666667], [ 0.83333333, 0.83333333, 0.66666667]])
Aggregate over an index in pandas? How can I aggregate (sum) over an index which I intend to map to new values? Basically I have a groupby result by two variables where I want to groupby one variable into larger classes. The following code does this operation on s by mapping the first by-variable but seems too complicating:import pandas as pdmapping={1:1, 2:1, 3:3}s=pd.Series([1]*6, index=pd.MultiIndex.from_arrays([[1,1,2,2,3,3],[1,2,1,2,1,2]]))x=s.reset_index()x["level_0"]=x.level_0.map(mapping)result=x.groupby(["level_0", "level_1"])[0].sum()Is there a way to write this more concisely?
There is a level= option for Series.sum(), I guess you can use that and it will be a quite concise way to do it.In [69]:s.index = pd.MultiIndex.from_tuples(map(lambda x: (mapping.get(x[0]), x[1]), s.index.values))s.sum(level=(0,1))Out[69]:1 1 2 2 23 1 1 2 1dtype: int64
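Another option, just a sketch, is to leave the index untouched and group directly by the mapped level values:

result = s.groupby([s.index.get_level_values(0).map(mapping.get),
                    s.index.get_level_values(1)]).sum()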
Interpolating array columns with PiecewisePolynomial in scipy I'm trying to interpolate each column of a numpy array using scipy's PiecewisePolynomial. I know that this is possible for scipy's interp1d but for piecewise polynomial interpolation it does not seem to work the same way. I have the following code:import numpy as npimport scipy.interpolate as interpolatex1=np.array([1,2,3,4])y1=np.array([[2,3,1],[4,1,6],[1,2,7],[3,1,3]])interp=interpolate.PiecewisePolynomial(x1,y1,axis=0)x = np.array([1.2, 2.1, 3.3])y = interp(x)Which results in y = np.array([2.6112, 4.087135, 1.78648]). It seems that only the first column in y1 was taken into account for interpolation. How can I make the method return the interpolated values of each column in y1 at the points specified by x?
The scipy.interpolate.PiecewisePolynomial interprets the different columns of y1 as the derivatives of the function to be interpolated, whereas interp1d interprets the columns as different functions.It may be that you do not actually want to use the PiecewisePolynomial at all, if you do not have the derivatives available. If you just want to have a smoother interpolation, then try interp1d with, e.g., the kind='quadratic' keyword argument. (See the documentation for interp1d)Now your function looks rather interestingimport matplotlib.pyplot as pltfig = plt.figure()ax = fig.add_subplot(111)x = np.linspace(0,5,200)ax.plot(x, interp(x))ax.plot(x1, y1[:,0], 'o')If you try the quadratic spline interpolation:interp = interpolate.interp1d(x1, y1.T, kind='quadratic')fig = plt.figure()ax = fig.add_subplot(111)x = np.linspace(1,4,200)ip = interp(x)ax.plot(x, ip[0], 'b')ax.plot(x, ip[1], 'g')ax.plot(x, ip[2], 'r')ax.plot(x1, y1[:,0], 'bo')ax.plot(x1, y1[:,1], 'go')ax.plot(x1, y1[:,2], 'ro')This might be closer to what you want:
Python Pandas: drop a column from a multi-level column index? I have a multi level column table like this: a ---+---+--- b | c | f--+---+---+---0 | 1 | 2 | 71 | 3 | 4 | 9How can I drop column "c" by name? to look like this: a ---+--- b | f--+---+---0 | 1 | 71 | 3 | 9I tried this:del df['c']but I get the following error, which makes sense: KeyError: 'Key length (1) was greater than MultiIndex lexsort depth (0)'
Solved:df.drop('c', axis=1, level=1)
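Note that drop returns a new DataFrame rather than modifying df in place (unless inplace=True is passed), so in practice the result is usually assigned back:

df = df.drop('c', axis=1, level=1)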
optimization of some numpy/scipy code I'm trying to optimize some python code, which uses scipy.optimize.root for rootfinding. cProfile tells me that most of the time the programm is evaluating the function called by optimize.root:e.g. for a total execution time of 80s, 58s are spend on lineSphericalDist to which fun contributes 54s (and about 215 000 calls):Fri Aug 8 21:09:32 2014 profile2 12796193 function calls (12617458 primitive calls) in 82.707 seconds Ordered by: cumulative time ncalls tottime percall cumtime percall filename:lineno(function) 1 0.005 0.005 82.710 82.710 BilliardsNumpyClass.py:6(<module>) 1 0.033 0.033 64.155 64.155 BilliardsNumpyClass.py:446(traceAll) 100 1.094 0.011 63.549 0.635 BilliardsNumpyClass.py:404(trace) 91333 7.226 0.000 58.804 0.001 BilliardsNumpyClass.py:244(lineSphericalDist) 214667 49.436 0.000 54.325 0.000 BilliardsNumpyClass.py:591(fun) ...Here the optimize.root call somehwere in trace: ... res = optimize.root(self.lineSphericalDist, [tguess], args=(t0, a0), method='lm') ...The function contains of some basic trigonometric functions: def lineSphericalDist(self, tt, t0, a0): x0,y0,vnn = self.fun(t0)[0:3] beta = np.pi + t0 + a0 - vnn l = np.sin(beta - t0)/np.sin(beta - tt) x2,y2 = self.fun(tt)[0:2] return np.sqrt(x0**2+y0**2)*l-np.sqrt(x2**2+y2**2)In the easiest case fun is: def fun(self,t): return self.r*np.cos(t),self.r*np.sin(t),np.pi/2.,np.mod(t+np.pi/2., np.pi*2.)Is there a way to speed this up (tguess is already a pretty good starting value) ? Am I doing something wrong? e.g. is it a good idea to return multiple values the way I do it in fun ?
If I understand well your a0 and t0 are not part of the optimization, you only optimize over tt. However, inside lineSphericalDist, you call self.fun(t0). You could precompute that quantity outside of lineSphericalDist, that would halve the number of calls to self.fun...You could also compute beta, and np.sin(beta - t0), and np.sqrt(x0**2 + y0**2) outside lineSphericalDist, leaving only the bits that really depend on tt inside lineSphericalDist.Lastly, why does self.fun compute 4 values if only 3 or 2 are used? This is your bottleneck function, make it compute only what's strictly necessary...
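To make that concrete, here is a rough sketch of the refactoring (the method name and exact signature are assumptions, since only fragments of the class are shown): everything that depends only on t0 and a0 is computed once in trace, and the function handed to optimize.root does only the tt-dependent work.

def lineSphericalDistFast(self, tt, beta, sin_beta_t0, r0):
    # only the tt-dependent part of the residual
    l = sin_beta_t0 / np.sin(beta - tt)
    x2, y2 = self.fun(tt)[0:2]
    return r0 * l - np.sqrt(x2**2 + y2**2)

# inside trace, precompute once per root-find instead of on every call:
x0, y0, vnn = self.fun(t0)[0:3]
beta = np.pi + t0 + a0 - vnn
sin_beta_t0 = np.sin(beta - t0)
r0 = np.sqrt(x0**2 + y0**2)
res = optimize.root(self.lineSphericalDistFast, [tguess],
                    args=(beta, sin_beta_t0, r0), method='lm')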
Pandas aggregate -- how to retain all columns Example dataframe:rand = np.random.RandomState(1)df = pd.DataFrame({'A': ['group1', 'group2', 'group3'] * 2, 'B': rand.rand(6), 'C': rand.rand(6), 'D': rand.rand(6)})print df A B C D0 group1 0.417022 0.186260 0.2044521 group2 0.720324 0.345561 0.8781172 group3 0.000114 0.396767 0.0273883 group1 0.302333 0.538817 0.6704684 group2 0.146756 0.419195 0.4173055 group3 0.092339 0.685220 0.558690Groupby column Agroup = df.groupby('A')Use agg to return max value for each groupmax1 = group['B'].agg({'max' : np.max})print max1 maxA group1 0.417022group2 0.720324group3 0.092339But I would like to retain (or get back) the appropriate data in the other columns, C and D. This would be the remaining data for the row which contained the max value.So, the return should be: A B C Dgroup1 0.417022 0.186260 0.204452group2 0.720324 0.345561 0.878117group3 0.092339 0.685220 0.558690Can anybody show how to do this? Any help appreciated.
Two stages: first find indices, then lookup all the rows.idx = df.groupby('A').apply(lambda x: x['B'].argmax())idxOut[362]: Agroup1 0group2 1group3 5df.loc[idx]Out[364]: A B C D0 group1 0.417022 0.186260 0.2044521 group2 0.720324 0.345561 0.8781175 group3 0.092339 0.685220 0.558690
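In more recent pandas versions the same thing is usually written with idxmax, which returns the index label of the maximum B for each group directly:

df.loc[df.groupby('A')['B'].idxmax()]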
Pandas sort_index gives strange result after applying function to grouped DataFrame Basic setup:I have a DataFrame with a MultiIndex on both the rows and the columns. The second level of the column index has floats for values.I want to perform a groupby operation (grouping by the first level of the row index). The operation will add a few columns (also with floats as their labels) to each group and then return the group.When I get the result back from my groupby operation, I can't seem to get the columns to sort properly. Working example. First, set things up:import pandas as pdimport numpy as npnp.random.seed(0)col_level_1 = ['red', 'blue']col_level_2 = [1., 2., 3., 4.]row_level_1 = ['a', 'b']row_level_2 = ['one', 'two']col_idx = pd.MultiIndex.from_product([col_level_1, col_level_2], names=['color', 'numeral'])row_idx = pd.MultiIndex.from_product([row_level_1, row_level_2], names=['letter', 'number'])df = pd.DataFrame(np.random.randn(len(row_idx), len(col_idx)), index=row_idx, columns=col_idx)Gives this DataFrame in df:Then define my group operation and apply it:def mygrpfun(group): for f in [1.5, 2.5, 3.5]: group[('red', f)] = 'hello' group[('blue', f)] = 'world' return groupresult = df.groupby(level='letter').apply(mygrpfun).sort_index(axis=1)Displaying result gives:What's going on here? Why doesn't the 2nd level of the column index display in ascending order?EDIT:In terms of context:pd.__version__Out[28]:'0.14.0'In [29]:np.__version__Out[29]:'1.8.1'Any help much appreciated.
The returned result looks as expected. You added columns. There was no guarantee that order imposed on those columns.You could just reimpose ordering:result = result[sorted(result.columns)]
Plotting Pandas Time Data My data is a pandas dataframe called 'T': A B CDate 2001-11-13 30.1 2 32007-02-23 12.0 1 7The result of T.index is <class 'pandas.tseries.index.DatetimeIndex'>[2001-11-13, 2007-02-23]Length: 2, Freq: None, Timezone: NoneSo I know that the index is a time series. But when I plot it using ax.plot(T) I don't get a time series on the x axis!I will only ever have two data points so how do I get the dates in my graph (i.e. two dates at either end of the x axis)?
Use the implemented pandas command:In[211]: df2Out[211]: A B C1970-01-01 30.1 2 31980-01-01 12.0 1 7In[212]: df2.plot()Out[212]: <matplotlib.axes.AxesSubplot at 0x105224e0>In[213]: plt.show()You can access the axis usingax = df2.plot()
Renumbering a 1D mesh in Python First of all, I couldn't find the answer in other questions. I have a numpy array of integer, this is called ELEM, the array has three columns that indicate, element number, node 1 and node 2. This is one dimensional mesh. What I need to do is to renumber the nodes, I have the old and new node numbering tables, so the algorithm should replace every value in the ELEM array according to this tables.The code should look like thisold_num = np.array([2, 1, 3, 6, 5, 9, 8, 4, 7])new_num = np.arange(1,10)ELEM = np.array([ [1, 1, 3], [2, 3, 6], [3, 1, 3], [4, 5, 6]])From now, for every element in the second and third column of the ELEM array I should replace every integer from the corresponding integer specified according to the new_num table.
If you're doing a lot of these, it makes sense to encode the renumbering in a dictionary for fast lookup.lookup_table = dict( zip( old_num, new_num ) ) # create your translation dictvect_lookup = np.vectorize( lookup_table.get ) # create a function to do the translationELEM[:, 1:] = vect_lookup( ELEM[:, 1:] ) # Reassign the elements you want to changenp.vectorize is just there to make things nicer syntactically. All it does is allow us to map over the values of the array with our lookup_table.get function
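For purely numeric renumbering tables there is also an array-only route (a sketch, assuming every node id appearing in ELEM is present in old_num): sort old_num once and use np.searchsorted to locate each entry.

order = np.argsort(old_num)
pos = np.searchsorted(old_num[order], ELEM[:, 1:])
ELEM[:, 1:] = new_num[order][pos]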
Ordered colored plot after clustering using python I have a 1D array called data=[5 1 100 102 3 4 999 1001 5 1 2 150 180 175 898 1012]. I am using python scipy.cluster.vq to find clusters within it. There are 3 clusters in the data. After clustering when I'm trying to plot the data, there is no order in it. It would be great if it's possible to plot the data in the same order as it is given and color different sections belong to different groups or clusters. Here is my code:import numpy as npimport matplotlib.pyplot as pltfrom scipy.cluster.vq import kmeans, vqdata = np.loadtxt('rawdata.csv', delimiter=' ')#----------------------kmeans------------------centroid,_ = kmeans(data, 3) idx,_ = vq(data, centroid)x=np.linspace(0,(len(data)-1),len(data))fig = plt.figure(1)plt.plot(x,data)plot1=plt.plot(data[idx==0],'ob')plot2=plt.plot(data[idx==1],'or')plot3=plt.plot(data[idx==2],'og')plt.show()Here is my plot http://s29.postimg.org/9gf7noe93/figure_1.png(The blue graph in the background is in-order, after clustering,it messed up) Thanks!Update :I wrote the following code to implement in-order colored plot after clustering,import numpy as npimport matplotlib.pyplot as pltfrom scipy.cluster.vq import kmeans, vqdata = np.loadtxt('rawdata.csv', delimiter=' ')#----------------------kmeans-----------------------------centroid,_ = kmeans(data, 3) # three clustersidx,_ = vq(data, centroid)x=np.linspace(0,(len(data)-1),len(data))fig = plt.figure(1)plt.plot(x,data)for i in range(0,(len(data)-1)): if data[i] in data[idx==0]: plt.plot(x[i],(data[i]),'ob' ) if data[i] in data[idx==1]: plt.plot(x[i],(data[i]),'or' ) if data[i] in data[idx==2]: plt.plot(x[i],(data[i]),'og' ) plt.show()The problem with the above code is it's too slow. And my array size is over 3million. So this code will take forever to finish it's job for me. I really appreciate if someone can provide vectorized version of the above mentioned code.Thanks!
You can plot the clustered data points based on their distances from the cluster center and then write the index of each data point close to that in order to see how they scattered based on their clustering properties: import numpy as npimport matplotlib.pyplot as pltfrom scipy.cluster.vq import kmeans, vqfrom scipy.spatial.distance import cdistdata=np.array([ 5, 1, 100, 102, 3, 4, 999, 1001, 5, 1, 2, 150, 180, 175, 898, 1012])centroid,_ = kmeans(data, 3) idx,_ = vq(data, centroid)X=data.reshape(len(data),1)Y=centroid.reshape(len(centroid),1)D_k = cdist( X, Y, metric='euclidean' )colors = ['red', 'green', 'blue']pId=range(0,(len(data)-1))cIdx = [np.argmin(D) for D in D_k]dist = [np.min(D) for D in D_k]r=np.vstack((data,dist)).Tfig = plt.figure()ax = fig.add_subplot(1,1,1)mark=['^','o','>']for i, ((x,y), kls) in enumerate(zip(r, cIdx)): ax.plot(r[i,0],r[i,1],color=colors[kls],marker=mark[kls]) ax.annotate(str(i), xy=(x,y), xytext=(0.5,0.5), textcoords='offset points', size=8,color=colors[kls])ax.set_yscale('log')ax.set_xscale('log')ax.set_xlabel('Data')ax.set_ylabel('Distance')plt.show()Update:if you are very keen of using vectorize procedure you can do it as following for a randomly generated data:data=np.random.uniform(1,1000,3000)@np.vectorizedef plotting(i): ax.plot(i,data[i],color=colors[cIdx[i]],marker=mark[cIdx[i]])mark=['>','o','^']fig = plt.figure()ax = fig.add_subplot(1,1,1)plotting(range(len(data)))ax.set_xlabel('index')ax.set_ylabel('Data')plt.show()
Using Numpy Array to Create Unique Array Can you create a numpy array with all unique values in it?myArray = numpy.random.random_integers(0,100,2500)myArray.shape = (50,50)So here I have a given random 50x50 numpy array, but I could have non-unique values. Is there a way to ensure every value is unique?Thank youUpdate:I have created a basic function to generate a list and populate a unique integer. dist_x = math.sqrt(math.pow((extent.XMax - extent.XMin), 2)) dist_y = math.sqrt(math.pow((extent.YMax - extent.YMin),2)) col_x = int(dist_x / 100) col_y = int(dist_y / 100) if col_x % 100 > 0: col_x += 1 if col_y % 100 > 0: col_y += 1 print col_x, col_y, 249*169 count = 1 a = [] for y in xrange(1, col_y + 1): row = [] for x in xrange(1, col_x + 1): row.append(count) count += 1 a.append(row) del row numpyArray = numpy.array(a)Is there a better way to do this?Thanks
The most convenient way to get a unique random sample from a set is probably np.random.choice with replace=False.For example:import numpy as np# create a (5, 5) array containing unique integers drawn from [0, 100]uarray = np.random.choice(np.arange(0, 101), replace=False, size=(5, 5))# check that each item occurs only onceprint((np.bincount(uarray.ravel()) == 1).all())# TrueIf replace=False the set you're sampling from must, of course, be at least as big as the number of samples you're trying to draw:np.random.choice(np.arange(0, 101), replace=False, size=(50, 50))# ValueError: Cannot take a larger sample than population when 'replace=False'If all you're looking for is a random permutation of the integers between 1 and the number of elements in your array, you could also use np.random.permutation like this:nrow, ncol = 5, 5uarray = (np.random.permutation(nrow * ncol) + 1).reshape(nrow, ncol)
Pandas - Delete Rows with only NaN values I have a DataFrame containing many NaN values. I want to delete rows that contain too many NaN values; specifically: 7 or more.I tried using the dropna function several ways but it seems clear that it greedily deletes columns or rows that contain any NaN values. This question (Slice Pandas DataFrame by Row), shows me that if I can just compile a list of the rows that have too many NaN values, I can delete them all with a simpledf.drop(rows)I know I can count non-null values using the count function which I could them subtract from the total and get the NaN count that way (Is there a direct way to count NaN values in a row?). But even so, I am not sure how to write a loop that goes through a DataFrame row-by-row. Here's some pseudo-code that I think is on the right track:### LOOP FOR ADDRESSING EACH row: m = total - row.count() if (m > 7): df.drop(row)I am still new to Pandas so I'm very open to other ways of solving this problem; whether they're simpler or more complex.
Basically the way to do this is to determine the number of columns, set the minimum number of non-NaN values a row must keep, and drop the rows that don't meet this criterion. Since thresh counts non-NaN values per row, dropping rows with 7 or more NaNs means keeping rows with at least len(df.columns) - 6 non-NaN values:df.dropna(thresh=len(df.columns) - 6)See the docs
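To see how thresh behaves, a tiny hypothetical example with 3 columns — thresh=2 keeps only rows with at least 2 non-NaN values:

import numpy as np
import pandas as pd
d = pd.DataFrame([[1, 2, 3], [1, np.nan, np.nan], [np.nan, np.nan, np.nan]])
d.dropna(thresh=2)   # keeps only the first row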
Speed up Pandas filtering I have a 37456153 rows x 3 columns Pandas dataframe consisting of the following columns: [Timestamp, Span, Elevation]. Each Timestamp value has approximately 62000 rows of Span and Elevation data, which looks like (when filtered for Timestamp = 17210, as an example): Timestamp Span Elevation94614 17210 -0.019766 36.57194615 17210 -0.019656 36.45394616 17210 -0.019447 36.50694617 17210 -0.018810 36.50794618 17210 -0.017883 36.502... ... ... ...157188 17210 91.004000 33.493157189 17210 91.005000 33.501157190 17210 91.010000 33.497157191 17210 91.012000 33.500157192 17210 91.013000 33.503As seen above, the Span data is not equal spaced, which I actually need it to be. So I came up with the following code to convert it into an equal spaced format. I know the start and end locations I'd like to analyze. Then I defined a delta parameter as my increment. I created a numpy array called mesh, which holds the equal spaced Span data I would like to end up with. Finally, I decided the iterate over the dataframe for a given TimeStamp (17300 in the code) to test how fast it would work. The for loop in the code calculates average Elevation values for a +/- 0.5delta range at each increment.My problem is: it takes 603 ms to filter through dataframe and calculate the average Elevation at a single iteration. For the given parameters, I have to go through 9101 iterations, resulting in approximately 1.5 hrs of computing time for this loop to end. Moreover, this is for a single Timestamp value, and I have 600 of them (900 hrs to do all?!).Is there any way that I can speed up this loop? Thanks a lot for any input!# MESH GENERATIONstart = 0end = 91delta = 0.01mesh = np.linspace(start,end, num=(end/delta + 1))elevation_list =[]#Loop below will take forever to run, any idea about how to optimize it?!for current_loc in mesh: average_elevation = np.average(df[(df.Timestamp == 17300) & (df.Span > current_loc - delta/2) & (df.Span < current_loc + delta/2)].Span) elevation_list.append(average_elevation)
You can vectorize the whole thing using np.searchsorted. I am not much of a pandas user, but something like this should work, and runs reasonably fast on my system. Using chrisb's dummy data:In [8]: %%timeit ...: mesh = np.linspace(start, end, num=(end/delta + 1)) ...: midpoints = (mesh[:-1] + mesh[1:]) / 2 ...: idx = np.searchsorted(midpoints, df.Span) ...: averages = np.bincount(idx, weights=df.Elevation, minlength=len(mesh)) ...: averages /= np.bincount(idx, minlength=len(mesh)) ...: 100 loops, best of 3: 5.62 ms per loop That is about 3500x faster than your code:In [12]: %%timeit ...: mesh = np.linspace(start, end, num=(end/delta + 1)) ...: elevation_list =[] ...: for current_loc in mesh: ...: average_elevation = np.average(df[(df.Span > current_loc - delta/2) & ...: (df.Span < current_loc + delta/2)].Span) ...: elevation_list.append(average_elevation) ...: 1 loops, best of 3: 19.1 s per loopEDIT So how does this works? In midpoints we store a sorted list of the boundaries between buckets. We then do a binary search with searchsorted on this sorted list, and get idx, which basically tells us into which bucket each data point belongs. All that is left is to group all the values in each bucket. That's what bincount is for. Given an array of ints, it counts how many times each number comes up. Given an array of ints , and a corresponding array of weights, instead of adding 1 to the tally for the bucket, it adds the corresponding value in weights. With two calls to bincount you get the sum and the number of items per bucket: divide them and you get the bucket average.
Updating rows of the same index Given a DataFrame df: yellowCard secondYellow redCardmatch_id player_id 1431183600x96x30 76921 X NaN NaN 76921 NaN X X1431192600x162x32 71174 X NaN NaNI would like to update duplicated rows (of the same index) resulting in: yellowCard secondYellow redCardmatch_id player_id 1431183600x96x30 76921 X X X1431192600x162x32 71174 X NaN NaNDoes pandas provide a library method to achieve it?
It looks like your df is multi-indexed on match_id and player_id so I would perform a groupby on the match_id and fill the NaN values twice, ffill and bfill:In [184]:df.groupby(level=0).fillna(method='ffill').groupby(level=0).fillna(method='bfill')Out[184]: yellowCard secondYellow redCardmatch_id player_id 1431183600x96x30 76921 1 2 2 76921 1 2 21431192600x162x32 71174 3 NaN NaNI used the following code to build the above, rather than use x values:In [185]:t="""match_id player_id yellowCard secondYellow redCard1431183600x96x30 76921 1 NaN NaN1431183600x96x30 76921 NaN 2 21431192600x162x32 71174 3 NaN NaN"""df=pd.read_csv(io.StringIO(t), sep='\s+', index_col=[0,1])dfOut[185]: yellowCard secondYellow redCardmatch_id player_id 1431183600x96x30 76921 1 NaN NaN 76921 NaN 2 21431192600x162x32 71174 3 NaN NaNEDIT there is a ffill and bfill method for groupby objects so this simplifies to:In [189]:df.groupby(level=0).ffill().groupby(level=0).bfill()Out[189]: yellowCard secondYellow redCardmatch_id player_id 1431183600x96x30 76921 1 2 2 76921 1 2 21431192600x162x32 71174 3 NaN NaNYou can then call drop_duplicates:In [190]:df.groupby(level=0).ffill().groupby(level=0).bfill().drop_duplicates()Out[190]: yellowCard secondYellow redCardmatch_id player_id 1431183600x96x30 76921 1 2 21431192600x162x32 71174 3 NaN NaN
Applying DataFrame with logic expression to another DataFrame (need Pandas wizards) I have a DataFrame conditions with a set of conditions that are used like an expression: indicator logic valueDiscount 'ADR Premium' '<' -0.5Premium 'ADR Premium' '>' 0.5Now I have a dataframe indicators with a set of values, in this case there is just one indicator ADR Premium: ADR Premium2015-04-20 15:30:00-04:00 -0.1022702015-04-21 15:30:00-04:00 0.2353152015-04-22 15:30:00-04:00 -0.3239192015-04-23 15:30:00-04:00 0.5463632015-04-24 15:30:00-04:00 -0.7141432015-04-27 15:30:00-04:00 -0.1531652015-04-28 15:30:00-04:00 0.8784942015-04-29 15:30:00-04:00 0.9930792015-04-30 15:30:00-04:00 -0.8248152015-05-04 15:30:00-04:00 1.6447842015-05-05 15:30:00-04:00 -0.2543432015-05-06 15:30:00-04:00 -0.2689812015-05-07 15:30:00-04:00 0.5914112015-05-08 15:30:00-04:00 -0.5880472015-05-11 15:30:00-04:00 -0.4581432015-05-12 15:30:00-04:00 0.0636432015-05-13 15:30:00-04:00 -0.0516592015-05-14 15:30:00-04:00 1.4749632015-05-15 15:30:00-04:00 -0.1724292015-05-18 15:30:00-04:00 0.035558What I am hoping to achieve, is to apply the logic of conditions to indicatorsin order to produce a new dataframe called signals. To give you an idea of what I'm looking for, see below. This looks only at the first condition in conditions and the fifth value in indicator (because it evaluates to True):signals_list = []conditions_index = 0indicators_index = 4if eval( str(indicators[conditions.ix[conditions_index].indicator][indicators_index]) + conditions.ix[conditions_index].logic + str(conditions.ix[conditions_index].value) ): signal = {'Time': indicators.ix[indicators_index].name, 'Signal': conditions.ix[conditions_index].name} signals_list.append(signal)signals = pd.DataFrame(signals_list)signals.index = signals.Timesignals.drop('Time', 1)This leaves me with signals: SignalTime2015-04-24 15:30:00-04:00 'Discount'I would like to do this for all conditions across applicable indicators in the most efficient, Pandas-ic method. Looking forward to ideas.
It's hard to tell from the question but I think you just want to classify each entry in indicators according to some set of conditions for that column. First I would initialise signals:signals = pd.Series(index=indicators.index)This will be a series of nans. For a given indicator name (ADR Premium in this case), logic, value and classification you can do something likebool_vector = indicators.eval(' '.join([indicator, logic, str(value)]))signals[bool_vector] = classificationIn the example given, this would translate tobool_vector = indicators.eval('ADR Premium < -0.5') signals[bool_vector] = 'discount'For the first row in conditions this would set all rows which satisfy the condition to 'discount'. You can do the same for each row. It's hard to tell from the example but if you have multiple columns you may want to have signals as a DataFrame. You can loop through conditions using for classification, (indicator, logic, value) in conditions.iterrows():For a fully vectorized solution you'll need to give a fuller example.
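Putting that loop together — a sketch rather than the only way to do it, and assuming the indicator/logic/value columns hold plain strings and numbers rather than quoted literals — looking the comparison up in the operator module sidesteps eval entirely, which also avoids trouble with the space in the 'ADR Premium' column name:

import operator
import pandas as pd

ops = {'<': operator.lt, '>': operator.gt}
signals = pd.Series(index=indicators.index, dtype=object)
for classification, row in conditions.iterrows():
    mask = ops[row['logic']](indicators[row['indicator']], row['value'])
    signals[mask] = classification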
distinct contiguous blocks in pandas dataframe I have a pandas dataframe looking like this: x1=[np.nan, 'a','a','a', np.nan,np.nan,'b','b','c',np.nan,'b','b', np.nan] ty1 = pd.DataFrame({'name':x1})Do you know how I can get a list of tuples containing the start and end indices of distinct contiguous blocks? For example for the dataframe above, [(1,3), (6,7), (8,8), (10,11)].
You can use shift and cumsum to create 'id's for each contiguous block: In [5]: blocks = (ty1 != ty1.shift()).cumsum()In [6]: blocksOut[6]: name0 11 22 23 24 35 46 57 58 69 710 811 812 9You are only interested in those blocks that are not NaN, so filter for that:In [7]: blocks = blocks[ty1['name'].notnull()]In [8]: blocksOut[8]: name1 22 23 26 57 58 610 811 8And then, we can get the first and last index for each 'id':In [10]: blocks.groupby('name').apply(lambda x: (x.index[0], x.index[-1]))Out[10]:name2 (1, 3)5 (6, 7)6 (8, 8)8 (10, 11)dtype: objectAlthough, if this last step is necessary will depend on what you want to do with it (working with tuples as elements in dataframes in not really recommended). Maybe having the 'id's can already be enough.
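Wrapped up as one small helper (the function name is made up here), returning exactly the list of (start, end) tuples asked for:

def contiguous_blocks(s):
    grp = (s != s.shift()).cumsum()[s.notnull()]
    return [(g.index[0], g.index[-1]) for _, g in grp.groupby(grp)]

contiguous_blocks(ty1['name'])
# [(1, 3), (6, 7), (8, 8), (10, 11)]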
copying a 24x24 image into a 28x28 array of zeros Hi I want to copy a random portion of a 28x28 matrix and then use the resulting 24x24 matrix to be inserted into a 28x28 matrix image = image.reshape(28, 28) getx = random.randint(0,4) gety = random.randint(0,4) # get a 24 x 24 tile from a random location in img blank_image = np.zeros((28,28), np.uint8) tile= image[gety:gety+24,getx:getx+24] cv2.imshow("the 24x24 Image",tile)tile is a 24x24 ROI works as planned blank_image[gety:gety+24,getx:getx+24] = tileblank_image in my example does not get updated with the values from tileThanks for the help in advance
If you are getting an error, it might be because your np array dimensions are different. If your image is an RGB image, then your blank image should be defined as:blank_image = np.zeros((28,28,3), np.uint8)
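As a hedged follow-up sketch (not part of the original answer), you can branch on the number of channels before allocating the blank canvas, so the assignment works for both grayscale and colour input:

import numpy as np

# allocate a canvas whose channel count matches the source image
if image.ndim == 3:                                   # colour, e.g. (28, 28, 3)
    blank_image = np.zeros((28, 28, image.shape[2]), np.uint8)
else:                                                 # single-channel grayscale
    blank_image = np.zeros((28, 28), np.uint8)

blank_image[gety:gety+24, getx:getx+24] = tile        # tile is 24x24 (or 24x24xC)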
join two dataframes together total_purchase_amt2013-07-01 225331212014-08-29 2141148442014-08-30 1835472672014-08-31 205369438 total_purchase_amt2014-08-31 2.016808e+082014-09-01 2.481354e+082014-09-02 2.626838e+082014-09-03 2.497276e+08I have two dataframes and want to join them together. The result should look like this: the last row of the first dataframe should be replaced by the first row of the second dataframe. total_purchase_amt2013-07-01 225331212014-08-29 2141148442014-08-30 1835472672014-08-31 2.016808e+082014-09-01 2.481354e+082014-09-02 2.626838e+082014-09-03 2.497276e+08
Use combine_first on the second df (df1), combining it with the first df. This will preserve the values in your second df and fill in the missing values from the first df:In [49]:df1.combine_first(df)Out[49]: total_purchase_amt2013-07-01 225331212014-08-29 2141148442014-08-30 1835472672014-08-31 2016808002014-09-01 2481354002014-09-02 2626838002014-09-03 249727600
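An alternative sketch (my addition, not from the original answer), assuming df is the first frame and df1 the second: concatenate them and keep the last occurrence of any duplicated timestamp, which gives the same preference to the second frame's values:

import pandas as pd

combined = pd.concat([df, df1])
# for duplicated index labels (here 2014-08-31) keep the row that came from df1
combined = combined[~combined.index.duplicated(keep='last')].sort_index()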
Pandas MAX formula across different grouped rows I have dataframe that looks like this:Auction_id bid_price min_bid rank123 5 3 1123 4 3 2124 3 2 1124 1 2 2I'd like to create another column that returns MAX(rank 1 min_bid, rank 2 bid_price). I don't care what appears for the rank 2 column values. I'm hoping for the result to look something like this:Auction_id bid_price min_bid rank custom_column123 5 3 1 4123 4 3 2 NaN/Don't care124 3 2 1 2124 1 2 2 NaN/Don't careShould I be iterating through grouped auction_ids? Can someone provide the topics one would need to be familiar with to tackle this type of problem?
Here's an approach that does some reshaping with pivot() Auction_id bid_price min_bid rank 0 123 5 3 1 1 123 4 3 2 2 124 3 2 1 3 124 1 2 2 Then reshape your frame (df)pv = df.pivot("Auction_id","rank")pv bid_price min_bid rank 1 2 1 2Auction_id 123 5 4 3 3124 3 1 2 2Adding a column to pv that contains the max. I"m using iloc to get a slice of the pv dataframe. pv["custom_column"] = pv.iloc[:,[1,2]].max(axis=1) pv bid_price min_bid custom_columnrank 1 2 1 2 Auction_id 123 5 4 3 3 4124 3 1 2 2 2and then add the max to the original frame (df) by mapping to our pv framedf.loc[df["rank"] == 1,"custom_column"] = df["Auction_id"].map(pv["custom_column"])df Auction_id bid_price min_bid rank custom_column0 123 5 3 1 41 123 4 3 2 NaN2 124 3 2 1 23 124 1 2 2 NaNall the steps combinedpv = df.pivot("Auction_id","rank")pv["custom_column"] = pv.iloc[:,[1,2]].max(axis=1)df.loc[df["rank"] == 1,"custom_column"] = df["Auction_id"].map(pv["custom_column"])df Auction_id bid_price min_bid rank custom_column0 123 5 3 1 41 123 4 3 2 NaN2 124 3 2 1 23 124 1 2 2 NaN
How to check for real equality (of numpy arrays) in python? I have some function in python returning a numpy.array:matrix = np.array([[0.,0.,0.,0.,0.,0.,1.,1.,1.,0.], [0.,0.,0.,1.,1.,0.,0.,1.,0.,0.]])def some_function(): rows1, cols1 = numpy.nonzero(matrix) cols2 = numpy.array([6,7,8,3,4,7]) rows2 = numpy.array([0,0,0,1,1,1]) print numpy.array_equal(rows1, rows2) # returns True print numpy.array_equal(cols1, cols2) # returns True return (rows1, cols1) # or (rows2, cols2)It should normally extract the indices of nonzero entries of a matrix (rows1, cols1). However, I can also extract the indices manually (rows2, cols2). The problem is that the program returns different results depending on whether the function returns (rows1, cols1) or (rows2, cols2), although the arrays should be equal.I should probably add that this code is used in the context of pyipopt, which calls a C++ software package, IPOPT. The problem then occurs within this package. Can it be that the arrays are not "completely" equal? I would say that they somehow must be, because I am not modifying anything but returning one instead of the other.Any idea on how to debug this problem?
You could check where the arrays are not equal:print(numpy.where(rows1 != rows2))But what you are doing is unclear: there is no nonzeros function in numpy, only nonzero, which returns a tuple of coordinate arrays. Are you only using the one corresponding to the rows?
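Since the question is really about why "equal" arrays behave differently inside a C++ binding, here is a hedged debugging sketch (my addition): compare not just the values but also the dtype and memory layout of the two pairs of arrays, since nonzero returns intp-typed index arrays while a hand-written numpy.array([...]) may get a different integer dtype on some platforms, and a C extension can be sensitive to that:

import numpy as np

def describe(name, a):
    # values can match while dtype, contiguity or strides differ
    print name, a.dtype, a.shape, a.flags['C_CONTIGUOUS'], a.strides

for name, a in [('rows1', rows1), ('rows2', rows2),
                ('cols1', cols1), ('cols2', cols2)]:
    describe(name, a)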
time-series analysis in python I'm new to Python and I'm trying to analyze a time series. I have a Series indexed with dates, and I would like to split my time series to see e.g. how many $t$ appeared between 16 and 17, how many between 17 and 18, and so on.How can I do that for minutes, days, weeks, months? Basically I would like to zoom in at different time lengths.The ideal solution would be something like the .groupby() method, that would allow to easily see how my time series behaves in different periods. t2015-05-27 16:37:08 12015-05-27 16:37:12 12015-05-27 16:37:48 12015-05-27 16:37:49 12015-05-27 16:38:00 1
Check out Pandas. Pandas provides data structures and data analysis tools for time series and will provide exactly the kind of functionality you are looking for. Look into this page of documentation which focuses on time series:http://pandas.pydata.org/pandas-docs/stable/timeseries.html
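To make that pointer concrete, here is a rough sketch (assuming the Series from the question is called s and has a DatetimeIndex). groupby on index attributes buckets by hour-of-day, while resample buckets by calendar period; the how= keyword matches older pandas releases, newer ones use the .resample('D').sum() form:

import pandas as pd

per_hour_of_day = s.groupby(s.index.hour).sum()   # e.g. how many events fell between 16:00 and 17:00
per_minute = s.resample('T', how='sum')           # counts per minute
per_day = s.resample('D', how='sum')              # counts per day
per_week = s.resample('W', how='sum')             # counts per week
per_month = s.resample('M', how='sum')            # counts per month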
Referencing numpy array locations within if statements I have the following section of Python:for j in range(0,T): for x in xrange(len(index)): for y in xrange(x+1,len(index)): if index(y) == index(x): continuewhich I have been attempting to translate from a MATLAB equivalent. In MATLAB, this operation is straightforward: for iter = 1:T for i = 1:length(index) for j = i+1:length(index) if index(j) == index(i) continue; endHowever, when I attempt to execute my code I receive a "numpy.ndarray object is not callable" error. Why does this arise, and how would I write this in a proper Python manner so that it executes successfully?
Looks like index is an array of some sort, but when you do index(y) and index(x), Python thinks you're trying to call a function index() using x and y as parameters, respectively.If you're trying to simply access the elements, use index[x] and index[y].
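Applied to the loop from the question, the fix is just the bracket change (a sketch keeping the Python 2 xrange of the original):

for j in range(0, T):
    for x in xrange(len(index)):
        for y in xrange(x + 1, len(index)):
            if index[y] == index[x]:   # square brackets: element access, not a function call
                continue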
playing videos from file in anaconda This is my first time asking so this is a rather basic question. I'm trying to play saved videos using Anaconda on Windows, but for some reason nothing is playing. The intent is to play the current file, and then progress up to visual tracking in real time. Here is my code:import numpy as npimport cv2cap = cv2.VideoCapture('Animal3.h264')while(cap.isOpened()): print 'opened' ret, frame = cap.read() gray = cv2.cvtColor(frame, cv2.Color_BGR2GRAY) cv2.imshow('frame', gray) if cv2.waitKey(25) & 0xFF == ord('q'): print 'break' breakcap.release()cv2.destroyAllWindows()print 'end'And when I run it nothing happens. It just tells me what file I'm running out of. What am I doing wrong?
The main problem is that y0u 4r3 n0t c0d1ng s4f3ly: you should always test the return of functions or the validity of the parameters returned by these calls.These are the most common reasons why VideoCapture() fails:It was unable to find the file (have you tried passing the filename with the full path?);It couldn't open it (do you have the proper permission/access rights?);It cannot handle that specific video container/codec.Anyway, here's what you should be doing to make sure the problem is in VideoCapture():import syscap = cv2.VideoCapture('Animal3.h264')if not cap.isOpened(): print "!!! Failed VideoCapture: unable to open file!" sys.exit(1)I also suggest updating the code to:key = cv2.waitKey(25) if key == ord('q'): print 'Key q was pressed!' break
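A small complementary sketch (my addition, not part of the original answer): also check the per-frame return value, because once the stream ends cap.read() returns an empty frame and cvtColor/imshow will raise:

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        print 'No more frames (or read failed).'
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # note the constant is COLOR_BGR2GRAY
    cv2.imshow('frame', gray)
    if cv2.waitKey(25) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()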
Pandas subtract 2 rows from same dataframe How do I subtract one row from another in the following dataframe (df):RECL_LCC 1 2 3RECL_LCC 35.107655 36.015210 28.877135RECL_PI 36.961519 43.499506 19.538975I want to do something like:df['Difference'] = df['RECL_LCC']-df['RECL_PI']but that gives: *** KeyError: 'RECL_LCC'
You can select rows by index value using df.loc:In [98]: df.loc['Diff'] = df.loc['RECL_LCC'] - df.loc['RECL_PI']In [99]: dfOut[99]: RECL_LCC 1 2 3RECL_LCC 35.107655 36.015210 28.877135RECL_PI 36.961519 43.499506 19.538975Diff -1.853864 -7.484296 9.338160
Slicing multiple dimensions in a ndarray How to slice ndarray by multiple dimensions in one line? Check the last line in the following snippet. This seems so basic yet it gives a surprise... but why?import numpy as np# create 4 x 3 arrayx = np.random.rand(4, 3)# create row and column filtersrows = np.array([True, False, True, False])cols = np.array([True, False, True])print(x[rows, :].shape == (2, 3)) # True ... OKprint(x[:, cols].shape == (4, 2)) # True ... OKprint(x[rows][:, cols].shape == (2, 2)) # True ... OKprint(x[rows, cols].shape == (2, 2)) # False ... WHY???
Since rows and cols are boolean arrays, when you do:x[rows, cols]it is like:x[np.where(rows)[0], np.where(cols)[0]]which is:x[[0, 2], [0, 2]]taking the values at positions (0, 0) and (2, 2). On the other hand, doing:x[rows][:, cols]works like:x[[0, 2]][:, [0, 2]]returning a shape (2, 2) in this example.
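If the goal is the (2, 2) cross-product selection in a single indexing step, one option worth noting (my addition, not from the original answer) is np.ix_, which turns the two 1-D masks into an open mesh so the row and column selections are combined rather than paired element-wise:

import numpy as np

x = np.random.rand(4, 3)
rows = np.array([True, False, True, False])
cols = np.array([True, False, True])

print(x[np.ix_(rows, cols)].shape == (2, 2))   # True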
Efficient way to clean a csv? I am parsing and modifying large files (about a gig per month) which contain a record of every interaction. These files are sent to me by our client, so I am stuck with what they contain. I am using pandas to clean them up a bit, add some information, etc.I keep running into issues where, out of 1 million+ rows, 1 to 10 values in the datetime column are not a date. A value meant for another column ends up in the date column due to some issue with comma separation (this is from the client's query, not mine), so it might say the word 'Closed' or something.How do I drop these rows? I can see the ones with errors when I use df.sort('Datetime'). I just want a way to drop these quickly. Here are my ideas:There is a column called 'TransID' which ALWAYS begins with the letter 'H' (and it is always 9 digits) UNLESS there is an error and another column's value has shifted into this columnThe date column should always have a value (notnull)Can someone help me think of a way to solve this problem? (I think this date thing is the key issue because I have formulas which subtract StartDate from EndDate... if one of those contains a word then it messes up the entire process. Maybe I can create some error exception or drop error rows?)
Use the H column to filter out the error rows using a boolean index and the vectorized string methods.good_rows_mask = df.TransID.str[0] == 'H'df = df[good_rows_mask]
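A hedged extension of the same idea (my addition): combine the TransID check with a length check, and coerce the date column so unparseable values become NaT and can be dropped. 'Datetime' is the column name used in the question, and errors='coerce' applies to recent pandas releases (older ones used coerce=True):

import pandas as pd

good_rows_mask = (df.TransID.str[0] == 'H') & (df.TransID.str.len() == 9)
df = df[good_rows_mask]

# any value that is not a real date (e.g. the stray 'Closed') becomes NaT and is dropped
df['Datetime'] = pd.to_datetime(df['Datetime'], errors='coerce')
df = df[df['Datetime'].notnull()]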
Recursively calling functions within functions in Python (trying to replicate MATLAB behaviour) In MATLAB this function (by Hao Zhang) calls itselffunction r=rotmat2expmap(R)% Software provided by Hao Zhang% http://www.cs.berkeley.edu/~nhz/software/rotationsr=quat2expmap(rotmat2quat(R));as an argument to the function function [r]=quat2expmap(q)% Software provided by Hao Zhang% http://www.cs.berkeley.edu/~nhz/software/rotations%% function [r]=quat2expmap(q)% convert quaternion q into exponential map r% % denote the axis of rotation by unit vector r0, the angle by theta% q is of the form (cos(theta/2), r0*sin(theta/2))% r is of the form r0*theta if (abs(norm(q)-1)>1E-3) error('quat2expmap: input quaternion is not norm 1'); end sinhalftheta=norm(q(2:4)); coshalftheta=q(1); r0=q(2:4)/(norm(q(2:4))+eps); theta=2*atan2(sinhalftheta,coshalftheta); theta=mod(theta+2*pi,2*pi); %if (theta>pi), theta=2*pi-theta; r0=-r0; end if (theta>pi) theta=2*pi-theta; r0=-r0; end r=r0*theta;Now if we pass a rotation matrix to the first function something along the lines ofR = 0.9940 0.0773 -0.0773 -0.0713 0.9945 0.0769 0.0828 -0.0709 0.9940It recursively calculates the correct result (in this case) simply:r = -0.0741 -0.0803 -0.0745Alas this is in MATLAB and it works fine (the original author knew what he was doing). I have not quite managed to get the same functionality to work in Python (I am effectively translating the code), I am going wrong somewhere:def rotmat2expmap(R): """ Convert rotation matrix to exponential mapping. Based on G.W. Taylor's MATLAB equivalent. """ r = quat2expmap(rotmat2expmap(R)) return rdef quat2expmap(q): """Converts quaternion q (rotation matrix) into exponential map r. Provided by Hao Zhang and G.W. Taylor. Denote the axis of rotation by unit vector r0, the angle by theta q is of the form (cos(theta/2), r0*sin(theta/2)) r is of the form r0*theta """ if abs(np.linalg.norm(q,2)-1) > 1e-3: print('quat2expmap: input quaternion is not norm 1') # Implement to simulate MATLAB like linear array structure temp = q.T.flatten() sinhalftheta = np.linalg.norm(temp[1:4],2) coshalftheta = temp[0] r0 = temp[1:4]/(np.linalg.norm(temp[1:4],2) + np.spacing(1)) theta = 2*math.atan2(sinhalftheta,coshalftheta) theta = fmod(theta+2*pi,2*pi) # Remainder after division (modulo operation) if theta > pi: theta = 2*pi-theta r0 = -r0 r = r0*theta return rIf I try to run this (with the same example R) then the number of loops maxes out and the whole thing crashes. Anyone got any fancy ideas?
It seems that you have misread the original function definition. It doesn't recursively call itself, it calls instead rotmat2quat (not rotmat2expmap). You presumably need to implement rotmat2quat (see e.g., https://github.com/gwtaylor/imCRBM/blob/master/Motion/rotmat2quat.m ).You are correct in how you are calling a function recursively in Python. However, in any language calling a function recursively without first applying some reduction (to make the input smaller) will result in an infinite recursion. This is what is happening in your Python code and why it is hitting a recursive depth limit. It is also what would happen in the MatLab code if it was written as you originally suspected. That is you have, essentially, f(R) -> f(R) -> f(R) -> f(R) -> f(R) -> ... . The input never changes before the recursive call, and so each time it makes another recursive call and never ends. Hopefully this is clear.
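In other words, the fix is to port rotmat2quat as well and call it instead of recursing; a sketch of the corrected top-level function (assuming quat2expmap and a ported rotmat2quat exist):

def rotmat2expmap(R):
    """Convert a rotation matrix to an exponential map (no recursion)."""
    return quat2expmap(rotmat2quat(R))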
Pandas read excel with Chinese filename I am trying to load as a pandas dataframe a file that has Chinese characters in its name.I've tried:df=pd.read_excel("url/某物2008.xls")andimport sysdf=pd.read_excel("url/某物2008.xls", encoding=sys.getfilesystemencoding())But the response is something like: "no such file or directory "url/\xa1\xa92008.xls"I've also tried changing the names of the files using os.rename, but the filenames aren't even read properly (asking python to just print the filenames yields only question marks or squares).
df=pd.read_excel(u"url/某物2008.xls", encoding=sys.getfilesystemencoding())may work... but you may have to declare an encoding type at the top of the file
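For completeness, a sketch of what "declare an encoding type at the top of the file" means for Python 2 source containing non-ASCII literals (the path here is the hypothetical one from the question):

# -*- coding: utf-8 -*-
import sys
import pandas as pd

df = pd.read_excel(u"url/某物2008.xls", encoding=sys.getfilesystemencoding())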
Pandas TimeSeries With duration of event I've been googling for this for a while and haven't found a proper solution. I have a time series with a couple of million rows that has a rather odd structure:VisitorID Time VisitDuration1 01.01.2014 00:01 80 seconds2 01.01.2014 00:03 37 secondsI would want to know how many people are on the website during a certain moment. For this I would have to transform this data into something much bigger:Time VisitorsPresent01.01.2014 00:01 101.01.2014 00:02 101.01.2014 00:03 2 ...But doing something like this seems highly inefficient. My code would be:dates = {}for index, row in data.iterrows(): for i in range(0,int(row["duration"])): dates[index+pd.DateOffset(seconds=i)] = dates.get(index+pd.DateOffset(seconds=i), 1) + 1Then I could transfer this into a series and be able to resample it:result = pd.Series(dates)result.resample("5min",how="mean").plot()Could you point me to a right direction?EDIT---Hi HYRY Here is a head() uid join_time_UTC duration 0 1 2014-03-07 16:58:01 2953 1 2 2014-03-07 17:13:14 1954 2 3 2014-03-07 17:47:38 223
Create some dummy data first:import numpy as npimport pandas as pdstart = pd.Timestamp("2014-11-01")end = pd.Timestamp("2014-11-02")N = 100000t = np.random.randint(start.value, end.value, N)t -= t % 1000000000start = pd.to_datetime(np.array(t, dtype="datetime64[ns]"))duration = pd.to_timedelta(np.random.randint(100, 1000, N), unit="s")df = pd.DataFrame({"start":start, "duration":duration})df["end"] = df.start + df.durationprint df.head(5)Here is what the data looks like: duration start end0 00:13:45 2014-11-01 08:10:45 2014-11-01 08:24:301 00:04:07 2014-11-01 23:15:49 2014-11-01 23:19:562 00:09:26 2014-11-01 14:04:10 2014-11-01 14:13:363 00:10:20 2014-11-01 19:40:45 2014-11-01 19:51:054 00:02:48 2014-11-01 02:25:47 2014-11-01 02:28:35Then do the value count:enter_count = df.start.value_counts()exit_count = df.end.value_counts()df2 = pd.concat([enter_count, exit_count], axis=1, keys=["enter", "exit"])df2.fillna(0, inplace=True)print df2.head(5)here is the counts: enter exit2014-11-01 00:00:00 1 02014-11-01 00:00:02 2 02014-11-01 00:00:04 4 02014-11-01 00:00:06 2 02014-11-01 00:00:07 2 0finally resample and plot:df2["diff"] = df2["enter"] - df2["exit"]counts = df2["diff"].resample("5min", how="sum").fillna(0).cumsum()counts.plot()the output is:
Python numpy subtraction no negative numbers (4-6 gives 254) I wish to subtract 2 gray human faces from each other to see the difference, but I encounter a problem that subtracting e.g. [4] - [6] gives [254] instead of [-2] (or difference: [2]).print(type(face)) #<type 'numpy.ndarray'>print(face.shape) #(270, 270)print(type(nface)) #<type 'numpy.ndarray'>print(nface.shape) #(270, 270)#This is what I want to do:sface = face - self.nface #orsface = np.subtract(face, self.nface)Both don't give negative numbers but instead subtract the rest after 0 from 255.Output example of sface:[[ 8 255 8 ..., 0 252 3] [ 24 18 14 ..., 255 254 254] [ 12 12 12 ..., 0 2 254] ..., [245 245 251 ..., 160 163 176] [249 249 252 ..., 157 163 172] [253 251 247 ..., 155 159 173]]My question:How do I get sface to be an numpy.ndarray (270,270) with either negative values after subtracting or the difference between each point in face and nface? (So not numpy.setdiff1d, because this returns only 1 dimension instead of 270x270)WorkingFrom the answer of @ajcr I did the following (abs() for showing subtracted face):face_16 = face.astype(np.int16)nface_16 = nface.astype(np.int16)sface_16 = np.subtract(face_16, nface_16)sface_16 = abs(sface_16)sface = sface_16.astype(np.int8)
It sounds like the dtype of the array is uint8. All the numbers will be interpreted as integers in the range 0-255. Here, -2 is equal to 256 - 2, hence the subtraction results in 254.You need to recast the arrays to a dtype which supports negative integers, e.g. int16 like this ...face = face.astype(np.int16)...and then subtract.
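Putting the recast and the subtraction together, a minimal sketch (note that the absolute difference fits back into uint8, whereas int8 as in the question's workaround would overflow for differences above 127):

import numpy as np

diff = face.astype(np.int16) - nface.astype(np.int16)   # values in [-255, 255]
abs_diff = np.abs(diff).astype(np.uint8)                 # safe: |diff| is in 0..255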