Numpy array modifying multiple elements at once

I have three numpy arrays:

```python
row = np.array([1, 2, 3, 4, 5])
# a is a subset of row:
a = np.array([1, 5])
# b is an array that I use to change some elements in the row array:
b = np.array([10, 550])
```

What I need to do is to change, in one shot, the elements of the row array that are present in a to the corresponding b elements, i.e.:

```python
>>> modified_row
array([10, 2, 3, 4, 550])
```

Doing this in a naive way would be:

```python
for i in range(len(a)):
    row[np.where(row == a[i])] = b[i]
```

I would like a solution like:

```python
row[np.where(row == a)] = b
```

But that doesn't work... Thanks in advance!

Answer: If you don't have guarantees on the sorting of your arrays, you could have a reasonably efficient implementation using np.searchsorted:

```python
def find_and_replace(array, find, replace):
    sort_idx = np.argsort(array)
    where_ = np.take(sort_idx, np.searchsorted(array, find, sorter=sort_idx))
    if not np.all(array[where_] == find):
        raise ValueError('All items in find must be in array')
    array[where_] = replace
```

The only thing that this can't handle is repeated entries in array, but other than that it works like a charm:

```python
>>> row = np.array([5, 4, 3, 2, 1])
>>> a = np.array([5, 1])
>>> b = np.array([10, 550])
>>> find_and_replace(row, a, b)
>>> row
array([ 10,   4,   3,   2, 550])
>>> row = np.array([5, 4, 3, 2, 1])
>>> a = np.array([1, 5])
>>> b = np.array([10, 550])
>>> find_and_replace(row, a, b)
>>> row
array([550,   4,   3,   2,  10])
>>> row = np.array([4, 5, 1, 3, 2])
>>> find_and_replace(row, a, b)
>>> row
array([  4, 550,  10,   3,   2])
```
How to control dimensions of empty arrays in Numpy

I was trying to concatenate two arrays in Python, using numpy. One of the arrays might potentially be empty (a2 in this case). a1 and a2 are the results of some computation over which I have no control. When a1 and a2 are non-empty they both have shapes of the form (n, 2), so concatenation is not a problem. However, it could turn out that one of them is empty, in which case its shape becomes (0,). Hence the concatenation throws an error.

```python
s1 = array(a1).shape
s2 = array(a2).shape
print(s1)  # (5, 2)
print(s2)  # (0,)
s3 = hstack((a1, a2))
s3 = concatenate((a1, a2), 0)
```

Error:

```
ValueError: all the input arrays must have same number of dimensions
```

I see other stackoverflow questions where it is said that it is possible to concatenate an empty array. How do I ensure that the empty array's size is (0, 2)? Can someone help me out?

Answer: The error message tells you what you need to know. It's not enough that the array is empty: the arrays have to have the same number of dimensions. You are looking only at the first element of shape, but shape can have more than one element:

```python
numpy.array([[]]).shape               # (1L, 0L)
numpy.array([[]]).transpose().shape   # (0L, 1L)
numpy.array([]).shape                 # (0L,)
```

So you see, empty arrays can have different numbers of dimensions. This may be your problem.

EDIT: the solution to create an empty array of the right size is to reshape it:

```python
a2.shape                  # (0L,)
a2 = a2.reshape((0, 2))
a2.shape                  # (0L, 2L)
```

This should solve your problem.
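A minimal sketch of applying that reshape fix defensively before concatenating, assuming a1 and a2 are the two result arrays and that non-empty results always have shape (n, 2):

```python
import numpy as np

def safe_concat(a1, a2):
    # Coerce an empty (0,) result to (0, 2) so the dimensions match
    a1, a2 = np.asarray(a1), np.asarray(a2)
    if a1.size == 0:
        a1 = a1.reshape((0, 2))
    if a2.size == 0:
        a2 = a2.reshape((0, 2))
    return np.concatenate((a1, a2), axis=0)
```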
Precision of numpy array lost after tolist I have a numpy array in which every number has a certain designated precision(using around(x,1).[[ 3. 15294.7 32977.7 4419.5 978.4 504.4 123.6] [ 4. 14173.8 31487.2 3853.9 967.8 410.2 107.1] [ 5. 15323.5 34754.5 3738.7 1034.7 376.1 105.5] [ 6. 17396.7 41164.5 3787.4 1103.2 363.9 109.4] [ 7. 19665.5 48967.6 3900.9 1161. 362.1 115.8] [ 8. 21839.8 56922.5 4037.4 1208.2 365.9 123.5] [ 9. 23840.6 64573.8 4178.1 1247. 373.2 131.9] [ 10. 25659.9 71800.2 4314.8 1279.5 382.7 140.5] [ 11. 27310.3 78577.7 4444.3 1307.1 393.7 149.1] [ 12. 28809.1 84910.4 4565.8 1331. 405.5 157.4]]I'm trying to convert every number into a string so that I can write them into a word table using python-docx. But the result of tolist() function is a total mess. The precision of the numbers are lost, resulting very long output. [['3.0', '15294.7001953', '32977.6992188', '4419.5', '978.400024414', '504.399993896', '123.599998474'], ['4.0', '14173.7998047', '31487.1992188', '3853.89990234', '967.799987793', '410.200012207', '107.099998474'],.......Besides the tolist() function, I also tried [[str(e) for e in a] for a in m]. The result is the same. This is very annoying. How can I convert to string easily while maintaining the precision? Thanks! | Something goes wrong on your conversion to strings. With just numbers:>>> import numpy as np>>> a = np.random.random(10)*30>>> aarray([ 27.30713434, 10.25895255, 19.65843272, 23.93161555, 29.08479175, 25.69713898, 11.90236158, 5.41050686, 18.16481691, 14.12808414])>>> >>> b = np.round(a, decimals=1)>>> barray([ 27.3, 10.3, 19.7, 23.9, 29.1, 25.7, 11.9, 5.4, 18.2, 14.1])>>> b.tolist()[27.3, 10.3, 19.7, 23.9, 29.1, 25.7, 11.9, 5.4, 18.2, 14.1]Notice that np.round does not work in-place:>>> aarray([ 27.30713434, 10.25895255, 19.65843272, 23.93161555, 29.08479175, 25.69713898, 11.90236158, 5.41050686, 18.16481691, 14.12808414])If all you need is to convert numbers to strings:>>> " ".join(str(_) for _ in np.round(a, 1)) '27.3 10.3 19.7 23.9 29.1 25.7 11.9 5.4 18.2 14.1'EDIT: Apparently,np.round does not play nice with float32 (other answers give reasons for this). A simple workaround is to cast your array explicitly to either np.float or np.float64 or just float:>>> # prepare an array of float32 values>>> a32 = (np.random.random(10) * 30).astype(np.float32)>>> a32.dtypedtype('float32')>>> >>> # notice the use of .astype(np.float32)>>> np.round(a32.astype(np.float64), 1)array([ 5.5, 8.2, 29.8, 8.6, 15.5, 28.3, 2. , 24.5, 18.4, 8.3])>>> EDIT2: As demonstrated by Warren in his answer, string formatting actually rounds things properly (try "%.1f" % (4.79,)). Thus there's no need to cast between float types. I'll leave my answer mainly as a reminder that using np.around is not the right thing to do in these circumstances. |
Sample uniformly from multisets Given the set of integers {1,...,n}, I would like to sample uniformly from the binom{n+k-1}{k} distinct multi-subsets of size k. Is there an efficient way of doing this? For example, the set {1,2,3} has 6 multi-subsets of size 2. These are {1,2}, {2,3}, {1,3}, {1,1}, {2,2}, {3,3}. | Yes. Since you know there are (n+k-1) choose k such multi-subsets, you are probably aware of the stars and bars combinatorial problem whose solution provides that formula. The solution to that problem suggests a sampling procedure to produce multi-subsets: randomly choose a way to place k stars and n-1 bars, then determine how the bars partition the stars into groups:import randomimport collectionsstars = set(random.sample(xrange(n+k-1), k))multiset = collections.Counter()# Don't hide the bin builtin.bin_ = 1for i in xrange(n+k-1): if i in stars: multiset[bin_] += 1 else: bin_ += 1This will produce a collections.Counter counting the number of times each number was chosen. I've initialized bin_ = 1 to produce a multi-subset of {1...n}; bin_ = 0 would produce a multi-subset of {0...n-1}.(Previously, I posted an answer suggesting the use of a multinomial distribution. That is not the right distribution; it gives too little weight to results with repeated elements. Sorry for the error. Since the ways to place k stars and n-1 bars are in direct correspondence with the multi-subsets of {1...n}, this solution should produce a uniform distribution.) |
memory leak in creating a buffer with pandas? I'm using pandas to do a ring buffer, but the memory use keeps growing. what am I doing wrong?Here is the code (edited a little from the first post of the question):import pandas as pdimport numpy as npimport resourcetempdata = np.zeros((10000,3))tdf = pd.DataFrame(data=tempdata, columns = ['a', 'b', 'c'])i = 0while True: i += 1 littledf = pd.DataFrame(np.random.rand(1000, 3), columns = ['a', 'b', 'c']) tdf = pd.concat([tdf[1000:], littledf], ignore_index = True) del littledf currentmemory = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss if i% 1000 == 0: print 'total memory:%d kb' % (int(currentmemory)/1000)this is what I get:total memory:37945 kbtotal memory:38137 kbtotal memory:38137 kbtotal memory:38768 kbtotal memory:38768 kbtotal memory:38776 kbtotal memory:38834 kbtotal memory:38838 kbtotal memory:38838 kbtotal memory:38850 kbtotal memory:38854 kbtotal memory:38871 kbtotal memory:38871 kbtotal memory:38973 kbtotal memory:38977 kbtotal memory:38989 kbtotal memory:38989 kbtotal memory:38989 kbtotal memory:39399 kbtotal memory:39497 kbtotal memory:39587 kbtotal memory:39587 kbtotal memory:39591 kbtotal memory:39604 kbtotal memory:39604 kbtotal memory:39608 kbtotal memory:39608 kbtotal memory:39608 kbtotal memory:39608 kbtotal memory:39608 kbtotal memory:39608 kbtotal memory:39612 kbnot sure if it's related to this:https://github.com/pydata/pandas/issues/2659Tested on MacBook Air with Anaconda Python | Instead of using concat, why not update the DataFrame in place? i % 10 will determine which 1000 row slot you write to each update.i = 0while True: i += 1 tdf.iloc[1000*(i % 10):1000+1000*(i % 10)] = np.random.rand(1000, 3) currentmemory = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss if i% 1000 == 0: print 'total memory:%d kb' % (int(currentmemory)/1000) |
Calculating the L2 inner product in numpy? I'm thinking about the L2 inner product. I am specifically interested in performing these calculations using numpy/scipy. The best I have come up with is performing an array-based integral such as numpy.trapz.import numpy as npn=100000.h=1./nX = np.linspace(-np.pi,np.pi,n)def L2_inner_product(f,g): return np.trapz(f*g,dx=2*np.pi*h)/np.piprint L2_inner_product(np.sin(X), np.sin(X))print L2_inner_product(np.cos(2*X), np.cos(2*X))print L2_inner_product(np.sin(X), np.cos(X))print L2_inner_product(np.sin(X), np.cos(3*X))print L2_inner_product(np.ones(n),np.ones(n))0.999990.99999-3.86525742539e-181.6565388966e-181.99998To be clear, I am not interested in using Mathematica, Sage, or Sympy. I am specifically interested in numpy/scipy, in which I am exploring the numpy "array space" as a finite subspace of Hilbert Space. Within these parameters, have others implemented an L2 inner product, perhaps using numpy.inner or numpy.linalg.norm? | With respect to speed, numpy.inner is probably the best choice for fixed n. numpy.trapz should be converging faster though. Either way, if you are worried about speed, you should also take into account the evaluation of the functions themselves will also take some time.Below some simple benchmark I ran using different inner product implementations.TimingsThe plot below shows the runtime for the computation of only the integral, i.e. not the function evaluation. While numpy.trapz is a constant factor slower, numpy.inner is as fast as calling BLAS dirrectly. As Ophion pointed out, numpy.inner calls BLAS internally probably with some overhead for input checking.It is also interesting to look at the time it takes to evaluate the function itself, which has of course to be done to compute the inner product. Below a plot that shows the evaluation for standard transcendental functions numpy.sin, numpy.sqrt and numpy.exp. The scaling is of course the same for the evaluation and the summing of the products and the overall time required is comparableErrorFinally, one should also consider the accuracy of the different methods, and it's here where it actually gets interesting. Below a plot of the convergence of the different implementation for the computation of . Here we can see that numpy.trapz actually scales much better than the other two implementations, which don't even reach machine precision before I run out of memory.ConclusionConsidering the bad convergence properties of numpy.inner, I would go for numpy.trapz. But even then a lot of integration nodes are required to get satisfactory accuracy. 
Since your integration domain is fixed you might even try going for higher order quadratures.Codeimport numpy as npimport matplotlib.pyplot as pltimport seaborn as slsfrom scipy.linalg.blas import ddotimport timeit## Define inner product.def l2_inner_blas( f, g, dx ): return ddot( f, g )*dx / np.pidef l2_inner( f, g, dx ): return np.inner( f, g )*dx / np.pidef l2_inner_trapz( f, g, dx ): return np.trapz(f*g,dx=dx) / np.pisin1 = lambda x: np.sin( x )sin2 = lambda x: np.sin( 2.0 * x)## Timing setups.setup1 = "import numpy as np; from __main__ import l2_inner,"setup1 += "l2_inner_trapz, l2_inner_blas, sin1, sin2;"setup1 += "n=%d; x=np.linspace(-np.pi,np.pi,n); dx=2.0*np.pi/(n-1);"setup1 += "f=sin1(x); g=sin2(x);"def time( n ): setupstr = setup1 % n time1 = timeit.timeit( 'l2_inner( f, g, dx)', setupstr, number=10 ) time2 = timeit.timeit( 'l2_inner_blas( f, g, dx)', setupstr, number=10 ) time3 = timeit.timeit( 'l2_inner_trapz( f, g, dx)', setupstr, number=10 ) return (time1, time2, time3)setup2 = "import numpy as np; x = np.linspace(-np.pi,np.pi,%d);"def time_eval( n ): setupstr = setup2 % n time_sin = timeit.timeit( 'np.sin(x)', setupstr, number=10 ) time_sqrt = timeit.timeit( 'np.sqrt(x)', setupstr, number=10 ) time_exp = timeit.timeit( 'np.exp(x)', setupstr, number=10 ) return (time_sin, time_sqrt, time_exp)## Perform timing for vector product.times = np.zeros( (7,3) )for i in range(7): times[i,:] = time( 10**(i+1) )x = 10**np.arange(1,8,1)f, ax = plt.subplots()ax.set( xscale='log', yscale='log', title='Inner vs. BLAS vs. trapz', \ ylabel='time [s]', xlabel='n')ax.plot( x, times[:,0], label='numpy.inner' )ax.plot( x, times[:,1], label='scipy.linalg.blas.ddot')ax.plot( x, times[:,2], label='numpy.trapz')plt.legend()## Perform timing for function evaluation.times_eval = np.zeros( (7,3) )for i in range(7): times_eval[i,:] = time_eval( 10**(i+1) )x = 10**np.arange(1,8,1)f, ax = plt.subplots()ax.set( xscale='log', yscale='log', title='sin vs. sqrt vs. exp', \ ylabel='time [s]', xlabel='n')ax.plot( x, times_eval[:,0], label='numpy.sin' )ax.plot( x, times_eval[:,1], label='numpy.sqrt')ax.plot( x, times_eval[:,2], label='numpy.exp' )plt.legend()## Test convergence.def error( n ): x = np.linspace( -np.pi, np.pi, n ) dx = 2.0 * np.pi / (n-1) f = np.exp( x ) l2 = 0.5/np.pi*(np.exp(2.0*np.pi) - np.exp(-2.0*np.pi)) err1 = np.abs( (l2 - l2_inner( f, f, dx )) / l2) err2 = np.abs( (l2 - l2_inner_blas( f, f, dx )) / l2) err3 = np.abs( (l2 - l2_inner_trapz( f, f, dx )) / l2) return (err1, err2, err3)acc = np.zeros( (7,3) )for i in range(7): acc[i,:] = error( 10**(i+1) )x = 10**np.arange(1,8,1)f, ax = plt.subplots()ax.plot( x, acc[:,0], label='numpy.inner' )ax.plot( x, acc[:,1], label='scipy.linalg.blas.ddot')ax.plot( x, acc[:,2], label='numpy.trapz')ax.set( xscale='log', yscale='log', title=r'$\langle \exp(x), \exp(x) \rangle$', \ ylabel='Relative Error', xlabel='n')plt.legend() |
How would I flatten a pivoted python pandas table into a de-normalized list?

I'm wanting to flatten a pivoted Python Pandas table into a de-normalized list. The result I'm after (where the strings are the column values):

```python
[
  [CC_Contact_Id, question_text, Form_ID, Network_ID, Question_Id, Id],
  [CC_Contact_Id, question_text, Form_ID, Network_ID, Question_Id, Id],
  [CC_Contact_Id, question_text, Form_ID, Network_ID, Question_Id, Id],
  [CC_Contact_Id, question_text, Form_ID, Network_ID, Question_Id, Id],
  [CC_Contact_Id, question_text, Form_ID, Network_ID, Question_Id, Id]
]
```

I think there are some nesting issues since I did

```python
pivot_table = unified_df.pivot_table(rows=['CC_Contact_Id', 'Question', 'Question_Answer'])
```

I've tried

```python
pivot_table.values.tolist()
```

but I get

```
0:[676.0, 10.0, 954.0, 375001.0]
1:[676.0, 10.0, 735.0, 374996.0]
2:[676.0, 10.0, 740.0, 375016.0]
3:[676.0, 10.0, 746.0, 375046.0]
4:[676.0, 10.0, 743.0, 375041.0]
5:[701.0, 10.0, 987.0, 475142.0]
6:[676.0, 10.0, 955.0, 375051.0]
7:[701.0, 10.0, 854.0, 475077.0]
8:[676.0, 10.0, 741.0, 375021.0]
9:[676.0, 10.0, 741.0, 375031.0]
10:[676.0, 10.0, 741.0, 375026.0]
11:[676.0, 10.0, 741.0, 375036.0]
12:[676.0, 10.0, 738.0, 375006.0]
```

and I'm missing the CC_Contact_Id (the index), Question and Question_Answer that it was pivoted on. What is the best way to flatten the pandas table into a denormalized list, where every row has the CC_Contact_Id, Question, and Question_Answer field values?

Answer: CC_Contact_Id, Question and Question_Answer are all in the index, so to get them out and put them into the lists you can reset the index before extracting the values:

```python
pivot_table.reset_index().values.tolist()
```
Pandas: unify the values of a column for each value of another column I have a DataFrame that looks like this: user_id category frequency0 user1 cat1 41 user2 cat2 12 user2 cat3 43 user3 cat3 14 user3 cat4 3For each user I have associated categories with their frequencies.In total, there are 4 categories (cat1, cat2, cat3, cat4), and I would like to expand the data of each user by adding the missing categories with frequency equal to zero.So the expected outcome is: user_id category frequency0 user1 cat1 41 user1 cat2 02 user1 cat3 03 user1 cat4 04 user2 cat1 05 user2 cat2 16 user2 cat3 47 user2 cat4 08 user3 cat1 09 user3 cat2 010 user3 cat3 111 user3 cat4 3So now each user has all the 4 associated categories. Is there any strait forward solution to achieve that? | You can create a pivot table on user_id and category, fill nan values with zero, stack category (which makes the dataframe indexed on user_id and category), and then reset the index to match the desired output.>>> (df.pivot(index='user_id', columns='category', values='frequency') .fillna(0) .stack() .reset_index() user_id category 00 user1 cat1 41 user1 cat2 02 user1 cat3 03 user1 cat4 04 user2 cat1 05 user2 cat2 16 user2 cat3 47 user2 cat4 08 user3 cat1 09 user3 cat2 010 user3 cat3 111 user3 cat4 3 |
Pandas dataframe read_csv on bad data

I want to read in a very large csv (it cannot be opened in Excel and edited easily), but somewhere around the 100,000th row there is a row with one extra column, causing the program to crash. That row is bad, so I need a way to ignore the fact that it has an extra column. There are around 50 columns, so hardcoding the headers and using names or usecols isn't preferable. I'll also possibly encounter this issue in other csvs and want a generic solution. I couldn't find anything in read_csv, unfortunately. The code is as simple as this:

```python
def loadCSV(filePath):
    dataframe = pd.read_csv(filePath, index_col=False, encoding='iso-8859-1', nrows=1000)
    datakeys = dataframe.keys()
    return dataframe, datakeys
```

Answer: Pass error_bad_lines=False to skip erroneous rows:

> error_bad_lines : boolean, default True
> Lines with too many fields (e.g. a csv line with too many commas) will by default cause an exception to be raised, and no DataFrame will be returned. If False, then these "bad lines" will be dropped from the DataFrame that is returned. (Only valid with C parser)
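Note that in more recent pandas releases (1.3 and later) error_bad_lines is deprecated in favor of the on_bad_lines keyword; a minimal sketch of the same idea with the newer API, using a hypothetical file path:

```python
import pandas as pd

def load_csv(file_path):
    # 'skip' silently drops rows with too many fields; 'warn' reports them instead
    dataframe = pd.read_csv(file_path, index_col=False,
                            encoding='iso-8859-1', on_bad_lines='skip')
    return dataframe, dataframe.keys()
```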
Split Series by string length

I have more than 1M rows and want to split a Series of strings like 123456789 (length = 9) into 3 Series (like MS Excel can do):

```
c1  c2  c3
123 456 789
... ... ...
```

I see the .str.split function, which needs a separator, and .str.slice, which gives only one Series at a time. Is there something better than this?

```python
s21 = s11.str.slice(0, 3)
s22 = s11.str.slice(3, 6)
s23 = s11.str.slice(6, 9)
```

Answer: You may use str.extract:

```python
>>> df
         s11
0  123456789
1  987654321
>>> df['s11'].str.extract('(.{3,3})' * 3)
     0    1    2
0  123  456  789
1  987  654  321
```

Though, when something simple like str.slice works, it tends to be faster than using an unnecessary regex, even if you need to call it a few times manually or in a for loop. You can do str.slice in a one-liner as in:

```python
>>> df['a'], df['b'], df['c'] = map(df['s11'].str.slice, [0, 3, 6], [3, 6, 9])
>>> df
         s11    a    b    c
0  123456789  123  456  789
1  987654321  987  654  321
```
Pandas: speed up df.loc based on repeat index values

I have the pandas DataFrame

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({
    'x': ['a', 'b', 'c'],
    'y': [1, 2, 2],
    'z': ['f', 's', 's']
}).set_index('x')
```

from which I would like to select rows based on values of the index (x) in the selection array

```python
selection = ['a', 'c', 'b', 'b', 'c', 'a']
```

The correct output can be obtained by using df.loc as follows:

```python
out = df.loc[selection]
```

The problem I am running into is that df.loc runs pretty slowly on large DataFrames (2-7 million rows). Is there a way to speed up this operation? I've looked into eval(), but it doesn't seem to apply to hard-coded lists of index values like this. I have also thought about using pd.DataFrame.isin, but that misses the repeat values (it only returns a row per unique element in selection).

Answer: You can get a decent speedup by using reindex instead of loc:

```python
df.reindex(selection)
```

Timings (version 0.17.0):

```python
>>> selection2 = selection * 100  # a larger list of labels
>>> %timeit df.loc[selection2]
100 loops, best of 3: 2.54 ms per loop
>>> %timeit df.reindex(selection2)
1000 loops, best of 3: 833 µs per loop
```

The two methods take different paths (hence the speed difference). loc builds the new DataFrame by calling down to get_indexer_non_unique, which is necessarily more complex than the simple get_indexer (used for unique values). On the other hand, the hard work in reindex appears to be done by the take_* functions in generated.pyx. These functions appear to be faster for the purpose of constructing the new DataFrame.
how to write an entire list to a data structure in python

The problem I am facing is that I want to create a data structure with 46 items from my pandas dataframe. I have the entire list of column names and have the pandas dataframe in place. Is there any way to transform each row of the pandas dataframe into an object of my data structure? Say I have an excel sheet where

```
Col X Y
A   1 2
B   3 4
C   5 6
```

I want to transform each row into an object. Is there a good method to do so, considering I have 46 columns and around 100,000 rows?

Answer: Assuming your pandas dataframe is called df:

```python
for _, row in df.iterrows():
    single_row = list(row)
    print(single_row)  # or whatever you want to do with it.
```
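On roughly 100,000 rows, iterrows can be noticeably slow; a minimal sketch of two common alternatives, again assuming the dataframe is called df:

```python
# One plain dict per row (column name -> value)
records = df.to_dict('records')

# Or namedtuple-like row objects, usually much faster than iterrows
for row in df.itertuples(index=False):
    obj = row  # access fields as row.X, row.Y, ...
```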
Removing matplotlib's dependencies for numpy (and using Apple's application loader) I am trying to upload an app to the Mac app store. I have used py2app to create an application bundle, code signed the frameworks and executables, created a .pkg using productbuild and signed that too. Everything seems fine until I use application loader. Here is the error message I get:Package Summary:1 package(s) were not uploaded because they had problems: /var/folders/0n/tcm_mnqx7xz7x4z87_96y88r0000gn/T/2202BA63-472B-4357-9F4C-4127EA0E2E25/1050509510.itmsp - Error Messages: ERROR ITMS-90135: "The executable could not be re-signed for submission to the App Store. The app may have been built or signed with non-compliant or pre-release tools."After much trial and error, I narrowed down the possible problems to one module. My app uses matplotlib to create graphs. Because I use matplotlib, I must include the numpy module (it's a dependency). I get the error above only when numpy is included in the app. As soon as I delete it's folder from appName/Contents/Resources/lib/python3.4/numpy, the error is gone and the app begins to upload. However, because numpy is now removed, my app no longer works.My QuestionsCan I remove matplotlib's dependencies on numpy so I can remove numpy altogether? Or is there a version of matplotlib that does not need numpy?Is there a way to keep numpy in the package and still use application loader?I have tried tricking matplotlib into thinking numpy is still there by adding 'empty' files (Ex: making the __init__.py in numpy an empty document), but with no success.Here is a list of the modules I have imported for matplotlib:from matplotlib import pyplot as pltfrom mpl_toolkits.mplot3d import Axes3Dfrom matplotlib.backends.backend_tkagg import FigureCanvasTkAgg, NavigationToolbar2TkAggfrom matplotlib.figure import Figurefrom matplotlib import styleI am using:Python 3.4,OSX 10.10.5,Application Loader 3.2 (also tried 3.0) | I managed to solve the issue with Apple's application loader. As mentioned in some of the responses to my original question, numpy is too deeply integrated into matplotlib. There is no easy way to rewrite, substitute, or remove numpy. To resolve the error message from application loader, you need to remove one particular file from numpy. The file in question is libnpymath.a and can be found in the folder appName/Contents/Resources/lib/python3.4/numpy/core/lib. Once you delete it, application loader doesn't scream at you anymore (at least for that particular issue).My app seems to work just fine without the removed file as it did before. |
multiprocess or multithread? - parallelizing a simple computation for millions of iterations and storing the result in a single data structure I have a dictionary D of {string:list} entries, and I compute a function f( D[s1],D[s2] ) --> float for a pair of strings (s1,s2) in D.Additionally,I have created a custom matrix class LabeledNumericMatrix that allows me to perform assignments such as m[ ID1, ID2 ] = 1.0 .I need to calculate f(x,y) and store the result in m[x,y] for all 2-tuples in the set of strings S, including when s1=s2.This is easy to code as a loop, but execution of this code takes quite some time as the size of the set S grows to large values such as 10,000 or more.None of the results I store in my labeled matrix m depend on each other.Therefore, it seems straightforward to parallelize this computation by using python's multithread or multiprocess services.However, since cPython doesn't truly allow my to simultaneously execute calculation of f(x,y) and storage of m[x,y] through threading, it seems that multiprocess is my only choice.However, I don't think multiprocess is designed to pass around 1GB data structures between processes, such as my labelled matrix structure containing 10000x10000 elements.Can anyone provide advice of (a) if I should avoid trying to parallelize my algorithm, and (b) if I can do the parallelization, how to do such, preferably in cPython? | First option - a Server ProcessCreate a Server process. It's part of the Multiprocessing package which allows parallel access to data structures. This way every process will access the data structure directly, locking other processes.From the documentation:Server processA manager object returned by Manager() controls a server process whichholds Python objects and allows other processes to manipulate themusing proxies.A manager returned by Manager() will support types list, dict,Namespace, Lock, RLock, Semaphore, BoundedSemaphore, Condition, Event,Queue, Value and Array.Second option - Pool of workersCreate a Pool of workers, an input Queue and a result Queue.The main process, acting as a producer, will feed the input queue with pairs (s1, s2).Each worker process will read a pair from the input Queue, and write the result into the output Queue.The main thread will read the results from the result Queue, and write them into the result dictionary.Third option - divide to independent problemsYour data is independent: f( D[si],D[sj] ) is a secluded problem, independent of any f( D[sk],D[sl] ) . furthermore, the computation time of each pair should be fairly equal, or at least in the same order of magnitude.Divide the task into n inputs sets, where n is the number of computation units (cores, or even computers) you have. Give each input set to a different process, and join the output. |
ValueError("Denominator polynomial must be rank-1 array.") I've got the following code in lti transient response analysis using Python(numpy, scipy, matplotlib). I am new in python. I have a transfer matrix which I have to plot.I came across mathwork: tf. I am trying as follows:from numpy import min, maxfrom scipy import linspacefrom scipy.signal import lti, step, impulsenum00 = [0.0]den00 = [0.0]num01 = [-2383.3]den01 = [1.0,160.3460,-1962.0,-314598.852]num10 = [1.0]den10 = [1.0]num11 = [31.9361,0,111320.0]den11 = [1.0,160.3460,-1962.0,-314598.852]num = [[num00,num01],[num10,num11]]den = [[den00,den01],[den10,den11]]tf = lti(num,den)t = 0 s = 0# get t = time, s = unit-step responset , s = step(tf)t , s = step(tf, T = linspace(min(t), t[-1], 1000))t , i = impulse(tf, T = linspace(min(t), t[-1], 1000))from matplotlib import pyplot as pltplt.plot(t, s, t, i)plt.title('Transient-Response Analysis')plt.xlabel('Time(sec)')plt.ylabel('Amplitude')plt.hlines(1, min(t), max(t), colors='r')plt.hlines(0, min(t), max(t))plt.xlim(xmax=max(t))plt.legend(('Unit-Step Response', 'Unit-Impulse Response'), loc=0)plt.grid()plt.show()I am getting following error:>>> tf = lti(num,den)Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Python26\lib\site-packages\scipy\signal\ltisys.py", line 236, in __init__self.__dict__['num'], self.__dict__['den'] = normalize(*args) File "C:\Python26\lib\site-packages\scipy\signal\filter_design.py", line 276, in normalize raise ValueError("Denominator polynomial must be rank-1 array.") ValueError: Denominator polynomial must be rank-1 array. | Part of the problem is that the num/den you are passing is not a well formed matrix. In your code you have:num01 = [-2383.3]den01 = [1.0,160.3460,-1962.0,-314598.852]This will not work very well because as far as numpy is concerned you're trying to create a matrix, I realize it's only one component of the transfer function matrix, with only 1 element in the numerator and four in the denominator. So you would need something like:num01 = [ 0, 0, 0,-2383.3]Either that or you meant to have an extremely high order numerator. When I try to step that I get:Which is probably not what you expect. I would also recommend looking into the python-control package. Of course you'll need to get all the prerequists for that package like the SLICOT python package. I do believe that it will ultimately serve you well. |
Python interp1d vs. UnivariateSpline I'm trying to port some MatLab code over to Scipy, and I've tried two different functions from scipy.interpolate, interp1d and UnivariateSpline. The interp1d results match the interp1d MatLab function, but the UnivariateSpline numbers come out different - and in some cases very different. f = interp1d(row1,row2,kind='cubic',bounds_error=False,fill_value=numpy.max(row2))return f(interp)f = UnivariateSpline(row1,row2,k=3,s=0)return f(interp)Could anyone offer any insight? My x vals aren't equally spaced, although I'm not sure why that would matter. | I just ran into the same issue.Short answerUse InterpolatedUnivariateSpline instead:f = InterpolatedUnivariateSpline(row1, row2)return f(interp)Long answerUnivariateSpline is a 'one-dimensional smoothing spline fit to a given set of data points' whereas InterpolatedUnivariateSpline is a 'one-dimensional interpolating spline for a given set of data points'. The former smoothes the data whereas the latter is a more conventional interpolation method and reproduces the results expected from interp1d. The figure below illustrates the difference.The code to reproduce the figure is shown below.import scipy.interpolate as ip#Define independent variablesparse = linspace(0, 2 * pi, num = 20)dense = linspace(0, 2 * pi, num = 200)#Define function and calculate dependent variablef = lambda x: sin(x) + 2fsparse = f(sparse)fdense = f(dense)ax = subplot(2, 1, 1)#Plot the sparse samples and the true functionplot(sparse, fsparse, label = 'Sparse samples', linestyle = 'None', marker = 'o')plot(dense, fdense, label = 'True function')#Plot the different interpolation resultsinterpolate = ip.InterpolatedUnivariateSpline(sparse, fsparse)plot(dense, interpolate(dense), label = 'InterpolatedUnivariateSpline', linewidth = 2)smoothing = ip.UnivariateSpline(sparse, fsparse)plot(dense, smoothing(dense), label = 'UnivariateSpline', color = 'k', linewidth = 2)ip1d = ip.interp1d(sparse, fsparse, kind = 'cubic')plot(dense, ip1d(dense), label = 'interp1d')ylim(.9, 3.3)legend(loc = 'upper right', frameon = False)ylabel('f(x)')#Plot the fractional errorsubplot(2, 1, 2, sharex = ax)plot(dense, smoothing(dense) / fdense - 1, label = 'UnivariateSpline')plot(dense, interpolate(dense) / fdense - 1, label = 'InterpolatedUnivariateSpline')plot(dense, ip1d(dense) / fdense - 1, label = 'interp1d')ylabel('Fractional error')xlabel('x')ylim(-.1,.15)legend(loc = 'upper left', frameon = False)tight_layout() |
Python - issues with using a for loop to select data from two separate time ranges of a dataframe column I'm trying to filter data in a pandas dataframe by two time ranges in which the data were calibrated. The dataframe column I want to filter is headered "CH4_ppm".I try and iterate through calibration start and end times using a for loop to select only the data within these two time ranges, but only the last time range is filtered in the output 'cal_key' column when the code is run (between cal_start_2 and cal_end_2).How do I modify my for loop to filter the data by both time ranges? Any help on this would be greatly appreciated.import numpy as npimport pandas as pddf = pd.read_csv(data_file, index_col=0, parse_dates=True, header=0)df.index = pd.to_datetime(mgga.index)cal_start_1 = '2021-03-03 12:47:00'cal_end_1 = '2021-03-03 12:51:00'cal_start_2 = '2021-03-03 12:57:00'cal_end_2 = '2021-03-03 13:01:00'cal_start_all = [cal_start_1, cal_start_2]cal_end_all = [cal_end_1, cal_end_2]for i, j in zip(cal_start_all, cal_end_all): i = pd.to_datetime(m) j = pd.to_datetime(n) df["cal_key"] = df["CH4_ppm"].loc[m:n] df["cal_key"].loc[df["cal_key"].isnull()] = 0 # converts NaNs to zero | You code doesn't work for several reasons. Firstdf["cal_key"].loc[df["cal_key"].isnull()] = 0is index chaining and is unlikely to work. It should have been:df.loc[df["cal_key"].isnull(),"cal_key"] = 0Even then, when you put that inside a for loopfor i, j in ...: df["cal_key"] = df["CH4_ppm"].loc[m:n] df["cal_key"].loc[df["cal_key"].isnull()] = 0 # converts NaNs to zeroThis would override the cal_key column every single iteration. You should only update a small part only.Try:# initialize the cal_keydf["cal_key"] = 0 for i, j in zip(cal_start_all, cal_end_all): # you can use strings to slice datetime index # pandas handles the conversion for you df.loc[i:j, "cal_key"] = df["CH4_ppm"] |
Use integer for bin

Given this code

```python
df = pd.DataFrame({"num": [1, 2, 3, 4, 5, 6]})
bins = pd.IntervalIndex.from_tuples([(0, 2), (2, 3), (3, 6)])
df['bin'] = pd.cut(df.num, bins, labels=False)
```

the result is

```
   num     bin
0    1  (0, 2]
1    2  (0, 2]
2    3  (2, 3]
3    4  (3, 6]
4    5  (3, 6]
5    6  (3, 6]
```

but I hope the result to be

```
   num  bin
0    1    1
1    2    1
2    3    2
3    4    3
4    5    3
5    6    3
```

i.e., use an integer to represent each bin range. How can I achieve this?

Answer: It turns out

```python
df['bin_num'] = df['bin'].cat.codes + 1
```

will do.
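As an alternative sketch on the same data: passing plain bin edges instead of an IntervalIndex lets labels=False return integer codes directly:

```python
import pandas as pd

df = pd.DataFrame({"num": [1, 2, 3, 4, 5, 6]})
# With scalar edges, labels=False yields the 0-based bin codes 0, 0, 1, 2, 2, 2
df['bin_num'] = pd.cut(df.num, bins=[0, 2, 3, 6], labels=False) + 1
```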
Summing lists in dataframe cells

I have dataframes containing list cells:

```python
a = pd.DataFrame([[[1,0,1],[0,1,0]],[[0,0,1],[0,1,0]],[[0,0,1],[0,1,0]]])
b = pd.DataFrame([[[0,0,1],[0,1,0]],[[0,0,1],[0,1,0]],[[0,0,1],[0,1,0]]])
c = pd.DataFrame([[[1,0,1],[0,0,0]],[[1,0,0],[0,1,0]],[[1,0,1],[0,0,0]]])
```

How do I add them position-wise? e.g. [1,0,1] + [0,0,1] = [1,0,2]. All I have done so far will sum the list to one number.

Answer: Change the cells to numpy arrays:

```python
out = a.applymap(np.array) + b.applymap(np.array)

Out[135]:
           0          1
0  [1, 0, 2]  [0, 2, 0]
1  [0, 0, 2]  [0, 2, 0]
2  [0, 0, 2]  [0, 2, 0]
```
Reassigning the calculated group-by column to the original dataframe Hopefully I am asking this question the right way - thank you to the person who pointed out my mistakes earlier.I have a dataframe (dft) of stock codes with prices, for e.g.: Date Open High Low Close Volume AdjClose StockCode37563 2020-08-03 4.63 4.63 4.50 4.51 9602 4.51 ABA38002 2020-08-04 4.52 4.54 4.51 4.51 4254 4.51 ABA38374 2020-08-05 4.52 4.52 4.40 4.40 27307 4.40 ABA38568 2020-08-06 4.41 4.58 4.41 4.58 3412 4.58 ABA38772 2020-08-07 4.57 4.57 4.45 4.50 16260 4.50 ABA... ... ... ... ... ... ... ... ...77232 2021-02-15 11.06 12.76 11.06 12.66 27607862 12.66 Z1P77632 2021-02-16 13.02 14.53 12.97 13.92 42833861 13.92 Z1P77929 2021-02-17 13.65 13.66 11.27 11.97 29813500 11.97 Z1P78103 2021-02-18 11.43 12.37 10.51 11.70 20602054 11.70 Z1P78424 2021-02-19 12.10 12.59 11.87 12.35 14345435 12.35 Z1P39741 rows × 8 columnsI am trying to calculate the technical indicators by stock code, which I have done here for MA_14 (Moving average, 14 time periods), i.e. split into each Stock Code and then apply the moving average calculation:dft.groupby(["StockCode"]).apply(lambda x: (ta.MA(x["Close"],timeperiod=14, matype=0)))Output:StockCode ABA 37563 NaN 38002 NaN 38374 NaN 38568 NaN 38772 NaN ... Z1P 77232 9.058571 77632 9.498571 77929 9.832143 78103 10.148571 78424 10.484286Length: 39741, dtype: float64The output is as per what I expected, whereby it would give back the same number of rows as the original dataframe dft.Now I am trying to assign this MA_14 back to the original dataframe (dft).What I have tried:transform - but got the error message belowdft.groupby(["StockCode"]).transform.apply(lambda x: (ta.MA(x["Close"],timeperiod=14, matype=0)))AttributeError: 'function' object has no attribute 'apply'Tried to directly do a row-to-row join using concatgrouped=dft.groupby(["StockCode"]).transform.apply(lambda x: (ta.MA(x["Close"],timeperiod=14, matype=0)))concatenated = pd.concat([dft, grouped], axis=1)which somehow gives about double the number of rows (dft = 39741 rows, concatenated = 79482) - is it something to do with indexing? Date Open High Low Close Volume AdjClose StockCode 037563 2020-08-03 4.63 4.63 4.50 4.51 9602.0 4.51 ABA NaN38002 2020-08-04 4.52 4.54 4.51 4.51 4254.0 4.51 ABA NaN38374 2020-08-05 4.52 4.52 4.40 4.40 27307.0 4.40 ABA NaN38568 2020-08-06 4.41 4.58 4.41 4.58 3412.0 4.58 ABA NaN38772 2020-08-07 4.57 4.57 4.45 4.50 16260.0 4.50 ABA NaN... ... ... ... ... ... ... ... ... ...(Z1P, 77232) NaN NaN NaN NaN NaN NaN NaN NaN 9.058571(Z1P, 77632) NaN NaN NaN NaN NaN NaN NaN NaN 9.498571(Z1P, 77929) NaN NaN NaN NaN NaN NaN NaN NaN 9.832143(Z1P, 78103) NaN NaN NaN NaN NaN NaN NaN NaN 10.148571(Z1P, 78424) NaN NaN NaN NaN NaN NaN NaN NaN 10.48428679482 rows × 9 columnsTried simply assigning back to dft as such but also got an error message:dft['test'] = (dft.groupby(["StockCode"]).apply(lambda x: (ta.MA(x["Close"],timeperiod=14, matype=0))))TypeError: incompatible index of inserted column with frame indexHow can I align the index of both 'grouped' and 'dft' so that I can perform the join correctly?I also thought of joining using the StockCode, but that would not be correct because it would then result in each row from DFT being joined to 70K rows in grouped. Is there a way to keep both StockCode and Date in 'grouped'?Thanks in advance for any suggestions on how to do this. 
I have already searched through some threads on StackOverFlow but can't seem to find a solution that applies to this (perhaps not the right keywords being used), please do point me to the relevant posts if any. | You can make the assignment to a new column within each group, as follows. The main bit is .apply(lambda g: g.assign(...)) that assigns the right values for each group g. Note I do not have ta.MA package so I am using the standard Pandas rolling functionality, I also set min_periods = 1 so we do not get NaNs in this example.(df.reset_index() .groupby("StockCode",as_index = False) .apply(lambda g : g.assign(test = g['Close'].rolling(window = 14, min_periods = 1).mean())) .set_index('index'))you get index Date Open High Low Close Volume AdjClose StockCode test------- ---------- ------ ------ ----- ------- -------- ---------- ----------- -------- 37563 2020-08-03 4.63 4.63 4.5 4.51 9602 4.51 ABA 4.51 38002 2020-08-04 4.52 4.54 4.51 4.51 4254 4.51 ABA 4.51 38374 2020-08-05 4.52 4.52 4.4 4.4 27307 4.4 ABA 4.47333 38568 2020-08-06 4.41 4.58 4.41 4.58 3412 4.58 ABA 4.5 38772 2020-08-07 4.57 4.57 4.45 4.5 16260 4.5 ABA 4.5 77232 2021-02-15 11.06 12.76 11.06 12.66 27607862 12.66 Z1P 12.66 77632 2021-02-16 13.02 14.53 12.97 13.92 42833861 13.92 Z1P 13.29 77929 2021-02-17 13.65 13.66 11.27 11.97 29813500 11.97 Z1P 12.85 78103 2021-02-18 11.43 12.37 10.51 11.7 20602054 11.7 Z1P 12.5625 78424 2021-02-19 12.1 12.59 11.87 12.35 14345435 12.35 Z1P 12.52 |
Group by mean for element with value >0

```python
df = pd.DataFrame({"x": [1, 2, 3, 0], "y": [1, 1, 1, 1]})
df.groupby("y").agg(x_sum=("x", np.mean))
```

This code gives the average of x; the output is 1.5 ((1+2+3+0)/4 = 1.5), but I want the average of x over the values larger than 0, so the output should be (1+2+3)/3 = 2. How should I address this?

Answer: Replace values in the x column that are not greater than 0 with NaN:

```python
df.x = df.x.where(df.x.gt(0))
# alternative
# df.x = df.x.mask(df.x.le(0))
print(df)
     x  y
0  1.0  1
1  2.0  1
2  3.0  1
3  NaN  1

df1 = df.groupby("y").agg(x_sum=("x", np.mean))
print(df1)
   x_sum
y
1    2.0
```
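A minimal alternative sketch on the same data: filter the rows first, then group, which avoids overwriting the original column. Note that a group whose values are all non-positive disappears entirely with this approach, unlike the NaN-based one.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"x": [1, 2, 3, 0], "y": [1, 1, 1, 1]})
# Drop non-positive rows before aggregating
result = df[df.x > 0].groupby("y").agg(x_mean=("x", np.mean))
print(result)  # x_mean is 2.0 for y == 1
```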
Combining two dataframes so that the values in one dataframe become headers in the other

My first data frame d1 is something like this:

```
   num  value
0    1    229
1    2    203
2    3    244
```

The second one, d2:

```
   num   person  cash
0    1  person1    29
1    1  person2    81
2    2  person1    17
3    2  person2    75
4    3  person1    62
5    3  person3    55
```

I would like to combine them based on num in a way that the entries of person become headers of new columns in d1, and the new columns are filled with the cash values from d2:

```
   num  value  person1  person2  person3
0    1    229       29       81        0
1    2    203       17       75        0
2    3    244       62        0       55
```

Is it some kind of combination between merge() and unstack()? The example seems trivial; most likely I was not able to describe it sufficiently well when googling for the answer.

Answer: Try pivoting df2 and merging:

```python
df1.merge(df2.pivot('num', 'person', 'cash'), on='num')
```

Output:

```
   num  value  person1  person2
0    1    229       29       81
1    2    203       17       75
2    3    244       62       55
```

Edit: for the updated data, same idea, but use set_index().unstack() instead of pivot. This makes it easier to fill the missing values with 0.

```python
df1.merge(df2.set_index(['num', 'person'])['cash']
             .unstack(fill_value=0),
          on='num')
```

Output:

```
   num  value  person1  person2  person3
0    1    229       29       81        0
1    2    203       17       75        0
2    3    244       62        0       55
```
iterate a certain column to extract its values to a new column The 'POLYLINE' column includes all GPS points that a car travels at a certain time (x-axis, y-axis). I need to draw the points in a scatter plot.The following are some values for POLYLINE column:-I want to clean the data first and add two new columns for x-axis and y-axis derived from POLYLINE in order to draw the scatter plot.Replaced all:-The new 'a' column only has numbers in it. So the x-axis labels are the first number, third number, and all the way till the end. Accordingly, the y-axis labels are the second number, fourth number, and all the way till the end.I am thinking to create a list iterating through the 'a' column to append corresponding values with indexes but it keeps giving me errors.I have tried many ways but none of them helped. Thank you so much if you could help me solve this problem or give me some ideas! | I'd rearrange the original data as the following:df = pd.DataFrame.from_dict({ (i, j): {'x': x, 'y': y} for i, P in taxi.POLYLINE.iteritems() for j, (x, y) in enumerate(P)}, 'index').rename_axis(['taxi', 'time'])This is just an idea. Take it if you want it. |
Fetching the dataset for convolutional neural network ( CNN ) with TensorFlow 2.0 (python 3) I understand how fetch dataset from public TensorFlow Datasets (for example "mnist')dataset = tfds.load( 'horses_or_humans' , split=tfds.Split.TRAIN )How fetch dataset for my image dataset ? | your question is somewhat inaudible, you can search in tensorflow datasets for get any database you want to use. TensorFlow Datasets is a collection of datasets ready to use, with TensorFlow or other Python ML frameworks, such as Jax. All datasets are exposed as tf.data.Datasets , enabling easy-to-use and high-performance input pipelines. To get started see the guide and our list of datasets. See more and List of available datasets . |
Iterate over single row in pandas I'm isolating a subset from a dataframe, and trying to convert the headers into values. Here is the subset I'm working with.I'm trying to convert the headers back into data, and remove the mV tag, but when I convert the header back into a row, Pandas isn't letting me iterate over it. How can I remove the "mV_" text and convert the values into floats? Here's what I've tried so far.def scatterer(df): df=df.reset_index(drop=True) df=df.drop(['Wavelength'], axis=1) df = df.columns.to_frame().T.append(df, ignore_index=True) df.columns = range(len(df.columns)) print(df.head(1)) for i in df.head(1): i=i.replace("mV_", "") i=float(i)]This gives the error"AttributeError: 'int' object has no attribute 'replace'" | here is your solution:-write this code outside of function:-df=df.reset_index(drop=True)df=df.drop(['Wavelength'], axis=1)df = df.columns.to_frame().T.append(df, ignore_index=True)df.columns = range(len(df.columns))Now just use apply() method:-df.loc[0]=df.loc[0].apply(lambda x:x.replace('mV_',''))Now Just use astype() method to convert them to floatdf.loc[0].astype(float)Now if you print df you will get your desired output |
How to filter hours in Pandas Dataframe If I've a pandas dataframe and I'd like to filter certain hours of every day, for example all data between 10:00 and 16:00 time open high low close tick_volume spread real_volume0 2021-02-23 15:25:00 114990.0 115235.0 114980.0 115185.0 55269 5 2355551 2021-02-23 15:30:00 115180.0 115215.0 115045.0 115135.0 31642 5 1169142 2021-02-23 15:35:00 115135.0 115240.0 115055.0 115220.0 29381 5 1165163 2021-02-23 15:40:00 115220.0 115300.0 115030.0 115060.0 46740 5 1847034 2021-02-23 15:45:00 115055.0 115075.0 114785.0 114885.0 48185 5 2002415 2021-03-02 15:40:00 111680.0 111895.0 111580.0 111825.0 38471 5 1447356 2021-03-02 16:15:00 111820.0 112500.0 111750.0 112270.0 71153 5 278122How to do it? | This should do the trick:import pandas as pddf = pd.read_excel(path)df['time'] = pd.to_datetime(df['time']) #convert column to datetime if not already in that formatdf.set_index(['time'], inplace=True) #temporarily put time column into indexdf = df.between_time('10:00','16:00') #filter between timesdf = df.reset_index() #reset the index to make time a column again |
Python+Pandas; How to proper merge a dictionary of lists of dataframes and save to xlsx or csv as single table I'm going to scrape a database which was placed in a public web-site in most user-unfriendly way - as a table with thousands of pages. Each page structure is identical and URLs differ only by page number.I tried several options with bf4 and pandas and ended up with following code:import pandas as pdimport sslssl._create_default_https_context = ssl._create_unverified_contexthdr = {'User-Agent': 'Chrome/70.0.3538.110'}table_dfs = {}for page_number in range(5): http = "https://www._SomeLongURL_&page={}".format(page_number + 1) table_dfs[page_number] = pd.read_html(http)print(table_dfs)This code successfully creates dictionary of lists of DataFrames. 5 items as an example.Print gives table from each page as a dict element, so code seams to be working as intended.I also planned to implement sleep elements to lower server load, when I will go for full 1k pages.But now I'm facing two issues:It gives each table in a shortened version. Each table on website have 200 rows, but code output shows only first and last 5 rows of each table. Maybe after proper merging as saving to a file, it will have all the rows?Ultimately I need to get 1 huge table which combines all the smaller ones, save it to a file (xlsx, csv etc.) for further processing. I tried merging, concatenating, converting something, but really lucking some knowledge here as I'm new to Python.Please help me to finalize this code. How should I merge everything in a single huge table?Update 1.To append all the individual DataFrames, I tried to extract each one and then iterate, but print gives only one DF:final_df = table_dfs[0].__getitem__(0)for page_number in range(1, 5): temp_df = table_dfs[page_number].__getitem__(0) final_df.append(temp_df, ignore_index=True)print(final_df)I think we are close to solution, but I made a mistake somewhere. Please take a look on this code part above.Update 2. SOLVEDInstead of append, I tried to use pd.concat and it's working. Here is my final code:import pandas as pdimport sslssl._create_default_https_context = ssl._create_unverified_contexthdr = {'User-Agent': 'Chrome/70.0.3538.110'}table_dfs = {}for page_number in range(5): http = "https://www._SomeLongURL_&page={}".format(page_number + 1) table_dfs[page_number] = pd.read_html(http)#pd.set_option('display.max_rows', None)final_df = table_dfs[0].__getitem__(0)for page_number in range(1, 5): temp_df = table_dfs[page_number].__getitem__(0) final_df = pd.concat([final_df, temp_df])print(final_df)final_df.to_excel("All_pages.xlsx") | pandas by default prints dataframes only partly. try setting pd.set_option('display.max_rows', None) before the printing the dataframe.try to iterate through each df of the list and appendfinal_df = table_dfs[0]for page_number in range(1,5): final_df.append(table_dfs[page_number], ignore_index=True) |
How to pass tensor placeholder in for loop range?

I need to set the range of a for loop according to the input in my tensorflow graph:

```python
X = tf.placeholder(tf.int32, shape=[3, None])
videos_timesteps_placeholder = tf.placeholder(tf.int32, shape=[None])
...
for v_ind in range(batch_size):
    start = timesteps_placeholder[v_ind]
    end = timesteps_placeholder[v_ind + 1]
    for t in range(start, end):
        ...
```

But I get the error: 'Tensor' object cannot be interpreted as an integer. What can I do instead?

Answer: Replace range with tf.range. Example in tensorflow 2.x:

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

@tf.function
def loop_tensor(start, end):
    for t in tf.range(start, end):
        print(t)

X = tf.compat.v1.placeholder(tf.int32, shape=[3, None])
videos_timesteps_placeholder = tf.compat.v1.placeholder(tf.int32, shape=[None])

for v_ind in range(3):
    start = videos_timesteps_placeholder[v_ind]
    end = videos_timesteps_placeholder[v_ind + 1]
    loop_tensor(start, end)
```
How to replace certain elements of a NumPy array via an index array

I have a numpy array a in which I would like to replace some elements. I have the values of the new elements in a tuple/numpy array and the indexes of the elements of a that need to be replaced in another tuple/numpy array. Below is an example of using plain Python to do what I want. How do I do this efficiently in NumPy?

Example script:

```python
a = np.arange(10)
print(f'a = {a}')
newvalues = (10, 20, 35)
indexes = (2, 4, 6)
for n, i in enumerate(indexes):
    a[i] = newvalues[n]
print(f'a = {a}')
```

Output:

```
a = array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
a = array([ 0,  1, 10,  3, 20,  5, 35,  7,  8,  9])
```

I tried a[indexes] = newvalues but got IndexError: too many indices for array: array is 1-dimensional, but 3 were indexed.

Answer: The list of indices indicating which elements you want to replace should be a Python list (or similar type), not a tuple. Different items in the selection tuple indicate that they should be selected from different axis dimensions. Therefore, a[(2, 4, 6)] is the same as a[2, 4, 6], which is interpreted as the value at index 2 in the first dimension, index 4 in the second dimension, and index 6 in the third dimension.

The following code works correctly:

```python
indexes = [2, 4, 6]
a[indexes] = newvalues
```

See also the page on Indexing from the numpy documentation, specifically the second 'Note' block in the introduction as well as the first 'Warning' under Advanced Indexing:

> In Python, x[(exp1, exp2, ..., expN)] is equivalent to x[exp1, exp2, ..., expN]; the latter is just syntactic sugar for the former.

> The definition of advanced indexing means that x[(1,2,3),] is fundamentally different than x[(1,2,3)]. The latter is equivalent to x[1,2,3] which will trigger basic selection while the former will trigger advanced indexing. Be sure to understand why this occurs.
Title words in a column except certain words

How could I title-case all words in a column except the ones in the list keep?

```python
keep = ['for', 'any', 'a', 'vs']
df.col
0               1. The start for one
1               2. Today's world any
2    3. Today's world vs. yesterday.
```

Expected output:

```
   number                         title
0       1             The Start for One
1       2            Today's World any
2       3  Today's World vs. Yesterday.
```

I tried

```python
df['col'] = df.col.str.title().mask(~clean['col'].isin(keep))
```

Answer: Here is one way of doing it with str.replace, passing a replacement function:

```python
def replace(match):
    word = match.group(1)
    if word not in keep:
        return word.title()
    return word

df['title'] = df['title'].str.replace(r'(\w+)', replace)

   number                         title
0       1             The Start for One
1       2            Today'S World any
2       3  Today'S World vs. Yesterday.
```
Is there a Python package for plotting a spike map A spike map (as shown in the image below, implemented with D3.js) is a method for displaying differences in the magnitude of a certain discrete, abruptly changing phenomenon such as counts of people.Is there a package I could use (or example code I could follow) to create a static spike map, similar to the map shown above, in Python? e.g. Matplotlib | You could try with a Ridge Plot. It's not exactly the same, but maybe it can work for you. The implementation in seaborn looks like this:import numpy as npimport pandas as pdimport seaborn as snsimport matplotlib.pyplot as pltsns.set_theme(style="white", rc={"axes.facecolor": (0, 0, 0, 0)})# Create the datars = np.random.RandomState(1979)x = rs.randn(500)g = np.tile(list("ABCDEFGHIJ"), 50)df = pd.DataFrame(dict(x=x, g=g))m = df.g.map(ord)df["x"] += m# Initialize the FacetGrid objectpal = sns.cubehelix_palette(10, rot=-.25, light=.7)g = sns.FacetGrid(df, row="g", hue="g", aspect=15, height=.5, palette=pal)# Draw the densities in a few stepsg.map(sns.kdeplot, "x", bw_adjust=.5, clip_on=False, fill=True, alpha=1, linewidth=1.5)g.map(sns.kdeplot, "x", clip_on=False, color="w", lw=2, bw_adjust=.5)g.map(plt.axhline, y=0, lw=2, clip_on=False)# Define and use a simple function to label the plot in axes coordinatesdef label(x, color, label): ax = plt.gca() ax.text(0, .2, label, fontweight="bold", color=color, ha="left", va="center", transform=ax.transAxes)g.map(label, "x")# Set the subplots to overlapg.fig.subplots_adjust(hspace=-.25)# Remove axes details that don't play well with overlapg.set_titles("")g.set(yticks=[])g.despine(bottom=True, left=True)plt.show()And creates the following graph |
df.column.any() I am getting a really strange boolean result from my pandas Dataframe when running a query to see if any values in a particular column are less than 1. My df looks as such with columns marketcap and assets: marketcap assets0 11730364.0 36675000.01 12288758.0 36838000.02 13033591.0 37314000.03 16235899.0 39775000.04 14888920.0 40114000.05 14237392.0 38979000.06 13474342.0 38166000.07 12562067.0 45970000.08 13896045.0 45619000.09 15347038.0 46759000.010 14044865.0 46744000.011 14361107.0 49749000.012 14317742.0 49425000.013 17608963.0 49592000.014 19412627.0 49624000.015 26690171.0 51732000.016 27470803.0 53220000.017 27674325.0 52500000.018 37433151.0 53103000.019 53900763.0 53811000.020 58714659.0 54113000.021 47562777.0 55545000.022 51949184.0 54622000.023 40667196.0 56321000.024 35314293.0 56854000.025 39607768.0 56221000.026 44291558.0 56401000.027 45258054.0 59492000.028 45072190.0 60893000.029 56131139.0 60376000.030 45072190.0 60509000.031 43852174.0 67544000.032 44607528.0 67333000.033 51205725.0 66435000.034 52042116.0 67265000.035 48083198.0 70056000.036 43083437.0 68674000.037 42748881.0 67977000.038 39496249.0 68755000.039 41985349.0 102904000.0Clearly all the values in the column marketcap are well above 1 yet for the following code:df.marketcap.any() <= 1It results a result of True. Could someone explain this to me as I can't understand why this is True and what it thinks is less than 1.Thanks | why this is True and what it thinks is less than 1.You are doing:df.marketcap.any() <= 1df.marketcap.any() does evaluate to True as you have one or more non-zero elements in marketcap, so comparison isTrue <= 1which does hold True as in python when True and False are used for arithmetic it does have same effect as using 1 and 0 respectively. Note that their usage is not limited to comparisons - for example you can do True+True+True and you will get 3. |
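A short sketch of the check the question was presumably after: compare element-wise first, then reduce with any(), shown here on a small stand-in for the marketcap column above:

```python
import pandas as pd

df = pd.DataFrame({'marketcap': [11730364.0, 12288758.0, 13033591.0]})
# True only if at least one market cap value is at or below 1
print((df['marketcap'] <= 1).any())   # False for this data
```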
What is the name of this image similarity/ distance based metric?

I used the following code to calculate the similarity between images 1 and 2 (i1 and i2), where 1 = exactly similar and 0 = very different. I'd like to know what method this algorithm is using (i.e. Euclidean distance or ...?). Thank you.

```python
import math

i1 = all_images_saved[0][1]
i2 = all_images_saved[0][2]

i1_norm = i1 / np.sqrt(np.sum(i1**2))
i2_norm = i2 / np.sqrt(np.sum(i2**2))

np.sum(i1_norm * i2_norm)
```

Answer: Looks like cosine similarity. You can check that it gives the same results as:

```python
from scipy import spatial

cosine_distance = spatial.distance.cosine(i1.flatten(), i2.flatten())
cosine_similarity = 1 - cosine_distance
```
KFServing pod "error: container storage-initializer is not valid" I am new to KFServing and Kubeflow.I was following https://github.com/kubeflow/kfserving/tree/master/docs/samples/v1alpha2/tensorflow to deploy a simple inference service.However, when looking at the logs, I am unable to find the container storage-initializer. The only containers my predict service pod has are kfserving and queue-proxy.I am currently on Kubeflow 1.2 and Kubernetes 1.17 on IBM Cloud.Error Message Image | storage-initializer is an init container, so if you describe the pod you won't find it in the containers section of pod spec but in the initContainers section.$ kubectl get pod flowers-sample-predictor-default-00002-deployment-58bb9557sf7g2 -o json | jq .status.initContainerStatuses[ { "containerID": "docker://e40e5f86401b3715118b873fec4ae6c3ef57765ffbb5c9ab48757234c4f53b6f", "image": "gcr.io/kfserving/storage-initializer:v0.5.0", "imageID": "docker-pullable://gcr.io/kfserving/storage-initializer@sha256:1d396c0c50892f5562a1c24d925691ec786e5d48e08200f3f9bb17bb48da40ae", "lastState": {}, "name": "storage-initializer", "ready": true, "restartCount": 0, "state": { "terminated": { "containerID": "docker://e40e5f86401b3715118b873fec4ae6c3ef57765ffbb5c9ab48757234c4f53b6f", "exitCode": 0, "finishedAt": "2021-02-27T20:13:25Z", "reason": "Completed", "startedAt": "2021-02-27T20:13:11Z" } } }]I'm not familiar with the model label you are using, can you retry by using the app label or the pod name directly?$ kubectl logs -l app=flowers-sample-predictor-default-00002 -c storage-initializer[I 210227 20:13:12 initializer-entrypoint:13] Initializing, args: src_uri [gs://kfserving-samples/models/tensorflow/flowers] dest_path[ [/mnt/models][I 210227 20:13:12 storage:43] Copying contents of gs://kfserving-samples/models/tensorflow/flowers to local[W 210227 20:13:15 _metadata:104] Compute Engine Metadata server unavailable onattempt 1 of 3. Reason: timed out[W 210227 20:13:15 _metadata:104] Compute Engine Metadata server unavailable onattempt 2 of 3. Reason: [Errno 113] No route to host[W 210227 20:13:18 _metadata:104] Compute Engine Metadata server unavailable onattempt 3 of 3. Reason: timed out[W 210227 20:13:18 _default:250] Authentication failed using Compute Engine authentication due to unavailable metadata server.[I 210227 20:13:19 storage:127] Downloading: /mnt/models/0001/saved_model.pb[I 210227 20:13:19 storage:127] Downloading: /mnt/models/0001/variables/variables.data-00000-of-00001[I 210227 20:13:25 storage:127] Downloading: /mnt/models/0001/variables/variables.index[I 210227 20:13:25 storage:76] Successfully copied gs://kfserving-samples/models/tensorflow/flowers to /mnt/models |
Repeating blocks in numpy arrays I have an array that looks like this: A = [[1, 0, 0], [0, 1, 0], [0, 0, 1]] and from it, I'd like to create an array that looks like this: B = [[1, 1, 0, 0, 0, 0], [1, 1, 0, 0, 0, 0], [0, 0, 1, 1, 0, 0], [0, 0, 1, 1, 0, 0], [0, 0, 0, 0, 1, 1], [0, 0, 0, 0, 1, 1]] where every element of A gets repeated as an n x n square block. I'm sure there's a simple way of doing this -- can anybody think of something? | What you're looking for is a block matrix. See this documentation. For your specific application, each block would just be a constant (A[i][j]) times a square matrix of ones (np.ones((n, n))).
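One compact way to build that block structure is the Kronecker product; a sketch, assuming n is the side length of each block (not taken from the original answer):
import numpy as np
A = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
n = 2
B = np.kron(A, np.ones((n, n), dtype=A.dtype))  # every A[i, j] becomes an n x n block of that value
print(B)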
Pandas: returning last element of column value I created the following function to retrieve data from an internal incident management system:def get_issues(session, query): block_size = 50 block_num = 0 start = 0 all_issues = [] while True: issues = sesssion.search_issues(query, start, block_size, expand='changelog') if len(issues) == 0 # no more issues break start += len(issues) for issue in issues: all_issues.append(issue) issues = pd.DataFrame(issues) for issue in all_issues: changelog = issue.changelog for history in changelog.histories: for item in history.items: if item.field == 'status' and item.toString == 'Pending': groups = issue.fields.customfield_02219 d = { 'key' : issue.key, 'issue_type' : issue.fields.issuetype, 'creator' : issue.fields.creator, 'business' : issue.fields.customfield_082011, 'groups' : groups } fields = issue.fields issues = issues.append(d, ignore_index=True) return issuesI use this function to create a dataframe df using:df = get_issues(the_session, the_query)The resulting dataset looks similar to the following: key issue_type creator business groups0 MED-184 incident Smith, J Mercedes [Finance, Accounting, Billing]1 MED-186 incident Jones, M Mercedes [Finance, Accounting]2 MED-187 incident Williams, P Mercedes [Accounting, Sales, Executive, Tax]3 MED-188 incident Smith, J BMW [Sales, Executive, Tax, Finance]When I call dtypes on df, I get:key objectissue_type objectcreator objectbusiness objectgroups objectI would like to get only the last element of the groups column, such that the dataframe looks like: key issue_type creator business groups0 MED-184 incident Smith, J Mercedes Billing1 MED-186 incident Jones, M Mercedes Accounting2 MED-187 incident Williams, P Mercedes Tax3 MED-188 incident Smith, J BMW FinanceI tried to amend the function above, as follows:groups = issue.fields.customfield_02219[-1]But, I get an error that it's not possible to index into that field:TypeError: 'NoneType' object is not subscriptableI also tried to create another column using:df['groups_new'] = df['groups']:[-1]But, this returns the original groups column with all elements.Does anyone have any ideas as to how to accomplish this?Thanks!########################################################UPDATEprint(df.info()) results in the following:<class 'pandas.core.frame.DataFrame'>RangeIndex 13 entries, 0 to 12Data columns (total 14 columns)# Column Non-Null Count Dtype--- ------ ------------- -----0 activity 7 non-null object1 approvals 8 non-null object2 business 13 non-null object3 created 13 non-null object4 creator 13 non-null object5 region_a 5 non-null object6 issue_type 13 non-null object7 key 13 non-null object8 materiality 13 non-null object9 region_b 5 non-null object10 resolution 2 non-null object11 resolution_time 1 non-null object12 target 13 non-null object13 region_b 5 non-null objecttypes: object(14)memory usage: 1.5+ KBNone | Here it is:df['new_group'] = df.apply(lambda x: x['groups'][-1], axis = 1)UPDATE: If you get an IndexError with this, it means that at least one one your lists in empty. 
You can try this:df['new_group'] = df.apply(lambda x: x['groups'][-1] if x['groups'] else None, axis = 1)EXAMPLE:df = pd.DataFrame({'key':[121,234,147], 'groups':[[111,222,333],[34,32],[]]})print(f'ORIGINAL DATAFRAME:\n{df}\n')df['new_group'] = df.apply(lambda x: x['groups'][-1] if x['groups'] else None, axis = 1)print(f'FINAL DATAFRAME:\n{df}')#ORIGINAL DATAFRAME: key groups0 121 [111, 222, 333]1 234 [34, 32]2 147 []FINAL DATAFRAME: key groups new_group0 121 [111, 222, 333] 333.01 234 [34, 32] 32.02 147 [] NaN |
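An alternative without apply, relying on the pandas .str accessor's positional indexing also working on list-valued columns (a sketch under that assumption, not from the original answer):
import pandas as pd
df = pd.DataFrame({'key': [121, 234, 147], 'groups': [[111, 222, 333], [34, 32], []]})
df['new_group'] = df['groups'].str[-1]   # last element of each list; empty lists become NaN
print(df)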
Python Pandas - add column on a specific row, add specific row from one dataframe to another Have being trying this desperately for 7 hours and still hasnt figured out a solution.So I have 2 Dataframes that I want to combine,using python pandas, with the below conditions:from the name in 'first table', add the remaining columns from the 'second table' with the same nameif name cannot be found in 'second table', leave it blankif name is present in 'second table' but not in 'first table', add a new row in the dataframe with 'number' and 'description' (WITHOUT THE NAME) in the 'second table'first table example: Name 0 Apple 1 Bear 2 Car 3 DogSecond table example: Name Number Description0 Apple 1 I am apple1 Bear 2 you are bear2 Dog 4 so are dogs3 Elephant 5 mooideal result: Name Number Description0 Apple 1.0 I am apple1 Bear 2.0 you are bear2 Car NaN NaN3 Dog 4.0 so are dogs4 NaN 5.0 mooI have tried merging but it would not work (both inner and outer ways of pandas merging would give incorrect results)Once I include more conditions then I would get lots of errors.I was able to get to this point (finding the name 'firsttable' row in 'secondtable')for a in firsttable['Name']: for b in secondtable['Name']: if a == b: row = pd.DataFrame(secondtable.loc[(secondtable['Name']== b)])but after that I was not able to add it into the dataframe. Thanks for the help! | Use outer join in DataFrame.merge and then set NaN (default value) in Series.where for not matched values by df1['Name'] tested by Series.isin:df = df1.merge(df2, on='Name', how='outer')df['Name'] = df['Name'].where(df['Name'].isin(df1['Name']))print (df) Name Number Description0 Apple 1.0 I am apple1 Bear 2.0 you are bear2 Car NaN NaN3 Dog 4.0 so are dogs4 NaN 5.0 moo |
Keras throwing error: ('Keyword argument not understood:', 'init') and ('Keyword argument not understood:', 'dim_ordering') I've built the following model with Keras from Tensorflow (version = 2.2.4-tf): model = tf.keras.Sequential() model.add(Convolution2D(24, 5, 5, padding='same', init='he_normal', input_shape=(target_Width, target_Height, 3), dim_ordering="tf")) model.add(Activation('relu')) model.add(GlobalAveragePooling2D()) model.add(Dense(18)) But somehow I'm getting the following error: ('Keyword argument not understood:', 'init') and ('Keyword argument not understood:', 'dim_ordering') | It seems that you are trying to use keras.layers.convolutional.Convolution2D instead of tf.keras.layers.Conv2D. If that is the case, use this instead (note that the init and dim_ordering arguments only exist in the old standalone Keras 1.x API): import keras from keras.models import Sequential model = Sequential() model.add(keras.layers.convolutional.Convolution2D(24, 5, 5, padding='same', init='he_normal', input_shape=(target_Width, target_Height, 3), dim_ordering="tf")) Or use Conv2D from tf.keras, which does not have the arguments init and dim_ordering (pass the kernel size as a tuple so the third positional argument is not interpreted as a stride): model = tf.keras.Sequential() model.add(Conv2D(24, (5, 5), padding='same', kernel_initializer='he_normal', input_shape=(target_Width, target_Height, 3))) |
Same weights, implementation but different results n Keras and Pytorch I have an encoder and a decoder model (monodepth2). I try convert them from Pytorch to Keras using Onnx2Keras, but :Encoder(ResNet-18) succeedsI build the decoder myself in Keras (with TF2.3), and copy the weights (numpy array, including weight and bias) for each layer from Pytorch to Keras, without any modification.But it turns out both Onnx2Keras-converted Encoder and self-built Decoder fails to reproduce the same results. The cross-comparison pictures are below, but I'd first introduce the code of Decoder.First the core Layer, all the conv2d layer (Conv3x3, ConvBlock) is based on this, but different dims or add an activation:# Conv3x3 (normal conv2d without BN nor activation)# There's also a ConvBlock, which is just "Conv3x3 + ELU activation", so I don't list it here.def TF_Conv3x3(input_channel, filter_num, pad_mode='reflect', activate_type=None): # Actually it's 'reflect, but I implement it with tf.pad() outside this padding = 'valid' # if TF_ConvBlock, then activate_type=='elu conv = tf.keras.layers.Conv2D(filters=filter_num, kernel_size=3, activation=activate_type, strides=1, padding=padding) return convThen the structure. Note that the definition is EXACTLY the same as the original code. I think it must be some details about the implementation.def DepthDecoder_keras(num_ch_enc=np.array([64, 64, 128, 256, 512]), channel_first=False, scales=range(4), num_output_channels=1): num_ch_dec = np.array([16, 32, 64, 128, 256]) convs = OrderedDict() for i in range(4, -1, -1): # upconv_0 num_ch_in = num_ch_enc[-1] if i == 4 else num_ch_dec[i + 1] num_ch_out = num_ch_dec[i] # convs[("upconv", i, 0)] = ConvBlock(num_ch_in, num_ch_out) convs[("upconv", i, 0)] = TF_ConvBlock(num_ch_in, num_ch_out, pad_mode='reflect') # upconv_1 num_ch_in = num_ch_dec[i] if i > 0: num_ch_in += num_ch_enc[i - 1] num_ch_out = num_ch_dec[i] convs[("upconv", i, 1)] = TF_ConvBlock(num_ch_in, num_ch_out, pad_mode='reflect') # Just Conv3x3 with ELU-activation for s in scales: convs[("dispconv", s)] = TF_Conv3x3(num_ch_dec[s], num_output_channels, pad_mode='reflect') """ Input_layer dims: (64, 96, 320), (64, 48, 160), (128, 24, 80), (256, 12, 40), (512, 6, 20) """ x0 = tf.keras.layers.Input(shape=(96, 320, 64)) # then define the the rest input layers input_features = [x0, x1, x2, x3, x4] """ # connect layers """ outputs = [] ch = 1 if channel_first else 3 x = input_features[-1] for i in range(4, -1, -1): x = tf.pad(x, paddings=[[0, 0], [1, 1], [1, 1], [0, 0]], mode='REFLECT') x = convs[("upconv", i, 0)](x) x = [tf.keras.layers.UpSampling2D()(x)] if i > 0: x += [input_features[i - 1]] x = tf.concat(x, ch) x = tf.pad(x, paddings=[[0, 0], [1, 1], [1, 1], [0, 0]], mode='REFLECT') x = convs[("upconv", i, 1)](x) x = TF_ReflectPad2D_1()(x) x = convs[("dispconv", 0)](x) disp0 = tf.math.sigmoid(x) """ build keras Model ([input0, ...], [output0, ...]) """ # decoder = tf.keras.Model(input_features, outputs) decoder = tf.keras.Model(input_features, disp0) return decoderThe cross-comparison is as follows... I would really appreciate it if anyone could offer some insights. Thanks!!!Original results:Original Encoder + Self-build Decoder:ONNX-converted Enc + Original Dec (Texture is good, but the contrast is not enough, the car should be very close, i.e. very bright color):ONNX-converted Enc + Self-built Dec: | Solved!It turns out there's indeed no problem with implementation (at least not significant ones). 
It was a problem with the weight copying. The original PyTorch weights have shape (H, W, 3, 3) (i.e. output channels, input channels, kernel height, kernel width), but the TF model requires (3, 3, W, H), so I had permuted them with [3, 2, 1, 0], overlooking that the (3, 3) kernel dimensions also have their own order. It should be weights.permute([2, 3, 1, 0]), and all is well! |
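A minimal sketch of that copy step (the layer names are illustrative and not taken from the monodepth2 code; torch_conv is assumed to be a torch.nn.Conv2d and keras_conv the matching tf.keras Conv2D layer):
w = torch_conv.weight.detach().cpu().numpy()           # (out_ch, in_ch, kH, kW)
b = torch_conv.bias.detach().cpu().numpy()             # (out_ch,)
keras_conv.set_weights([w.transpose(2, 3, 1, 0), b])   # -> (kH, kW, in_ch, out_ch)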
TensorFlow v1 on Colab, tf.contrib module not found Trying to train GPT-2 in Google Colab. The cells I'm running look like this:!git clone https://github.com/shawwn/gpt-2 -b tpu /content/gpt-2[...]%tensorflow_version 1.x!pip freeze | grep tensorflowmesh-tensorflow==0.1.12tensorflow==1.15.2tensorflow-datasets==4.0.1tensorflow-estimator==1.15.1tensorflow-gan==2.0.0tensorflow-gcs-config==2.4.0tensorflow-hub==0.11.0tensorflow-metadata==0.28.0tensorflow-probability==0.7.0%tensorflow_version 1.x!PYTHONPATH=src ./train.py --helpI get the following error message:TensorFlow 1.x selected.2021-03-14 14:22:11.118915: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0Traceback (most recent call last): File "./train.py", line 17, in <module> import model, sample, encoder File "/content/gpt-2/src/model.py", line 3, in <module> from tensorflow.contrib.training import HParamsModuleNotFoundError: No module named 'tensorflow.contrib'As far as I know, TensorFlow v1.15.2 should contain the contrib module. What am I doing wrong here?Note that I did not write this code, I'm using someone else's git repo. | Tensorflow 1.15.2 has contrib.In colab, make sure once the Tensorflow 1.x install restart runtime because colab has default 2.x version.I tried same code on colab it worked. |
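A quick sanity check in a fresh Colab cell (run the magic before anything imports TensorFlow, and restart the runtime if the default 2.x version was already loaded; a sketch, not from the original answer):
%tensorflow_version 1.x
import tensorflow as tf
print(tf.__version__)                               # should print 1.15.x
from tensorflow.contrib.training import HParams     # resolves on 1.x, fails on 2.x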
Best way to reassemble a pandas data frame Need to reassemble a data frame that is the result of a group by operation. It is assumed to be ordered. Major Minor RelType SomeNulls0 0.0 0.0 1 1.01 NaN NaN 2 NaN2 1.0 1.0 1 NaN3 NaN NaN 2 NaN4 NaN NaN 3 NaN5 2.0 3.0 1 NaN6 NaN NaN 2 2.0And looking for something like thisMajor Minor RelType SomeNulls0 0.0 0.0 1 1.01 0.0 0.0 2 NaN2 1.0 1.0 1 NaN3 1.0 1.0 2 NaN4 1.0 1.0 3 NaN5 2.0 3.0 1 NaN6 2.0 3.0 2 2.0Wondering if there is an elegant way to resolve it.import pandas as pdimport numpy as npdef refill_frame(df, cols): while df[cols].isnull().values.any(): for col in cols: if col in list(df): #print (col) df[col]= np.where(df[col].isnull(), df[col].shift(1), df[col]) return dfdf = pd.DataFrame({'Major': [0, None, 1, None, None,2, None], 'Minor': [0, None, 1, None, None,3, None], 'RelType': [1, 2, 1, 2,3, 1,2], 'SomeNulls': [1, None,None, None,None,None,2] })print (df)cols2fill =['Major', 'Minor']df = refill_frame(df, cols2fill) print (df) | If I understand the question correctly, You could do a transform on the specific columns:df.loc[:, ['Major', 'Minor']] = df.loc[:, ['Major', 'Minor']].transform('ffill') Major Minor RelType SomeNulls0 0.0 0.0 1 1.01 0.0 0.0 2 NaN2 1.0 1.0 1 NaN3 1.0 1.0 2 NaN4 1.0 1.0 3 NaN5 2.0 3.0 1 NaN6 2.0 3.0 2 2.0You could also use the fill_direction function from pyjanitor:# pip install pyjanitorimport janitordf.fill_direction({"Major":"down", "Minor":"down"}) Major Minor RelType SomeNulls0 0.0 0.0 1 1.01 0.0 0.0 2 NaN2 1.0 1.0 1 NaN3 1.0 1.0 2 NaN4 1.0 1.0 3 NaN5 2.0 3.0 1 NaN6 2.0 3.0 2 2.0 |
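If plain forward-filling is all that is needed, a shorter equivalent of the approach above (a sketch using the question's own data) is:
import pandas as pd
df = pd.DataFrame({'Major': [0, None, 1, None, None, 2, None],
                   'Minor': [0, None, 1, None, None, 3, None],
                   'RelType': [1, 2, 1, 2, 3, 1, 2],
                   'SomeNulls': [1, None, None, None, None, None, 2]})
df[['Major', 'Minor']] = df[['Major', 'Minor']].ffill()
print(df)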
Saving a DataFrame as .csv, with columns data type: object (list). Load the DataFrame but the columns data type are object (str), what am I missing? I saved a DataFrame as a .csv file, some of the DataFrame columns are populated with python list objects, but when I reload the same DataFrame, the columns that were populated with python list objects are now populated with python string objects.See code outputs. type(df['col1'][0]) out>> list print(df['col1'][0]) out>> ['a', 'b', 'c'] df.to_csv('df.csv') df_reloaded = pd.read_csv('df.csv') type(df_reloaded['col1'][0]) out>> str print(df_reloaded['col1'][0]) out>> "['a', 'b', 'c']"What am I missing? | The csv file format cannot store arrays, it can only store text. |
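A common workaround, assuming the cells only hold plain Python literals, is to parse the strings back after loading, e.g. with ast.literal_eval (a sketch, not part of the original answer):
import ast
import pandas as pd
df = pd.DataFrame({'col1': [['a', 'b', 'c'], ['d', 'e']]})
df.to_csv('df.csv', index=False)
df_reloaded = pd.read_csv('df.csv', converters={'col1': ast.literal_eval})
print(type(df_reloaded['col1'][0]))   # <class 'list'>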
Error in TF 2.3. when mixing eager and non-eager Keras models I'm having this issue when trying to fit a model in Tenserlfow 2.3, are there any workarounds or solutions to the problem? this error occurs also when i try to predict some records using TensorFlow Neural Network models. I hope someone expert in Tensorflow can find out what is wrong!Code:import tensorflow as tfimport numpy as npDO_BUG = Trueinputs = tf.keras.Input((1,))outputs = tf.keras.layers.Dense(10)(inputs)model0 = tf.keras.Model(inputs=inputs, outputs=outputs)if DO_BUG: with tf.Graph().as_default(): inputs = tf.keras.Input((1,)) outputs = tf.keras.layers.Dense(10)(inputs) model1 = tf.keras.Model(inputs=inputs, outputs=outputs)model0.compile(optimizer=tf.optimizers.SGD(0.1), loss=tf.losses.mse)model0.fit(np.zeros((4, 1)), np.zeros((4, 10)))Logs:Traceback (most recent call last): File ".../tmp.py", line 15, in <module> model0.fit(np.zeros((4, 1)), np.zeros((4, 10))) File "...\tensorflow\python\keras\engine\training_v1.py", line 807, in fit use_multiprocessing=use_multiprocessing) File "...\tensorflow\python\keras\engine\training_arrays.py", line 666, in fit steps_name='steps_per_epoch') File "...\tensorflow\python\keras\engine\training_arrays.py", line 189, in model_iteration f = _make_execution_function(model, mode) File "...\tensorflow\python\keras\engine\training_arrays.py", line 557, in _make_execution_function return model._make_execution_function(mode) File "...\tensorflow\python\keras\engine\training_v1.py", line 2072, in _make_execution_function self._make_train_function() File "...\tensorflow\python\keras\engine\training_v1.py", line 2021, in _make_train_function **self._function_kwargs) File "...\tensorflow\python\keras\backend.py", line 3933, in function 'eager execution. You passed: %s' % (updates,))ValueError: `updates` argument is not supported during eager execution. You passed: [<tf.Operation 'training/SGD/SGD/AssignAddVariableOp' type=AssignAddVariableOp>] | The below code works without error. Any specific reason to use commented part below.import tensorflow as tfimport numpy as npDO_BUG = Trueinputs = tf.keras.Input((1,))outputs = tf.keras.layers.Dense(10)(inputs)model0 = tf.keras.Model(inputs=inputs, outputs=outputs)"""if DO_BUG: with tf.Graph().as_default(): inputs = tf.keras.Input((1,)) outputs = tf.keras.layers.Dense(10)(inputs) model1 = tf.keras.Model(inputs=inputs, outputs=outputs)"""model0.compile(optimizer=tf.optimizers.SGD(0.1), loss=tf.losses.mse)model0.fit(np.zeros((4, 1)), np.zeros((4, 10))) |
Failure ONNX InferenceSession ONNX model exported from PyTorch I am trying to export a custom PyTorch model to ONNX to perform inference but without success... The tricky thing here is that I'm trying to use the script-based exporter as shown in the example here in order to call a function from my model.I can export the model without any complain but then when trying to start an InferenceSession I get the following error:Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from ner.onnx failed:Type Error: Type parameter (T) bound to different types (tensor(int64) and tensor(float) in node (Concat_1260).I tried to identify the root cause of that problem and it seems to be generated by the use of torch.matmul() in the following function (quite nasty cause I'm trying to only use pytorch operators):@torch.jit.scriptdef valid_sequence_output(sequence_output, valid_mask): X = torch.where(valid_mask.unsqueeze(-1) == 1, sequence_output, torch.zeros_like(sequence_output)) bs, max_len, _ = X.shape tu = torch.unique(torch.nonzero(X)[:, :2], dim=0) batch_axis = tu[:, 0] rows_axis = tu[:, 1] a = torch.arange(bs).repeat(batch_axis.shape).reshape(batch_axis.shape[0], -1) a = torch.transpose(a, 0, 1) T = torch.cumsum(torch.where(batch_axis == a, torch.ones_like(a), torch.zeros_like(a)), dim=1) - 1 cols_axis = T[batch_axis, torch.arange(batch_axis.shape[0])] A = torch.zeros((bs, max_len, max_len)) A[(batch_axis, cols_axis, rows_axis)] = 1.0 valid_output = torch.matmul(A, X) valid_attention_mask = torch.where(valid_output[:, :, 0] != 0, torch.ones_like(valid_mask), torch.zeros_like(valid_mask)) return valid_output, valid_attention_maskIt seems like torch.matmul isn't supported (according to the docs) so I tried a bunch of workaround (e.g. A.matmul(X), torch.baddbmm) but I still get the same issue...Any suggestions on how to fix this behavior would be awesome :DThanks for your help! | This points to a model conversion issue. Please open an issue againt the Torch exporter feature. A type (T) has to be bound to the same type for the model to be valid and ORT is basically complaining about this. |
Numpy array : how to convert values of a 2D array into a 3D one-hot array I have a numpy 2D array 'ya' of shape (1000, 20) where each cell has values between 0 and 5. I would like to create a 3D array 'yb' of shape (1000, 6, 20) that I create with np.zeros((1000, 6, 20)), where the cells in dim(1) would take a value 1 in the column corresponding to the value of ya.Example:ya[125, 12] = 4 and ya[248,7] = 1=> yb[125, 4, 12] = 1 and yb[248, 1, 7] = 1and all other cells of yb[125, i!=4, 12] and of [248, i!=1, 7] = 0Is there a nice way to do it without loops ?I hope my question is clear enough... ;-)Thanks a lot in advance.Simplified example with a 1D array extended into 2 D and values between 0 and 2 :ya = ([0, 2, 1, 1, 0])yb = ([1, 0, 0], [0, 0, 1], [0, 1, 0], [0, 1, 0], [1, 0, 0])The underlying idea is to replace values by 0 or 1 in a new dimension of the array. | Try broadcasting:(a[:,None,:] == np.arange(6)[None,:,None]).astype(int)Sample data:np.random.seed(1)m,n=3,4a = np.random.randint(0,6, (m,n))Output:array([[[0, 0, 0, 1], [0, 0, 0, 0], [0, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [1, 0, 0, 0]], [[0, 0, 0, 1], [1, 0, 0, 0], [0, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 1, 0]], [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]]) |
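An equivalent one-hot construction that indexes into an identity matrix (a sketch with random stand-in data in place of the real ya):
import numpy as np
ya = np.random.randint(0, 6, size=(1000, 20))   # stand-in for the real array
yb = np.eye(6, dtype=int)[ya]                   # shape (1000, 20, 6)
yb = yb.transpose(0, 2, 1)                      # shape (1000, 6, 20)
print(yb.shape)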
Python Pandas merging 2 tables with inner join: Hi, I am merging two tables with an inner join using pandas but I am getting a weird output. Below I am pasting the two tables: I want to inner join the tables so it only shows the zipcodes in the df2 table, so I use this line: result = pd.merge(ppy, df2, how="inner", on=["zipcode"]) But now I am getting two records for each zipcode. Anyone have any idea on how to fix this or what might be causing this issue? | The zipcode columns in the two datasets can have different data types. Check them with the dtype attribute. If the data types are different, convert them to a common type and then merge. |
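A sketch of that check and conversion, using the frame names from the question (casting both key columns to strings is just one arbitrary choice of common type):
import pandas as pd
print(ppy['zipcode'].dtype, df2['zipcode'].dtype)   # compare the two dtypes
ppy['zipcode'] = ppy['zipcode'].astype(str)
df2['zipcode'] = df2['zipcode'].astype(str)
result = pd.merge(ppy, df2, how='inner', on='zipcode')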
Pivot table with count if I want to count if the number is higher than 0.1 and then group them by month-year to see which month-year has the most days with more than 0.1 variations. I have a df like this with daily data, but only showing the month-year index. table = df.pivot_table(df, columns=['btc','bnb','eth','xmr','xrp'], aggfunc=df[df > 0.1].count()) print(table) Why is this not working? The result needs to be something like this | You can stack the dataframe, then compare the stacked frame with 0.1 to create a boolean mask, then take the sum on level=0 to count the values which are greater than 0.1 per month-year: df.stack().gt(0.1).sum(level=0) Alternate approach: df[df > 0.1].stack().count(level=0) EDIT: If you want to count the values which are greater than 0.1 in each column per month-year: df.gt(0.1).sum(level=0) |
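On newer pandas versions, where Series.sum(level=...) is deprecated, the same counts can be written with an explicit groupby on the index level (a sketch, with df as in the question):
df.stack().gt(0.1).groupby(level=0).sum()   # one total per month-year
df.gt(0.1).groupby(level=0).sum()           # per column, per month-year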
to_dict() function makes a list from dataframe instead of OrderedDict I am trying to convert a dataframe into an OrderedDict.I tried with the 2 options shown below. None of them result an OrderedDict and I don't know why it doesn't work.Option 1 and 2 result to be listssales_data = pd.read_csv ("data/sales-data.csv")Motorcycles = sales_data.loc[sales_data['PRODUCTLINE'] == 'Motorcycles']Motorcycles = Motorcycles.sort_values('ORDERID').head(100)# Option 1Dict_Motorcycles = Motorcycles.to_dict(into=OrderedDict, orient='records')type(Dict_Motorcycles)# Opction 2a= Motorcycles.to_dict('records')type(a)If I write the code Motorcycles.to_dict('records') it becomes and OrderedDict, but I'm not able to save it in a variable.If I change from orient = 'records' to ```orient = 'index'`` it is fitted, but it is not shown in the way I want. | Based on the commentCreating a test dataframedf = pd.DataFrame({"A": [1,2,3], "B": ["a","b","c"]})Printing dataframe>>> print(df) A B0 1 a1 2 b2 3 cAssociating to_dict result to a variablendf = df.to_dict(into=OrderedDict, orient='records')print(ndf)[OrderedDict([('A', 1), ('B', 'a')]), OrderedDict([('A', 2), ('B', 'b')]), OrderedDict([('A', 3), ('B', 'c')])]print(type(ndf))<class 'list'>Note The ndf type is a list because the orient parameter is 'records'However, if you remove the orient parameter, you will getndf = df.to_dict(into=OrderedDict)print(ndf)OrderedDict([('A', OrderedDict([(0, 1), (1, 2), (2, 3)])), ('B', OrderedDict([(0, 'a'), (1, 'b'), (2, 'c')]))])print(type(ndf))<class 'collections.OrderedDict'>Update Mar 18th, 2021If you really need an OrderedDictfrom collections import OrderedDictndf = OrderedDict({ "myDataFrame": df.to_dict(into=OrderedDict, orient='records')})print(type(ndf))<class 'collections.OrderedDict'>print(ndf)OrderedDict([('myDataFrame', [OrderedDict([('A', 1), ('B', 'a')]), OrderedDict([('A', 2), ('B', 'b')]), OrderedDict([('A', 3), ('B', 'c')])])]) |
python string replace only special characters keeping non-english alphabets How can I remove only special characters from a string, but not foreign language characters. When I try the below code, it removes both special characters and non-english alphabets. But I want to remove only special characters (special characters that appear in regular English sentences).import pandas as pdfrom io import StringIOdata = """id,name1,A1,B1,C1,D2,E2,F2,ds2,G2, dsds3,Endüstrisi`"""df = pd.read_csv(StringIO(data))df['name'].str.replace('[^a-zA-Z\d\s]','',regex=True)The above code results in0 A1 B2 C3 D4 E5 F6 ds7 G8 dsds9 EndstrisiName: name, dtype: objectBut what I want isThe above code results in0 A1 B2 C3 D4 E5 F6 ds7 G8 dsds9 EndüstrisiName: name, dtype: object | You can usedf['name'] = df['name'].str.replace(r'[^\w\s]|_', '', regex=True)In Python 3, all regex shorthand character classes (like \w, \d, \s) are Unicode aware by default, as the re.U (re.UNICODE) flag is on by default. Thus, if you use \w construct in a negated character class, it matches all chars other than any Unicode letters, digits and _.Since you do not want to match whitespaces, \s is added to the negated character class.An underscore cannot be included into the negated character class (since it will not be matched then), you need an alternative to match _.So, the pattern matches[^\w\s] - any char but Unicode letters, digits, whitespaces and _| - or_ - an underscore. |
Extracting multiple values in different rows I have a datasetID col1 col2 year1 A 111,222,3334 20102 B 344, 111 20103 C 121,123 2011I wanna rearrange the dataset in the following wayID col1 col2 year1 A 111 20101 A 222 20101 A 3334 20102 B 344 20102 B 111 20103 C 121 20113 C 123 2011I can do it using the following code.a = df.COMP_MONITOR_TYPE_CODE.str[:3]df['col2'] = np.where(a == 111, 111)Since, I have a very long data, its would be time consuming to do it one by one. Is there any other way to do it | split + explode:df.assign(col2 = df.col2.str.split(',')).explode('col2')# ID col1 col2 year#0 1 A 111 2010#0 1 A 222 2010#0 1 A 3334 2010#1 2 B 344 2010#1 2 B 111 2010#2 3 C 121 2011#2 3 C 123 2011 |
subset pandas with different values but same index I want to create a dataset from condition on values on another datasetFROM minV maxV2008-01-02 NaN NaN2008-01-03 NaN NaN2008-01-04 -0.022775 NaN2008-01-07 NaN 0.0101792008-01-08 -0.039777 NaN2008-01-09 NaN NaNto val 2008-01-04 1000 2008-01-07 -1000 2008-01-08 1000i can get the list of index...but i am not sure what to do with it indexmin = df.index[df.minV < 0].tolist() indexmax = df.index[df.maxV > 0].tolist() | Assuming we can only have one non-NaN value in a row, we can take max of the row, then drop NaNs with dropna, take sign with np.sign and multiply by -1000:df.max(axis=1).dropna().apply(np.sign) * -1000Output:2008-01-04 1000.02008-01-07 -1000.02008-01-08 1000.0 |
how to reshape/ explode pandas dataframe? i have this dataframe that have row of each key*id , i want to explode it to id,key1,key2 and remove duplicate rows and keep data_field , i am working with python2.7 but i would glad to a solution that will work both for python2.7 and python3.7dataframe i have:import pandas as pdd = {'id': [111, 222, 222, 333, 333], 'key': ['key1', 'key2','key1','key2','key1'], 'value':[1,1,2,3,3],'data_field':['dummy1','dummy1','dummy2','dummy3','dummy2']}df = pd.DataFrame(data=d)print df[['id','key','value','data_field']].to_string(index=False) id key value data_field 111 key1 1 dummy1 222 key2 1 dummy1 222 key1 2 dummy2 333 key2 3 dummy3 333 key1 3 dummy2dataframe i want it to be transformed to:d = {'id': [111, 222, 333], 'key1': [1, 2, 3],'key2':[pd.np.nan,1,3] , 'data_field': ['dummy1', 'dummy2', 'dummy3']}df = pd.DataFrame(data=d)print df[['id', 'key1', 'key2', 'data_field']].to_string(index=False) id key1 key2 data_field 111 1 NaN dummy1 222 2 1.0 dummy2 333 3 3.0 dummy3tried as suggested heredf.pivot(index='id', columns='key', values='value').join(df.drop_duplicates('id')['data_field'])and got : key1 key2 data_fieldid 111 1.0 NaN NaN222 2.0 1.0 NaN333 3.0 3.0 NaNdata_field was not kept and id is now index and not column | Use DataFrame.pivot with DataFrame.join only first duplicated rows in data_field per id:df = (df.pivot(index='id', columns='key', values='value') .join(df.set_index('id')['data_field'].drop_duplicates()) .reset_index())print (df) id key1 key2 data_field0 111 1.0 NaN dummy11 222 2.0 1.0 dummy22 333 3.0 3.0 dummy3Another idea for first data from data_field to new columns per id and key:df = df.pivot_table(index='id',columns='key',values=['value','data_field'],aggfunc='first')df.columns = df.columns.map('_'.join)df = df.reset_index()print (df) id data_field_key1 data_field_key2 value_key1 value_key20 111 dummy1 NaN 1.0 NaN1 222 dummy2 dummy1 2.0 1.02 333 dummy2 dummy3 3.0 3.0 |
fit_generator not running after the first epoch I'm practicing with the implementation of RNNs and LSTMs in Keras on R and I was first trying to run some examples from Deep Learning With R book by Chollet; since I'm working with time series I decided to start from the temperature example:dir.create("~/Downloads/jena_climate", recursive = TRUE)download.file( "https://s3.amazonaws.com/keras-datasets/jena_climate_2009_2016.csv.zip", "~/Downloads/jena_climate/jena_climate_2009_2016.csv.zip")unzip( "~/Downloads/jena_climate/jena_climate_2009_2016.csv.zip", exdir = "~/Downloads/jena_climate")library(tibble)library(readr)data_dir <- "~/Downloads/jena_climate"fname <- file.path(data_dir, "jena_climate_2009_2016.csv")data <- read_csv(fname)glimpse(data)data <- data.matrix(data[,-1])train_data <- data[1:200000,]mean <- apply(train_data, 2, mean)std <- apply(train_data, 2, sd)data <- scale(data, center = mean, scale = std)generator <- function(data, lookback, delay, min_index, max_index, shuffle = FALSE, batch_size = 128, step = 6) { if (is.null(max_index)) max_index <- nrow(data) - delay - 1 i <- min_index + lookback function() { if (shuffle) { rows <- sample(c((min_index+lookback):max_index), size = batch_size) } else { if (i + batch_size >= max_index) i <<- min_index + lookback rows <- c(i:min(i+batch_size-1, max_index)) i <<- i + length(rows) } samples <- array(0, dim = c(length(rows), lookback / step, dim(data)[[-1]])) targets <- array(0, dim = c(length(rows))) for (j in 1:length(rows)) { indices <- seq(rows[[j]] - lookback, rows[[j]]-1, length.out = dim(samples)[[2]]) samples[j,,] <- data[indices,] targets[[j]] <- data[rows[[j]] + delay,2] } list(samples, targets) }}lookback <- 1440step <- 6delay <- 144batch_size <- 128train_gen <- generator( data, lookback = lookback, delay = delay, min_index = 1, max_index = 200000, shuffle = TRUE, step = step, batch_size = batch_size)val_gen = generator( data, lookback = lookback, delay = delay, min_index = 200001, max_index = 300000, step = step, batch_size = batch_size)test_gen <- generator( data, lookback = lookback, delay = delay, min_index = 300001, max_index = NULL, step = step, batch_size = batch_size)# How many steps to draw from val_gen in order to see the entire validation setval_steps <- (300000 - 200001 - lookback) / batch_size# How many steps to draw from test_gen in order to see the entire test settest_steps <- (nrow(data) - 300001 - lookback) / batch_sizemodel <- keras_model_sequential() %>% layer_flatten(input_shape = c(lookback / step, dim(data)[-1])) %>% layer_dense(units = 32, activation = "relu") %>% layer_dense(units = 1)model %>% compile( optimizer = optimizer_rmsprop(), loss = "mae")history <- model %>% fit_generator( train_gen, steps_per_epoch = 500, epochs = 20, validation_data = val_gen, validation_steps = val_steps)I have no problems till the creation of the model, but after running the fit_generator function the processing gets stuck on the first epoch:Epoch 1/20 1/500 [..............................] - ETA: 0s - loss: 1.2643I've installed Keras 2.3.0.0 and tensorflow 2.2.0. Do you now how to solve this? | I had a similar problem with Rstudio but when I use simply R, it works.Could you please check? |
Reading a CSV file with irregular number of columns using Pandas I am trying to read a csv file, which doesn't contain a header line, and it contains an indefinite amount of columns, with pandas.I have search how to work around this, but all the answers that I have found require for me to already know (search by opening the file) the maximum number that a column can have and create a names= attribute on read_csv function, like this:names = ["a", "b", "c", "d"]table = pandas.read_csv('freqs.tsv', header=None, sep='\t+', names=names)My question is, is there any possible ways to do this without knowing the maximum number of columns? For future reusability of the script, I want to generalize if it is possible.Here is a sample text file I was using to run some tests:mathematics 1.548438245 1.4661764369999999 1.429891562 english 1.237816576 1.043399455physics 2.415563662 11.165497484000001 5.954598265 7.853732762999999 7.929835858drama 2.0439384830000003 9.81210385 5.068332477 8.579349377 5.962282599999999health 1.557941553 1.222267933science 1.550193476gym 1.240610831 1.149375944 1.899408195 1.3713249980000002Thank you | I get the following output01234mathematics1.548441.466181.42989nannanenglish1.237821.0434nannannanphysics2.4155611.16555.95467.853737.92984drama2.043949.81215.068338.579355.96228health1.557941.22227nannannanscience1.55019nannannannangym1.240611.149381.899411.37132nanBy writing:import pandas as pd # Assume your data is in test.txt in the current working directory f = open("test.txt", "r")# This assumes your spacing is arbitrary data = [line.split() for line in f]data = {line[0] : [float(item) for item in line[1:]] for line in data}# The orient = "index" allows us to handle differing lengths of entriesdf = pd.DataFrame.from_dict(data, orient="index")# this just provides the above table for printing in StackOverflowprint(df.to_markdown()) Note that I've assumed the spacing in your file is arbitrary and hence we don't need to track which columns are empty, we can just split at spaces and keep the values.Also note that nan means "not a number" and is what you should expect to see in your dataframe if you have rows of differing lengths.Finally, if you want the subjects as the columns, use df = df.transpose(). |
pandas groupby tuple of different length - ValueError: Values not found in passed level: MultiIndex Edit: example DataFrame for the original error-message found and posted.(As I just recognized, the Error does only appear, if the tuple has a certain length. The example is now adapted.)Original text:I need to group by tuple of different length.For the grouping I'm applying a summary_function.import pandas as pddef summary_function(df): value_mean = df['value'].mean() df1 = pd.DataFrame({'value_mean':[value_mean] }) return df1tuple_list = [(1,2,1,1,1,1,1,1,1,1,1,1,1),(2,3,1,1,1,1,1,1,1,1,1,1,1), \ (1,2,1,1,1,1,1,1,1,1,1,1,1), \ (2,3,4,4,4,4,4,4,4,4,4,4,4,4,4,1,1,1,1,1,1,1,1,1,1,1)]value = [1,2,3,4]letter = list('abab')df = pd.DataFrame({'letter':letter, 'tuple':tuple_list, 'value':value})df> letter tuple value>0 a (1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1) 1>1 b (2, 3, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1) 2>2 a (1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1) 3>3 b (2, 3, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, ... 4If I'm using a direct mean() function, the result is how expected:df.groupby(['letter','tuple']).mean()> value>letter tuple >a (1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1) 2>b (2, 3, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1) 2> (2, 3, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, ...) 4But if I apply the function. (which I need to use since I have dozens of summaries) The tupel is empty while using the simpledf.groupby(['letter','tuple']).apply(lambda x:summary_function(x))I get a ValueError:>ValueError: Values not found in passed level: MultiIndex([(2, 3, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4)], )It would be awesome to get some ideas on how to solve this. | In your case, do not return the dataframe, return the series.When you return the series, Pandas will align the series horizontally. For example:def summary_function(df): return df['value'].agg(['min','mean','max'])df.groupby(['letter','tuple']).apply(summary_function)Output:value min mean maxletter tuple a (1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1) 1.0 2.0 3.0b (2, 3, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1) 2.0 2.0 2.0 (2, 3, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 1... 4.0 4.0 4.0 |
replace pandas dataframe None value with dictionary I have a pandas dataframe called "myRawDF1" and:print(myRawDF1)Result is: stop_price last_trail_price0 79.74 {'amount': '100.47', 'currency_code': 'USD'}1 None2 73.06 {'amount': '114.52', 'currency_code': 'USD'}etc...I want to replace "None" with a dictionary {'amount': '0.0'} So the result would be: stop_price last_trail_price0 79.74 {'amount': '100.47', 'currency_code': 'USD'}1 {'amount': '0.0'}2 73.06 {'amount': '114.52', 'currency_code': 'USD'}Or if it is somehow easier... I could use {'amount': '0.0', 'currency_code': 'USD'} like this: stop_price last_trail_price0 79.74 {'amount': '100.47', 'currency_code': 'USD'}1 {'amount': '0.0', 'currency_code': 'USD'}2 73.06 {'amount': '114.52', 'currency_code': 'USD'}I can't figure out how to do this. I thought I could use "fillna" because:myRawDF1.fillna(0.0, inplace=True)Successfully replaces all None values with zeros... So I thought this would work:myRawDF1.fillna({'amount': '0.0'}, inplace=True)But it doesn't... I also tried:myRawDF1['last_trail_price'].fillna({'amount': '0.0'}, inplace=True)This changes all the None vales to NAN... so its doing somethingI also tried this... Which I found online... But it doesn't seem to work eithermyRawDF1['last_trail_price'] = myRawDF1['last_trail_price'].fillna(pd.Series([{'amount': '0.0'}], index = myRawDF1.index)) | I think you need to use loc access:s = df['last_trail_price'].isna()df.loc[s, 'last_trail_price'] = [{'amount':0.0} for _ in range(s.sum())] |
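An apply-based alternative that replaces anything that is not already a dict (a sketch, reusing the filler dict from the question):
df['last_trail_price'] = df['last_trail_price'].apply(
    lambda v: v if isinstance(v, dict) else {'amount': '0.0', 'currency_code': 'USD'})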
Preparing dataset TimeSeries data So I am working on a project where I have some time series data that I want to predict. The problem is that my dataset consists of different water samples taken from a water source and there are in a single csv file.My dataset looks kinda like this:Date Sample_Name pH temp etc...2009-01-01 ABC1 7.2 122009-01-02 ABC2 5.5 11...2015-01-05 ABC1 8.9 132015-01-05 ABC4 8.8 13So ABC1 and ABC2 are different samples and have information recorded every month. What I want to do is feed ABC1 explicitly into the model, but I don't know how to do that. I can group samples by their names with this line of code:abc1 = df.loc[df['Sample_Name'] == "ABC1"]How can I feed this kind of data into a model?I did not decide on the final model but it will probably be an Encoder/decoder(with attention) or an LSTM.Each Sample contains about 70 rows and I have over 100 samples. | Let me give it a go as it is not entirely clear what the desired output is, but hopefully will steer you in the right directionLoad your example:from io import StringIOdata = StringIO('''Date Sample_Name pH temp2009-01-01 ABC1 7.2 122009-01-02 ABC2 5.5 112015-01-05 ABC1 8.9 132015-01-05 ABC4 8.8 13''')df = pd.read_csv(data, sep = '\s+')Then we can use groupby method to create a dictionary, keyed on each sample name, with the corresponding value being the dataframe for that sample that could be fed into a model:input_dict = {key:df.drop(columns = 'Sample_Name').reset_index(drop = True) for key, df in df.sort_values('Date').groupby('Sample_Name')}You can access individual sample dfs by the name of the sample, for example withinput_dict['ABC1']you get the corresponding df: Date pH temp0 2009-01-01 7.2 121 2015-01-05 8.9 13 |
Is there a way to import csv files into pandas using values of a dictionary for the name of the dataframes? I just started with Python and am currently trying to import multiple csv files as dataframes. While there are some similar questions, they seem not to be helpful for my problem. The csv files have the same structure and the names are not how I want them to be when imported as dataframes. A list of dictionaries contains the names of the dataframes (how they should be) together with the names of the csv files. Since I need to do this multiple times with different folders I tried to create a function: def import_csv(CSVdict): for index in range(len(CSVdict)): CSVdict[index]["New_ID"]=pd.read_csv(("C:/path/"+str(CSVdict[index]["csvDatei"])+".csv"),sep=';',decimal=',') I am not sure where my mistake is, can you help me? The list of dictionaries looks something like this: [{'Nr': '0905', 'New_ID': '0905a', 'csvDatei': 'LG__380'}, {'Nr': '0905', 'New_ID': '0905b', 'csvDatei': 'LG__376'}, {'Nr': '0955', 'New_ID': '0955a', 'csvDatei': 'LG__53'}, {'Nr': '0955', 'New_ID': '0955b', 'csvDatei': 'LG__50'}] Later on I need to pd.concat() the dataframes with the same value in Nr. So the dataframes with New_ID = 0955a and New_ID = 0955b and so on need to be in one. Before that they have to be adjusted, so I can't just read the files and use pd.concat() directly. | You can key the dictionary of dataframes by the New_ID value, something like: csv_dict = [ {'Nr': '0905', 'New_ID': '0905a', 'csvDatei': 'LG__380'}, {'Nr': '0905', 'New_ID': '0905b', 'csvDatei': 'LG__376'}, {'Nr': '0955', 'New_ID': '0955a', 'csvDatei': 'LG__53'}, {'Nr': '0955', 'New_ID': '0955b', 'csvDatei': 'LG__50'},] dataframes = {} for d in csv_dict: path = "C:/path/{}.csv".format(d["csvDatei"]) dataframes[d["New_ID"]] = pd.read_csv(path, sep=";", decimal=",") If you need to label each DataFrame you can add the following line inside the for loop: dataframes[d["New_ID"]]["ID_col"] = d["New_ID"] where d["New_ID"] is the ID applied to each dataframe. |
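A sketch of the later concatenation by Nr that the question mentions, reusing csv_dict and dataframes from the answer above (it assumes each frame has already been adjusted as needed):
from collections import defaultdict
import pandas as pd
frames_by_nr = defaultdict(list)
for d in csv_dict:
    frames_by_nr[d['Nr']].append(dataframes[d['New_ID']])
combined = {nr: pd.concat(frames, ignore_index=True) for nr, frames in frames_by_nr.items()}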
Filter a dataframe based specific condition in pandas I have a dataframe as shown belowdf:ID Age_days N_30 N_31_90 N_91_180 N_181_365 Group1 201 60 15 30 40 Good2 20 2 15 5 20 Normal3 10 4 0 0 0 Normal4 100 0 0 0 80 Normal5 600 0 6 5 60 Good6 800 0 0 15 0 Good7 500 10 10 30 40 Normal 8 200 0 0 0 100 Good9 500 0 0 0 20 Normal10 80 0 12 0 20 NormalwhereN_30 - Number of transactions in last 30 daysN_31_90 - Number of transactions in last 31 to 90 days and so on.Conditions for filtering If Age_days is less than 30, N_31_90, N_91_180, N_181_365 should be 0. If Age_days is less than 90, N_91_180, N_181_365 should be 0. If Age_days is less than 180, N_181_365 should be 0.But in the above data there are some rows where Age_days is less and transacted before.I would like to filter such rows.Expected output:ID Age_days N_30 N_31_90 N_91_180 N_181_365 Group2 20 2 15 5 20 Normal4 100 0 0 0 80 Normal10 80 0 12 0 20 Normal | Use Boolean Mask to filter conditions:m1 = (df['Age_days'] <= 30) & ((df['N_31_90'] !=0) | (df['N_91_180'] !=0) | (df['N_181_365'] !=0))m2 = (df['Age_days'] <= 90) & ((df['N_91_180'] !=0) | (df['N_181_365'] !=0))m3 = (df['Age_days'] <= 180) & (df['N_181_365'] !=0)print(df[m1|m2|m3])m1 is the boolean mask for the invalid condition where Age_days is <= 30 while there are non-zero values for transactions performed more than 30 days ago. Similarly for m2 and m3.Then we do a Boolean Or with m1|m2|m3 in df[m1|m2|m3] to filter the rows with any one of the 3 invalid conditions.Output: ID Age_days N_30 N_31_90 N_91_180 N_181_365 Group1 2 20 2 15 5 20 Normal3 4 100 0 0 0 80 Normal9 10 80 0 12 0 20 Normal |
How to move each value in a row one position by a position in the array? How to move the values in the field by scrolling by one position a position in sequence? Then replace unnecessary values with the number zero?Example:My arraynp.array([[51 52 53 54 55 56 57] [41 42 43 44 45 46 47] [31 32 33 34 35 36 37] [21 22 23 24 25 26 27] [11 12 13 14 15 16 17]]) I need:[[0 0 0 0 51 52 53] [0 0 0 41 42 43 0] [0 0 31 32 33 0 0] [0 21 22 23 0 0 0] [11 12 13 0 0 0 0]] Here we see that the last line does not shift and unnecessary values are zero. In the next line, the first three numbers are shifted to the right and the others are reset. I would need such a sequence. It is possible?array's shape is not fixed | I assume that the number of elements shifted to the right is also arbitrary (elements_shifted > 0). Here is my first attempt:import numpy as npa = np.array([[51, 52, 53, 54, 55, 56, 57], [41, 42, 43, 44, 45, 46, 47], [31, 32, 33, 34, 35, 36, 37], [21, 22, 23, 24, 25, 26, 27], [11, 12, 13, 14, 15, 16, 17]])elements_shifted = 3 # You can change this number to another desired one > 0b = [row[:elements_shifted] for row in a]a_shifted = np.zeros(a.shape)start_idx = -elements_shifteda_shifted[0][start_idx:] = b[0]for i in range(1, len(a)): start_idx -= 1 if -start_idx > a.shape[1]: a_shifted[i][:elements_shifted] = b[i] else: a_shifted[i][start_idx:start_idx + elements_shifted] = b[i]print(a_shifted)Output:[[ 0. 0. 0. 0. 51. 52. 53.] [ 0. 0. 0. 41. 42. 43. 0.] [ 0. 0. 31. 32. 33. 0. 0.] [ 0. 21. 22. 23. 0. 0. 0.] [11. 12. 13. 0. 0. 0. 0.]] |
I've downloaded bert pretrained model 'bert-base-cased'. I'm unable to load the model with help of BertTokenizer I've downloaded bert pretrained model 'bert-base-cased. I'm unable to load the model with help of BertTokenizer. I'm trying for bert tokenizer. In the bert-pretrained-model folder I have config.json and pytorch_model.bin.tokenizer = BertTokenizer.from_pretrained(r'C:\Downloads\bert-pretrained-model')I'm facing error likeOSError Traceback (most recent call last)<ipython-input-17-bd4c0051c48e> in <module>----> 1 tokenizer = BertTokenizer.from_pretrained(r'\Downloads\bert-pretrained-model')~\sentiment_analysis\lib\site-packages\transformers\tokenization_utils_base.py in from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs) 1775 f"- or '{pretrained_model_name_or_path}' is the correct path to a directory containing relevant tokenizer files\n\n" 1776 )-> 1777 raise EnvironmentError(msg) 1778 1779 for file_id, file_path in vocab_files.items():OSError: Can't load tokenizer for 'C:\Downloads\bert-pretrained-model'. Make sure that:- 'C:\Downloads\bert-pretrained-model' is a correct model identifier listed on 'https://huggingface.co/models'- or 'C:\Downloads\bert-pretrained-model' is the correct path to a directory containing relevant tokenizer filesWhen I'm trying load with BertModel, it's loading. But when i'm trying with BertTokenizer it's not loading. | Whats' the version of transformers you are using?. I had a similar issue, the solution was to upgrade the transformers to the latest(like 4.3.3 currently) version (I was using an old 2..1 version because I had to make an older code run) and it worked. Looks like older versions of transformers have this issue with loading the language model from the local path.I would suggest upgrading your transformers in a separate virtual environment, this way you won't mess up other codes. If you don't use a virtual environment, it's highly recommended that you do now, here is a good and simple way of installing and creating one (includes Windows, considering your case) in this link.Alternative recommendation:This may not be an answer to your question, but I would suggest using the pre-trained language model directly, instead of downloading it and pointing to its local path. At least that's a recommended way by huggingface. The only downside of this is when you don't have fast internet, it might take a while to load it. Other than that, including this line of code instead of yours would easily solve your peoblem:tokenizer = BertTokenizer.from_pretrained("bert-base-cased") |
Float values in a list become string when converting the list to a numpy array I have a list (scores) that contains float values and I want to convert it to a numpy array. However, after converting it, the type of the values changes from float to string. This is what I have written: scores = [5.0, 5.0, 4.0, 4.0, 5.0, 5.0, 5.0] import numpy as np scores = np.array(scores) scores = ['5.0' '5.0' '4.0' ... '5.0'] I have tried to convert the values to float once again: scores = np.asarray(scores, dtype=np.float64, order='C') However, the following error appears: ValueError: could not convert string to float: How can I convert a list to a numpy array in a way that the type of the values doesn't change? | I don't think they are being converted to strings. I tried your code and added a simple line that adds two of the elements: the result is a number rather than a concatenated string, which means the values are not strings but actual decimal numbers. scores = [5.0, 5.0, 4.0, 4.0, 5.0, 5.0, 5.0] import numpy scores = numpy.array(scores) print(scores[0]+scores[1]) |
Merge Row based on Condition i have df like this Date Description Debit Credit Balance originalIdx0 01-03-19 AAAA NaN NaN 49Cr 01 01-03-19 ASSS NaN 6,000.00 55Cr 12 NaN XYZ ABC saa NaN 13 01-03-19 ABZ 289.00 NaN 55Cr 3I want this Date Description Debit Credit Balance originalIdx0 01-03-19 AAAA NaN NaN 49Cr 01 01-03-19 ASSSXYZABCsaa NaN 6,000.00 55Cr 13 01-03-19 ABZ 289.00 NaN 55Cr 3I want to merge The Row if The originalIdx Is the same , so merge the Row in Description Columnthis was my real time data | Assuming that Date will have NaN if the row needs to be merged, here's the code.first create a dummy column merged. It will merge all the values of Description, Debit, and Credit. It will only merge if the value is alpha (excludes numeric values)Then replace Description by using groupby transform (lambda) function.Then dropna if rows have Date as NaN. Also drop the temp column merged.df['merged'] = df[['Description','Debit','Credit']].apply(lambda x: ''.join([str(a) for a in x if pd.notnull(a) and not isinstance(a, float)]) ,axis=1)df['Description'] = df.groupby("originalIdx")['merged'].transform(lambda x: "".join(x))df.dropna(subset=['Date'],inplace=True)df.drop(columns='merged',inplace=True)print (df)This will give you: Date Description Debit Credit Balance originalIdx0 01-03-19 AAAA NaN NaN 49Cr 01 01-03-19 ASSSXYZABCsaa NaN 6000.0 55Cr 13 01-03-19 ABZ 289.0 NaN 55Cr 3Here's the full code with data and output.Replace your df['merged'] with the below code:df['merged'] = df[['Description','Debit','Credit']].apply(lambda x: ''.join([str(a) for a in x if pd.notnull(a) and not isinstance(a, float)]) ,axis=1)Full Code is:import pandas as pdimport numpy as nppd.set_option('display.max_columns', 200)pd.set_option('display.max_colwidth', 250)c = ['Date','Description','Debit','Credit','Balance','originalIdx']d = [['01-03-19','FORTAP-MUMBAI/',np.NaN, np.NaN, '49656.25Cr',0], ['01-03-19','FORTAP-MUMBAI/******',np.NaN,6000.00,'55656.25Cr',1], [np.NaN,'UP/*ABC*/*DEF*','UPI/*PQR*/*XYZ*','paytm/NA',np.NaN,1],['01-03-19','MBK/*ABCDEF*/*ZZZ*',289.00,np.NaN,'55357.25Cr',3]]df = pd.DataFrame(d,columns=c)print (df)df['merged'] = df[['Description','Debit','Credit']].apply(lambda x: ''.join([str(a) for a in x if pd.notnull(a) and not isinstance(a, float)]) ,axis=1)df['Description'] = df.groupby("originalIdx")['merged'].transform(lambda x: "".join(x))df.dropna(subset=['Date'],inplace=True)df.drop(columns='merged',inplace=True)print (df)Before and after output attached:Before: Date Description Debit Credit Balance originalIdx 0 01-03-19 FORTAP-MUMBAI/ NaN NaN 49656.25Cr 0 1 01-03-19 FORTAP-MUMBAI/****** NaN 6000.0 55656.25Cr 1 2 NaN UP/*ABC*/*DEF* UPI/*PQR*/*XYZ* paytm/NA NaN 1 3 01-03-19 MBK/*ABCDEF*/*ZZZ* 289.0 NaN 55357.25Cr 3 After: Date Description Debit Credit Balance originalIdx 0 01-03-19 FORTAP-MUMBAI/ NaN NaN 49656.25Cr 0 1 01-03-19 FORTAP-MUMBAI/******UP/*ABC*/*DEF*UPI/*PQR*/*XYZ*paytm/NA NaN 6000.0 55656.25Cr 1 3 01-03-19 MBK/*ABCDEF*/*ZZZ* 289.0 NaN 55357.25Cr 3 |
why doesn't NumPy import in idle 3.30 on Ubuntu 12.10 64 Bit I installed NumPy by running the following in a linux shell:sudo apt-get install python-numpyIn Idle for python 3.30 when I import numpy it outputs the following: Python 3.3.0 (default, Sep 29 2012, 17:14:58) [GCC 4.7.2] on linuxType "copyright", "credits" or "license()" for more information.>>> import numpyTraceback (most recent call last): File "<pyshell#0>", line 1, in <module> import numpy File "/usr/lib/python3/dist-packages/numpy/__init__.py", line 137, in <module> from . import add_newdocs File "/usr/lib/python3/dist-packages/numpy/add_newdocs.py", line 9, in <module> from numpy.lib import add_newdoc File "/usr/lib/python3/dist-packages/numpy/lib/__init__.py", line 4, in <module> from .type_check import * File "/usr/lib/python3/dist-packages/numpy/lib/type_check.py", line 8, in <module> import numpy.core.numeric as _nx File "/usr/lib/python3/dist-packages/numpy/core/__init__.py", line 5, in <module> from . import multiarrayImportError: cannot import name multiarray>>> I also have SciPy, matplotlib, and mayavi2 installed. They also through errors when I import them. Why does this happen. How can I fix this? | On my ubuntu 12.10. I use pip to install packages. I use Python3.2.sudo apt-get install python3-pipsudo pip-3.2 install numpyI have tried this and installed numpy successfully. |
numpy.unique generates a list unique in what regard? If you input an array with general objects to numpy.unique, the result will be unique based upon what?I have tried:import numpy as npclass A(object): #probably exists a nice mixin for this :P def __init__(self, a): self.a = a def __lt__(self, other): return self.a < other.a def __le__(self, other): return self.a <= other.a def __eq__(self, other): return self.a == other.a def __ge__(self, other): return self.a >= other.a def __gt__(self, other): return self.a > other.a def __ne__(self, other): return self.a != other.a def __repr__(self): return "A({})".format(self.a) def __str__(self): return self.__repr__()np.unique(map(A, range(3)+range(3)))which returnsarray([A(0), A(0), A(1), A(1), A(2), A(2)], dtype=object)but my intentions are to get:array([A(0), A(1), A(2)], dtype=object) | Assuming the duplicate A(2) is a typo, I think you simply need to define __hash__ (see the docs):import numpy as npfrom functools import total_ordering@total_orderingclass A(object): def __init__(self, a): self.a = a def __lt__(self, other): return self.a < other.a def __eq__(self, other): return self.a == other.a def __ne__(self, other): return self.a != other.a def __hash__(self): return hash(self.a) def __repr__(self): return "A({})".format(self.a) def __str__(self): return repr(self)produces>>> map(A, range(3)+range(3))[A(0), A(1), A(2), A(0), A(1), A(2)]>>> set(map(A, range(3)+range(3)))set([A(0), A(1), A(2)])>>> np.unique(map(A, range(3)+range(3)))array([A(0), A(1), A(2)], dtype=object)where I've used total_ordering to reduce the proliferation of methods, as you guessed was possible. :^)[Edited after posting to correct missing __ne__.] |
Matlab to Python Code I have piece of code in matlab: Tf=eye(2);Tb=eye(2);Tt=eye(2);n=250;f=zeros(2,n);for i=1:n f(:,i)=Tf*f(:,i-1);endI tried to change it to Python code:Tf=eye(2)n=250f=numpy.zeros((2,n))for i in range (n) f[:,i]=numpy.dot(Tf, f[:,i-1])this gives "TypeError: array() takes exactly 1 argument (2 given)"Any help? | As @CharlesBrunet notes, there's a few issues with the python implementation, which should be:import numpyTf=numpy.eye(2)n=5f=numpy.zeros((2,n))for i in range(n): f[:,i]=numpy.dot(Tf, f[:,i-1])The resulting f is:[[ 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 0.]]You also have an issue in your matlab implementation, since you're trying to index f(:,0) in the first iteration of the for loop, which will result in an error: Attempted to access f(:,0); index must be a positive integer or logical. Here's the fixed version:Tf=eye(2);n=5;f=zeros(2,n);for i=2:n f(:,i)=Tf*f(:,i-1);endThe resulting f is:f = 0 0 0 0 0 0 0 0 0 0In other words, other than those few typos, there doesn't seem to be any problem with each implementation. You just have to be more careful when crafting these examples, particularly when thinking of posting a question about them.Note that I've re-defined n=5 so that the value of f doesn't take too many lines of the answer. |
Using Pandas rolling function on text columns I have a pandas dataframe with column values in string format and a datetime index. I want to create a new column which will have a list of values of a column for last two days. Is it possible to achieve this using pandas?original datafarme: date col1 col20 2018-07-08 a b1 2018-07-09 c d2 2018-07-10 e f3 2018-07-11 g h4 2018-07-12 i j5 2018-07-13 k l6 2018-07-14 m nFinal dataframe: date col1 col2 col30 2018-07-08 a b NaN1 2018-07-09 c d NaN2 2018-07-10 e f b, d3 2018-07-11 g h d, f4 2018-07-12 i j f, h5 2018-07-13 k l h, j6 2018-07-14 m n j, l | df.iloc[:,2].shift(2)+ ',' +df.iloc[:,2].shift(1)EditWe could extend this to a more generic setting, Define a customized rolling concat function,rolling_cat = lambda s, n: pd.Series(zip(*[s.shift(x+1) for x in range(n)])).str.join(',')Apply the functionrolling_cat(df.iloc[:,2], n=2) |
In each row of a numpy array, I have an int, and a python list of ints. How do I convert this list into a numpy int array, without using pandas? I have a numpy array, where each row contains a list of a int, and a python list of ints. How do I convert the lists into numpy arrays? I am working with very large arrays, and I would like to avoid using Pandas as loading it into pandas will take more memory. Sample variable:new = np.array([[0, list([4928722, 3922609, 14413953, 10103423, 8948498])], [1, list([12557217, 5572869, 13415223, 2532000, 14609022, 9830632, 9800679, 7504595, 10752682])], [2, list([10458710, 7176517,10203584, 12816205, 7484678, 7985600, 2745090, 14842579, 788308, 5984365])], [62711, list([6159359, 5003282, 11818909, 11760670])], [62712, list([4363069, 8566447, 9547966, 14554871, 2108131, 12207856, 14840255, 13087558])], [62713, list([11252023, 8710787, 4233645, 11415316, 13888594, 10860521, 1798095, 4389487, 4461271, 10070622, 12675925, 729773])]])Sample result I am looking for (some numbers may have been rearranged; I am just giving an example of how it should be structures):new2 = np.array([[0, np.array([ 4928722, 3922609, 14413953, 10103423, 8948498])], [1, np.array([12557217, 5572869, 13415223, 2532000, 14609022, 9830632, 9800679, 7504595, 10752682])], [2, np.array([10458710, 7176517, 10268240, 4173086, 8617671, 4674075, 12580461, 2434641, 3694004, 9734870, 1314108, 8879955, 6597761, 7034485, 3008940, 9816877, 1748801, 10159466, 2745090, 14842579, 788308, 5984365])], [62711, np.array([ 6159359, 5003282, 11818909, 11760670])], [62712, np.array([ 4363069, 8566447, 9547966, 14554871, 2108131, 12207856, 14840255, 13087558])], [62713, np.array([11252023, 8710787, 4233645, 11415316, 13888594, 7410770, 13672430, 6677251, 10431890, 3447966, 12675925, 729773])]] )What I tried:I tried trying to display only the list, in hopes that I can do somenew[:][1] = new[:][1].tolist() But new[:][1] doesn't display only the lists, and I couldn't figure out a way to do this. | You can use a list comprehension and convert each list to an np.array:result = np.array([[row[0], np.array(row[1])] for row in new])print(result)Output:[[0 array([ 4928722, 3922609, 14413953, 10103423, 8948498])] [1 array([12557217, 5572869, 13415223, 2532000, 14609022, 9830632, 9800679, 7504595, 10752682])] [2 array([10458710, 7176517, 10203584, 12816205, 7484678, 7985600, 2745090, 14842579, 788308, 5984365])] [62711 array([ 6159359, 5003282, 11818909, 11760670])] [62712 array([ 4363069, 8566447, 9547966, 14554871, 2108131, 12207856, 14840255, 13087558])] [62713 array([11252023, 8710787, 4233645, 11415316, 13888594, 10860521, 1798095, 4389487, 4461271, 10070622, 12675925, 729773])]] |
Write the values of a customer group into a Series I have a dataframe which has various entries of customers. These customers, which have different customer numbers, belong to certain customer groups (contract, wholesaler, tender, etc.). I have to sum some of these values of the dataframe into a Series for each customer group (e.g., total sales of contract customers would be a single entry in the Series.)I've tried using .isin() but I had an attribute error (float object has no attribute 'isin'). It works if I use the or operator, but then I would have to manually enter all customer numbers for all customer groups. I'm sure there must be a much simpler and more efficient way of doing it. Many thanks in advance. for i in range(len(grouped_sales)): if df.iloc[i,1]==value1 or df.iloc[i,1]==value2 or df.iloc[i,1]==...: series[1]=series[1]+df.iloc[i,3] elif df.iloc[1,i]==valueN or df.iloc[i,1]==value(N+1)...: series[2]=series[2]+df.iloc[1,3] elif: ... | If you want to sum the sales for every group you may want to look into pandas' df.groupby().I'm trying to reproduce what you want; it would look like this>>> df = pd.DataFrame()>>> df['cust_numb']=[1,2,3,4,5]>>> df['group']=['group1','group2','group3','group3','group1']>>> df['sales']=[50,30,50,40,20]>>> df cust_numb group sales0 1 group1 501 2 group2 302 3 group3 503 4 group3 404 5 group1 20>>> df.groupby('group').sum()['sales']groupgroup1 70group2 30group3 90Name: sales, dtype: int64You'll have a series with groups as index and the sum of the sales as valuesEDIT: Based on your comment you have the group data in a separate dictionary, the implementation would look like this>>> sales_data = {'CustomerName': ['cust1', 'cust2', 'cust3', 'cust4'],'CustomerCode': [1,2,3,4], 'Sales': [10,10,15,25], 'Risk':[55,55,45,79]} >>> sdf = pd.DataFrame.from_dict(sales_data)>>> group_data ={'group1': [1,3], 'group2': [2,4]}You want to map your customer number to the groups so you need an inverted dictionary:>>> dc = {v:k for k in group_data.keys() for v in group_data[k]}{1: 'group1', 3: 'group1', 2: 'group2', 4: 'group2'}You replace your customer number column by the group mapping in a new column and reproduce what I did above>>> sdf['groups'] = sdf.replace({'CustomerCode': dc})['CustomerCode']>>> sdf CustomerName CustomerCode Sales Risk groups0 cust1 1 10 55 group11 cust2 2 10 55 group22 cust3 3 15 45 group13 cust4 4 25 79 group2>>> sdf.groupby('groups').sum()['Sales']groupsgroup1 25group2 35Name: Sales, dtype: int64
How do I know if there is a value in another column? I have a df something like this:lst = [[30029509,37337567,41511334,41511334,41511334]]lst2 = [35619048]lst3 = [[41511334,37337567,41511334]]lst4 = [[37337567,41511334]]df = pd.DataFrame()df['0'] = lst, lst2, lst3, lst4I need to check whether '41511334' appears in every rowI do this code:df['new'] = '41511334' in str(df['0'])And I got True in every row, but that is wrong for the second row. What's wrong? Thanks | str(df['0']) gives a string representation of column 0 and so includes all the data. You will then see that '41511334' in str(df['0'])gives True, and you assign this to every row of the 'new' column. You are looking for something likedf['new'] = df['0'].apply(lambda x: '41511334' in str(x))or df['new'] = df['0'].astype(str).str.contains('41511334')
Handling string values when rounding a dataframe column I have a data-frame (df) that looks like DATE_OF_BIRTH AGE0 1974-03-28 43.00954121 NOT KNOWN NOT KNOWN2 1970-11-27 46.34198433 1974-05-09 42.89441684 1985-03-14 32.0474122I would like to round the AGE column to 3 decimal places so the desired output would look like: DATE_OF_BIRTH AGE0 1974-03-28 43.0101 NOT KNOWN NOT KNOWN2 1970-11-27 46.3423 1974-05-09 42.8944 1985-03-14 32.047 I have tried using df['AGE'] = df['AGE'].round(3) but when a string (like NOT KNOWN) is encountered then I get the error:TypeError: can't multiply sequence by non-int of type 'float' How can I handle strings when rounding a data-frame column? | I suggest convert non numeric and not datetimes values to missing values with to_datetime and to_numeric for avoid mixed types - numeric/datetimes with strings - then numeric/datetimeslike functions failed:df['DATE_OF_BIRTH'] = pd.to_datetime(df['DATE_OF_BIRTH'], errors='coerce')df['AGE'] = pd.to_numeric(df['AGE'], errors='coerce').round(3)print (df) DATE_OF_BIRTH AGE0 1974-03-28 43.0101 NaT NaN2 1970-11-27 46.3423 1974-05-09 42.8944 1985-03-14 32.047 |
Problem when using cx_Freeze: "cannot import name 'tf2'" I have a code in python and I used cx_Freeze to convert it to an .exe. This task works without any error.But when I try to run my .exe the following error happens: from tensorflow.python import tf2 ImportError: cannot import name 'tf2'My ann.py code is:import numpy as npimport sys... X_test=XinNY_test=XoutN#Criando o modelofrom keras.models import Sequentialfrom keras.layers import Densemodelo = Sequential()for i in range(int((num_par-4)/2)): modelo.add(Dense(int(parametros[i+4]), kernel_initializer='normal',activation=ativacao(int(parametros[i+5])))) #camadas ocultasmodelo.add(Dense(num_out, kernel_initializer='normal',activation=ativacao(int(parametros[num_par-1])))) #camada de saídamodelo.compile(optimizer='adam',loss='mean_squared_error')hist = modelo.fit(X_train, Y_train, epochs=800, verbose=0, batch_size=10,validation_data=(X_test, Y_test))XobsoutN=modelo.predict(XobsN)Xobsout=XobsoutN*(max_out-min_out)+min_outnp.savetxt("Xobsout.txt",Xobsout.transpose(),delimiter='\t')loss=[" "," "]loss[0] = hist.history['loss']loss[1] = hist.history['val_loss']np.savetxt("erro.txt",loss,delimiter='\t')And my setyp.py for cx_Freeze is:from cx_Freeze import setup, Executableimport sysbase = Noneif sys.platform == 'win32': base = Noneexecutables = [Executable("ANN.py", base=base)]packages = ["idna"]options = { 'build_exe': { 'includes':['atexit', 'numpy.core._methods', 'numpy.lib.format'], 'packages':packages, },}import osos.environ['TCL_LIBRARY'] = "C:\\ProgramData\\Anaconda3\\tcl\\tcl8.6"os.environ['TK_LIBRARY'] = "C:\\ProgramData\\Anaconda3\\tcl\\tk8.6"setup( name = "Nome Executavel", options = options, version = "1.0", description = 'Descricao do seu arquivo', executables = executables)Anyone can help me to solve this error?I had many others errors using cx_Freeze and this forum was pretty helpful to solve all of them. Thanks a lot! | Try to add "tensorflow" to the packages list in your setup.py script: packages = ["idna", "tensorflow"] |
Compute mean of values for each index across multiple arrays Currently I'm looking for a compact and more efficient solution (rather than multiple nested for loops) to compute mean of values given an index across multiple numpy array.Specifically given [array([2.4, 3.5, 2.9]),array([4.5, 1.8, 1.4])]I need to compute the following array:[array([3.45, 2.65, 2.15])]Any idea? Thank you all. | It's possible by just one line command with numpyimport numpy as nparr=[np.array([2.4, 3.5, 2.9]),np.array([4.5, 1.8, 1.4])]np.mean(arr, axis = 0) |
How to merge two dataframe based on time intervals and transform them I have two dataframes, first one is creating by users manually and second one is errors from machines.I want to merge them based on time interval in first dataframe(df_a)Here are the dataframes;d_a = {'Station' : ['A1','A2'], 'Reason_a' : ['Electronic','Feed'], 'StartTime_a' : ['2019-01-02 02:00:00','2019-01-02 04:22:00'], 'EndTime_a' : ['2019-01-02 02:20:00', '2019-01-02 04:45:00']}d_b = {'Station' : ['A1','A1','A1','A2','A2','A2'], 'Reason_b' : ['a','n','c','d','e','n'], 'StartTime_b' : ['2019-01-02 00:00:00.000','2019-01-02 00:05:00.000','2019-01-01 23:55:00.000','2019-01-02 04:19:53.000','2019-01-02 04:19:37.000','2019-01-02 04:23:00.000'], 'EndTime_b' : ['2019-01-02 00:19:15.000','2019-01-02 00:29:45.000','2019-01-02 00:12:12.000','2019-01-02 04:27:12.000','2019-01-02 04:47:16.000','2019-01-02 04:52:45.000']}df_a = pd.DataFrame(d_a)df_b = pd.DataFrame(d_b)Any intersection point of time intervals of two dataframes considered as valid record.condition1 = df_b start_time start after df_a start time and ends before df_a endtime condition2 = df_b start_time starts before df_a start time but ends before df_a endtimecondition3 = df_b start_timestarts between df_a starttime and df_a end time but ends after df_a endtimeIn the end I want to merge these two dataframes based on conditions. my ideal table looks like below Station Reason_a a n c d e A1 Electronic 1 1 1 0 0 A2 Feed 0 1 0 1 0How should I approach this problem?Any comment would be helpful.Thanks in advance. | I would solve it by merging the tables on station and calculating the intersections :Dimport numpy as npdf = pd.merge(df_a, df_b, on="Station")# Convert to datefor datevar in ["StartTime_a", "StartTime_b", "EndTime_a", "EndTime_b"]: df[datevar] = pd.to_datetime(df[datevar])# Intersections definitiondf["intersection"] = (((df.StartTime_a > df.StartTime_b) & (df.StartTime_a < df.EndTime_b)) | ((df.StartTime_a < df.StartTime_b) & (df.EndTime_a > df.StartTime_b)))# Filter only intersections(df[["Station", "Reason_a", "Reason_b", "intersection"]].pivot_table(index=["Station", "Reason_a"], columns="Reason_b", aggfunc=np.sum).fillna(0).astype(int)) |
Filter data in pytorch tensor I have a tensor X like [0.1, 0.5, -1.0, 0, 1.2, 0], and I want to implement a function called filter_positive() that filters the positive values into a new tensor and returns the indices into the original tensor. For example:new_tensor, index = filter_positive(X)new_tensor = [0.1, 0.5, 1.2]index = [0, 1, 4]How can I implement this function most efficiently in pytorch? | Take a look at torch.nonzero which is roughly equivalent to np.where. It translates a binary mask to indices (using a strictly positive mask here, since you also want to drop the zeros):>>> X = torch.tensor([0.1, 0.5, -1.0, 0, 1.2, 0])>>> mask = X > 0>>> masktensor([1, 1, 0, 0, 1, 0], dtype=torch.uint8)>>> indices = torch.nonzero(mask)>>> indicestensor([[0], [1], [4]])>>> X[indices]tensor([[0.1000], [0.5000], [1.2000]])A solution would then be to write:mask = X > 0new_tensor = X[mask]indices = torch.nonzero(mask)
How can I do the centercrop of 3D volumes inside the network model with pytorch In keras, there is Cropping3D layer for centercropping tensors of 3D volumnes inside the neural network. However, I failed to find out anything similar in pytorch, though they have torchvision.transforms.CenterCrop(size) for 2D images.How can I do the cropping inside the network? Otherwise I need to do it in preprocessing which is the last thing I want to do for specific reasons.Do I need to write a custom layer like slicing the input tensors along each axices? Hope to get some inspiration for this | In PyTorch you don't necessarily need to write layers for everything, often you can just do what you want directly during the forward pass. The basic rules you need to keep in mind when operating on torch tensors for which you will need to compute gradients areDon't convert torch tensors to other types for computation (e.g. use torch.sum instead of converting to numpy and using numpy.sum).Don't perform in-place operations (e.g. changing one element of a tensor or using inplace operators, so use x = x + ... instead of x += ...).That said, you can just use slicing, maybe it would look something like thisdef forward(self, x): ... x = self.conv3(x) x = x[:, :, 5:20, 5:20] # crop out part of the feature map x = self.relu3(x) ... |
How to fix 'Expected object of backend CUDA' when trying to apply_tfms in custom LearnerCallback Very new to machine learning, fastai, pytorch, and python, and I was trying to adapt a LearnerCallback to do transformations after manually modifying the images. When I start my learn.fit_one_cycle, it's immediately interrupted as shown below:I've tried sticking .to(torch.device('cuda')) everywhere I could think of #... def on_batch_begin(self, last_input, last_target, train, **kwargs): if not train: return #Get new input new_input = last_input.clone() new_target = last_target.clone() tfms = get_transforms(max_zoom=1.5) # modify the images here in some other way # apply_tfms for i in range(len(new_input)): new_input[i] = Image(new_input[i]).apply_tfms(tfms[0]).data new_target[i] = Image(new_target[i]).apply_tfms(tfms[0], do_resolve=False).data #...The 'apply_tfms' in the second to last line is the culprit in the traceback ending with: 553 m[1,0] *= w/h 554 c.flow = c.flow.view(-1,2)--> 555 c.flow = torch.addmm(m[:2,2], c.flow, m[:2,:2].t()).view(size) 556 return c 557 RuntimeError: Expected object of backend CUDA but got backend CPU for argument #4 'mat1'Is there a way I can apply the transforms within a LearnerCallback without getting that error, or an alternate method where I can get my LearnerCallback added with learn.callback_fns.append to run before apply_tfms runs the same transforms on both the modified input and target images? I need pixel information from the target image to modify the input image. I also need this process applied during training and validation.If it makes any difference, I get the same error even if I don't modify the cloned images before apply_tfms. | The problem that you have here is that some of your tensors are on CPU and some are on GPU.You have to make sure that all your tensors are on the same device (either GPU or CPU depending on the situation) to get rid of this error. If I remember correctly, fastai get_transforms will create transforms that are supposed to be applied during data loading (and so on CPU), so you might want to take a look at the source of get_transforms and adapt it so it can be called on your tensor on GPU.(last_input should be on GPU if you're training on GPU).Another solution is to apply those transforms during dataloading, in which case your data tensor will be on CPU. |
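A minimal, self-contained sketch of the device-alignment idea from the answer above (not the fastai-specific adaptation): move whatever tensor came out of a CPU-only step onto the device of the training batch before combining them, so the backends match.

import torch

def to_same_device(t, reference):
    # Move tensor t onto whatever device `reference` lives on (CPU or CUDA).
    return t.to(reference.device)

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
batch = torch.randn(4, 3, device=device)   # stands in for last_input during GPU training
cpu_result = torch.randn(4, 3)             # stands in for the output of a CPU-only transform
combined = batch + to_same_device(cpu_result, batch)  # no backend mismatch error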
Translating Pytorch program into Keras: different results I have translated a pytorch program into keras.A working Pytorch program:import numpy as npimport cv2import torchimport torch.nn as nnfrom skimage import segmentationnp.random.seed(1)torch.manual_seed(1)fi = "in.jpg"class MyNet(nn.Module): def __init__(self, n_inChannel, n_outChannel): super(MyNet, self).__init__() self.seq = nn.Sequential( nn.Conv2d(n_inChannel, n_outChannel, kernel_size=3, stride=1, padding=1), nn.ReLU(inplace=True), nn.BatchNorm2d(n_outChannel), nn.Conv2d(n_outChannel, n_outChannel, kernel_size=3, stride=1, padding=1), nn.ReLU(inplace=True), nn.BatchNorm2d(n_outChannel), nn.Conv2d(n_outChannel, n_outChannel, kernel_size=1, stride=1, padding=0), nn.BatchNorm2d(n_outChannel) ) def forward(self, x): return self.seq(x)im = cv2.imread(fi)data = torch.from_numpy(np.array([im.transpose((2, 0, 1)).astype('float32')/255.]))data = data.cuda()labels = segmentation.slic(im, compactness=100, n_segments=10000)labels = labels.flatten()u_labels = np.unique(labels)label_indexes = np.array([np.where(labels == u_label)[0] for u_label in u_labels])n_inChannel = 3n_outChannel = 100model = MyNet(n_inChannel, n_outChannel)model.cuda()model.train()loss_fn = torch.nn.CrossEntropyLoss()optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)label_colours = np.random.randint(255,size=(100,3))for batch_idx in range(100): optimizer.zero_grad() output = model( data )[ 0 ] output = output.permute( 1, 2, 0 ).view(-1, n_outChannel) ignore, target = torch.max( output, 1 ) im_target = target.data.cpu().numpy() nLabels = len(np.unique(im_target)) im_target_rgb = np.array([label_colours[ c % 100 ] for c in im_target]) # correct position of "im_target" im_target_rgb = im_target_rgb.reshape( im.shape ).astype( np.uint8 ) for inds in label_indexes: u_labels_, hist = np.unique(im_target[inds], return_counts=True) im_target[inds] = u_labels_[np.argmax(hist, 0)] target = torch.from_numpy(im_target) target = target.cuda() loss = loss_fn(output, target) loss.backward() optimizer.step() print (batch_idx, '/', 100, ':', nLabels, loss.item()) if nLabels <= 3: break fo = "out.jpg"cv2.imwrite(fo, im_target_rgb)(source: https://github.com/kanezaki/pytorch-unsupervised-segmentation/blob/master/demo.py)My translation into Keras:import cv2import numpy as npfrom skimage import segmentationfrom keras.layers import Conv2D, BatchNormalization, Input, Reshapefrom keras.models import Modelimport keras.backend as kfrom keras.optimizers import SGD, Adamfrom skimage.util import img_as_floatfrom skimage import iofrom keras.models import Sequentialnp.random.seed(0)fi = "in.jpg"im = cv2.imread(fi).astype(float)/255.labels = segmentation.slic(im, compactness=100, n_segments=10000)labels = labels.flatten()print (labels.shape)u_labels = np.unique(labels)label_indexes = [np.where(labels == u_label)[0] for u_label in np.unique(labels)]n_channels = 100model = Sequential()model.add ( Conv2D(n_channels, kernel_size=3, activation='relu', input_shape=im.shape, padding='same')) model.add( BatchNormalization())model.add( Conv2D(n_channels, kernel_size=3, activation='relu', padding='same')) model.add( BatchNormalization())model.add( Conv2D(n_channels, kernel_size=1, padding='same'))model.add( BatchNormalization())model.add( Reshape((im.shape[0] * im.shape[1], n_channels)))img = np.expand_dims(im,0)print (img.shape)output = model.predict(img)print (output.shape)im_target = np.argmax(output[0], 1)print (im_target.shape)for inds in label_indexes: u_labels_, hist = 
np.unique(im_target[inds], return_counts=True) im_target[inds] = u_labels_[np.argmax(hist, 0)]def custom_loss(loss_target, loss_output): return k.categorical_crossentropy(target=k.stack(loss_target), output=k.stack(loss_output), from_logits=True)model.compile(optimizer=SGD(lr=0.1, momentum=0.9), loss=custom_loss)model.fit(img, output, epochs=100, batch_size=1, verbose=1)pred_result = model.predict(x=[img])[0]print (pred_result.shape)target = np.argmax(pred_result, 1)print (target.shape)nLabels = len(np.unique(target))label_colours = np.random.randint(255, size=(100, 3))im_target_rgb = np.array([label_colours[c % 100] for c in im_target])im_target_rgb = im_target_rgb.reshape(im.shape).astype(np.uint8)cv2.imwrite("out.jpg", im_target_rgb)However, Keras output is really different than of pytorchInput image:Pytorch result:Keras result:Could someone help me for this translation? Edit 1:I corrected two errors as advised by @sebrockm1. removed `relu` from last conv layer2. added `from_logits = True` in the loss functionAlso, changed the no. of conv layers from 4 to 3 to match with the original code. However, output image did not improve than before and the `loss` are resulted in negative:Epoch 99/1001/1 [==============================] - 0s 92ms/step - loss: -22.8380Epoch 100/100 1/1 [==============================] - 0s 99ms/step - loss: -23.039I think that the Keras code lacks connection between model and output. However, could not figure out to make this connection. | Two major mistakes that I see (likely related):The last convolutional layer in the original model does not have an activation function, while your translation uses relu.The original model uses CrossEntropyLoss as loss function, while your model uses categorical_crossentropy with logits=False (a default argument). Without mathematical background the difference is tricky to explain, but in short: CrossEntropyLoss has a softmax built in, that's why the model doesn't have one on the last layer. To do the same in keras, use k.categorical_crossentropy(..., logits=True). "logits" means the input values are expected not to be "softmaxed", i.e. all values can be arbitrary. Currently, your loss function expects the output values to be "softmaxed", i.e. all values must be between 0 and 1 (and sum up to 1).Update:One other mistake, likely a huge one: In Keras, you calculate the output once in the beginning and never change it from there on. Then you train your model to fit on this initially generated output.In the original pytorch code, target (which is the variable being trained on) gets updated in every training loop. So, you cannot use Keras' fit method which is designed for doing the entire training for you (given fixed training data). You will have to replicate the training loop manually, just as it is done in the pytorch code. I'm not sure if this is easily doable with the API Keras provides. train_on_batch is one method you surely will need in your manual loop. You will have to do some more work, I'm afraid... |
How to count how many times every year shows up in my dataset in python wondering if anyone can help me.I have a dataset with the column "created_at" which has rows like this data = pd.read_csv("dataset.csv")col = data["created_at"]print(col.head())print(col.tail())0 2014-06-01 21:03:161 2014-06-01 09:06:482 2014-06-01 00:31:523 2014-06-04 10:04:474 2014-06-04 10:05:40Name: created_at, dtype: object380064 2019-05-31 23:49:39380065 2019-05-31 23:52:34380066 2019-05-31 23:27:28380067 2019-05-31 14:01:31380068 2019-05-31 12:30:33Name: created_at, dtype: objectI'm trying to count how many times each year appears so how many times does the year 2014 appear and 2015 and so on. I've tried counters and for loops but I just can't seem to get it to work. If anyone can help, would be greatly appreciated | First convert your column into datetime type because I see that it is in object type:data['created_at'] = pd.to_datetime(data['created_at'])Now extract the year part using dt:data['year'] = data['created_at'].dt.yearFinally, do the count using value_counts:data.year.value_counts()Sample Output:data.year.value_counts()Out[142]: 2014 32015 2Name: year, dtype: int64 |
How to Convert Datetime to String in Python? I want to make a line chart by this code :df = pd.DataFrame.from_dict({ 'sentencess' : sentencess, 'publishedAts' : publishedAts, 'hasil_sentimens' : hasil_sentimens })df.to_csv('chart.csv')df['publishedAts'] = pd.to_datetime(df['publishedAts'], errors='coerce')by_day_sentiment = df.groupby([pd.Grouper(key='publishedAts',freq='D'),'hasil_sentimens']).size().unstack('hasil_sentimens')sentiment_dict = by_day_sentiment.to_dict('dict')and the output from sentiment_dict is{'Negatif ': {Timestamp('2019-08-26 00:00:00', freq='D'): 2.0, Timestamp('2019-08-27 00:00:00', freq='D'): 4.0, Timestamp('2019-08-28 00:00:00', freq='D'): 2.0, Timestamp('2019-08-29 00:00:00', freq='D'): 3.0}, 'Netral ': {Timestamp('2019-08-26 00:00:00', freq='D'): 1.0, Timestamp('2019-08-27 00:00:00', freq='D'): 3.0, Timestamp('2019-08-28 00:00:00', freq='D'): 1.0, Timestamp('2019-08-29 00:00:00', freq='D'): 3.0}, 'Positif ': {Timestamp('2019-08-26 00:00:00', freq='D'): nan, Timestamp('2019-08-27 00:00:00', freq='D'): nan, Timestamp('2019-08-28 00:00:00', freq='D'): nan, Timestamp('2019-08-29 00:00:00', freq='D'): 1.0}}From that sentiment_dict, how to make a new dict but the key (which is now datetime) is changed to a string? | Use strftime('%Y-%m-%d %H:%M:%S')Ex:from pandas import Timestampfrom numpy import nandata = {'Negatif ': {Timestamp('2019-08-26 00:00:00', freq='D'): 2.0, Timestamp('2019-08-27 00:00:00', freq='D'): 4.0, Timestamp('2019-08-28 00:00:00', freq='D'): 2.0, Timestamp('2019-08-29 00:00:00', freq='D'): 3.0}, 'Netral ': {Timestamp('2019-08-26 00:00:00', freq='D'): 1.0, Timestamp('2019-08-27 00:00:00', freq='D'): 3.0, Timestamp('2019-08-28 00:00:00', freq='D'): 1.0, Timestamp('2019-08-29 00:00:00', freq='D'): 3.0}, 'Positif ': {Timestamp('2019-08-26 00:00:00', freq='D'): nan, Timestamp('2019-08-27 00:00:00', freq='D'): nan, Timestamp('2019-08-28 00:00:00', freq='D'): nan, Timestamp('2019-08-29 00:00:00', freq='D'): 1.0}}print({k: {m.strftime('%Y-%m-%d %H:%M:%S'): v for m, v in v.items()} for k, v in data.items()})Output:{'Negatif ': {'2019-08-26 00:00:00': 2.0, '2019-08-27 00:00:00': 4.0, '2019-08-28 00:00:00': 2.0, '2019-08-29 00:00:00': 3.0}, 'Netral ': {'2019-08-26 00:00:00': 1.0, '2019-08-27 00:00:00': 3.0, '2019-08-28 00:00:00': 1.0, '2019-08-29 00:00:00': 3.0}, 'Positif ': {'2019-08-26 00:00:00': nan, '2019-08-27 00:00:00': nan, '2019-08-28 00:00:00': nan, '2019-08-29 00:00:00': 1.0}} |
Reading sas7bdat as pandas dataframe from zipfile I have a zip file called myfile.zip, which contains a file mysasfile.sas7bdat, which I would like to read as a pandas dataframe. I've tried a few things which haven't worked, but here is my current methodology: import zipfilezipfile = zipfile.ZipFile('myfile.zip', 'r')sasfile = zipfile.open('mysasfile.sas7bdat')df = pd.read_sas(sasfile)Error: ---------------------------------------------------------------------------ValueError Traceback (most recent call last)<ipython-input-82-6d55436287b5> in <module>() 3 imgfile = archive.open('curated_dataset_preview.sas7bdat') 4 ----> 5 df = pd.read_sas(imgfile)/opt/python/python35/lib/python3.5/site-packages/pandas/io/sas/sasreader.py in read_sas(filepath_or_buffer, format, index, encoding, chunksize, iterator) 38 filepath_or_buffer = _stringify_path(filepath_or_buffer) 39 if not isinstance(filepath_or_buffer, compat.string_types):---> 40 raise ValueError(buffer_error_msg) 41 try: 42 fname = filepath_or_buffer.lower()ValueError: If this is a buffer object rather than a string name, you must specify a format string | You are missing the parameter formatimport zipfilezipfile = zipfile.ZipFile('myfile.zip', 'r')sasfile = zipfile.open('mysasfile.sas7bdat')df = pd.read_sas(sasfile, format='sas7bdat') |
How to fix 'RuntimeError: Address already in use' in PyTorch? I am trying to run a distributed application with the PyTorch distributed trainer. I thought I would first try the example they have, found here. I set up two AWS EC2 instances and configured them according to the description in the link, but when I try to run the code I get two different errors: in the first terminal window for node0 I get the error message: RuntimeError: Address already in useUnder the other three windows I get the same error message: RuntimeError: NCCL error in: /pytorch/torch/lib/c10d/ProcessGroupNCCL.cpp:272, unhandled system errorI followed the code in the link, and terminated the instances and redid the setup, but it didn't help. This is using python 3.6 with the nightly build and CUDA 9.0. I tried changing the MASTER_ADDR to the ip for node0 on both nodes, as well as using the same MASTER_PORT (which is an available, unused port). However I still get the same error message. After running this, my goal is to adjust this StyleGAN implementation so that I can train it across multiple GPUs on two different nodes. | So after a lot of failed attempts I found out what the problem is. Note that this solution applies to using AWS deep learning instances. After creating two instances I had to adjust the security group. Add two rules: The first rule should be ALL_TCP, and set the source to the Private IPs of the leader. The second rule should be the same (ALL_TCP), but with the source as the Private IPs of the slave node. Previously, I had the security group rule set as type SSH, which only has a single available port (22). For some reason I was not able to use this port to allow the nodes to communicate. After changing these settings the code worked fine. I was also able to run this with the above-mentioned settings.
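As a hedged complement to the security-group fix above: before launching the distributed job, you can sanity-check from the worker node that the master's address and port are actually reachable. The IP and port below are placeholders for whatever you export as MASTER_ADDR and MASTER_PORT.

import socket

master_addr = '172.31.0.10'   # placeholder: the private IP you use as MASTER_ADDR
master_port = 29500           # placeholder: the port you use as MASTER_PORT

try:
    # Something must already be listening on the master for this to succeed,
    # e.g. the rank-0 process waiting in init_process_group.
    with socket.create_connection((master_addr, master_port), timeout=5):
        print('master is reachable')
except OSError as err:
    print('not reachable (blocked port, wrong address, or nothing listening):', err)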
How to flatten nest Json data with json_normalize I'm trying to import JSON data to Dataframe via json_normalize but cannot get it to work.My data:a key is same as c1 key[ { "a": "A1", "b": "B1", "c": [ { "c1": "C111", "c2": "C121", "c3": ["C1131","C1132"] } ] }, { "a": "A2", "b": "B2", "c": [ { "c1": "C211", "c2": "C212", "c3": ["C2131","C2132"] }, { "c1": "C221", "c2": "C222", "c3": ["C2231"] } ] }]I want to make a DataFrame like a c1(a) c2 c30 A1 C111 C121 ["C1131","C1132"]1 A2 C211 C212 ["C2131","C2132"]2 A2 C221 C222 ["C2231"]When I use json_normalize it shows ValueError:entity_df = json_normalize(data, 'c', 'a')ValueError: Conflicting metadata name a, need distinguishing prefix How should I change the json_normalize parameters?Any help will be appreciated. | you can try: from collections import defaultdictnorm_data = defaultdict(list)for item in data: for element in item['c']: norm_data['a'].append(item['a']) for k, v in element.items(): if k in {'a', 'c1'}: norm_data['c1(a)'].append(v) else: norm_data[k].append(v)pd.DataFrame(norm_data) |
How to load Multiple headers into Pandas dataframe there and thank you for spending your time on my question!I'm using python 3.7 + pandas to load .xlsx file with multiple columns into a dataframe.My Input:https://imgur.com/i7I9xAd . Desired Output (example for the first row only):https://imgur.com/uDHEFMKimport pandas as pddf = pd.read_excel('file.xlsx', header=[2,3,4])print(df.columns)It returns 'Unnamed 1', 'Unnamed 2' etc as main level header is merged.I'd appreciate any help.Have a great day! | You need to define the index column for this particular excel file:df = pd.read_excel('file.xlsx', header=[0, 1, 2], index_col=0) |
Is it possible to set a random number generator seed to get reproducible training? I would like to re-run training with fewer epochs to stop with the same state it had at that point in the earlier training.I see that tf.initializers take a seed argument. tf.layers.dropout does as well but 1.2.7 reports "Error: Non-default seed is not implemented in Dropout layer yet: 1". But even without dropout are there other sources of randomness? And can those be provided with a seed? | You can get reproducible training by setting the weights' initial values. These initial values are randomly generated at the beginning of the training. To set the value of the weights, the kernelInitializer property of the layer configuration can be used.Another way to set the weights is to call setWeights on the model, passing the weight values as arguments.Also, shuffle in the model.fit parameters is set to true by default. It has to be set to false to prevent the training data from being shuffled at each epoch.
How to fix numpy.random.choice output nested inside a for loop when importing from python file? Problem:I am trying to generate multiple lists of random numbers, of differing length. Inside a for loop of length num_baskets, I am using np.random.choice to generate a number for the length of each successive list n. Then, again, np.random.choice generates a list of random numbers of length n. Expected output:List of length num_baskets, with each item in the list another list of random length n.Current output:List of length num_baskets but with uniform sublist lengths. However, each time I run the function, sublist length is different (but still uniform) per call.What I have tried:When I import the function from a python file, (e.g. from python_file import create_baskets there is deviation from expected output. Despite redefining n in each loop, all the output lists have the same length. However, when I copy and paste the function and define it inside a jupyter notebook, I get the expected output with different list lengths. My code:import numpy as npdef create_baskets(num_baskets, max_basket_size, unique_items): """ Create list of baskets of variable length >= 3 Parameters ---------- num_baskets: number of baskets (sub-lists) max_basket_size: maximum basket size Baskets will be of size range(3, max_basket_size) unique_items: number of unique items Returns ------- ret: list of "baskets", of randomly generated length """ baskets = [] for i in range(num_baskets): n = np.random.choice(range(3, max_basket_size), 1) basket = np.random.choice(range(0, unique_items), n, replace=False) baskets.append(basket) return basketsI am not sure if there is something that fundamentally wrong/improveable with how the function is written or if it is a problem with the import. | I restarted the Jupyter kernel such that the necessary changes that had been made to the python file code were now reflected on import. Despite re-importing the module multiple times within the notebook, this was rather an issue with Jupyter not updating changes made to the python file and saved in my text editor. |
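As a hedged alternative to restarting the kernel: reloading the module makes Jupyter pick up edits that were saved to the .py file after the first import. A small sketch, assuming the module is named python_file as in the question.

import importlib
import python_file                            # the module containing create_baskets

importlib.reload(python_file)                 # re-execute the module with the latest saved source
create_baskets = python_file.create_baskets   # re-bind the name after reloading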
Keras custom layer and custom loss function - need to preserve state Background: Text summarization using extractive method.The article I'm following - link.Edit 1 link to colabThe last layer in my network does classification using feature extraction from several inputs. Inputs: (? meaning batch size)d = document_embeddings shape = (?, 400)s = sentence_embeddings shape = (?, 10, 400)(explanation - 10 sentences per document)h_state = h_state of the LSTM that produced the document_embeddings of shape (?, 10, 400) (explanation - 10 is the timestamps in the LSTM corresponding to the 10 sentences in each document, 400 is the size)Outputs:1/0 per sentence so shape is (10,1)In the last layer I use those inputs to compute features:C_j = Wc * s_jM_j = s_j.T * W_s * dN_j = s_j.T * W_r * tanh(o_j), P_j = W_p * h_state O_j is the summary representation of the document. and is calculated by summing the multiplication of each sentence_embeddings so far by it's probability to be in the summary.for i in range(j-1): sum += S_i * prob_in_summary(S_i) This prob_in_summary for sentence i is computed by:sigmoid(C_i + M_i - N_j + P_j + b)Now. The loss function to minimise of the entire model is the negative log-likelihood of the observed labels (pseudo code) loss(Wieghts, bias) = for doc.. for sentence.. sent_label * log(prob(sent_label == 1 | S_emb, O_j, D_emb)) + (1-sent_label) * log(1-prob(sent_label==1 | S_emb, O_j, D_emb))My questions are:I do not know where to enter this loss function probability calculation given keras. How do I define the label if what I get is probability out of the sigmoid? I need something like "if prob>0.7 decide 1 else 0"Where do I compute O_j per sentence? I need to preserve some sort of state inside the layer.. but what I get to the layer is matrix of sentences and not one by one...My code so far:Custom layer:class MyLayer(Layer): def __init__(self, output_dim, **kwargs): self.output_dim = output_dim super(MyLayer, self).__init__(**kwargs) def build(self, input_shape): assert isinstance(input_shape, list) self.W_c = self.add_weight(name='W_c', shape=(1,), initializer='uniform',trainable=True) self.W_s = self.add_weight(name='W_s', shape=(1,), initializer='uniform',trainable=True) # self.W_r = self.add_weight(name='W_r', shape=(1,), initializer='uniform',trainable=True) self.W_p = self.add_weight(name='W_p', shape=(1,), initializer='uniform',trainable=True) # self.bias = self.add_weight(name='bias', shape=(1,), initializer='uniform',trainable=True) super(MyLayer, self).build(input_shape) # Be sure to call this at the end def call(self, x): assert isinstance(x, list) document_embedding, sentences_embeddings_stacked, state_h = x content_richness = self.W_c * sentences_embeddings_stacked print("content_richness", content_richness.shape) print("sentences_embeddings_stacked", sentences_embeddings_stacked.shape) print("document_embedding", document_embedding.shape) print("document_embedding_repeat", K.repeat(document_embedding, 10).shape) novelty = sentences_embeddings_stacked * self.W_s # TODO transpose, * K.repeat(document_embedding, 10) print("novelty", novelty.shape) print("state_h", state_h.shape) position = self.W_p * state_h print("position", position.shape) return content_richness def compute_output_shape(self, input_shape): assert isinstance(input_shape, list) shape_a, shape_b, shape_c = input_shape # TODO what to put here? 
needs to be (?,10,1) or (?, 10) because 1/0 for each sentence in doc and there are 10 sentences return [(shape_a[0], self.output_dim), shape_b[:-1]]Custom loss:Do I need custom loss? or is there negative log-likelihood of the observed labeled in keras?How do I compute y_pred inside model given the function to compute prob_in_sentence (where do I put it and where and how I implement the for loops? | solved. had to treat batch size in side my custom layer. also some stacking and splitting.class MyLayer(Layer): def __init__(self, output_dim, **kwargs): self.output_dim = output_dim super(MyLayer, self).__init__(**kwargs) def build(self, input_shape): # Create a trainable weight variable for this layer. self.W_p = self.add_weight(name='W_p', shape=(400,), initializer='uniform', trainable=True) self.W_c = self.add_weight(name='W_c', shape=(400,), initializer='uniform', trainable=True) self.W_s = self.add_weight(name='W_s', shape=(400,), initializer='uniform', trainable=True) self.W_r = self.add_weight(name='W_r', shape=(400,), initializer='uniform', trainable=True) super(MyLayer, self).build(input_shape) # Be sure to call this at the end def call(self, x): def compute_sentence_features(d, sentences_embeddings_stacked, p_j, j, sentences_probs): s = sentences_embeddings_stacked[:, j] c = s * self.W_c m = s * self.W_s * d # missing transpose o = 0 if j == 0: o = sentences_embeddings_stacked[:, 0] * 0.5 else: for i in range(0, j): o += sentences_embeddings_stacked[:, i] * sentences_probs[i] n = s * self.W_r * K.tanh(o) # missing transpose p = self.W_p * p_j return c, m, n, p, o def compute_sentence_prob(features): c, m, n, p = features sentece_prob = K.sigmoid(c + m - n + p) return sentece_prob document_embedding, sentences_embeddings_stacked, doc_lstm = x O = [] sentences_probs = [] for j in range(0, 9): c, m, n, p, o = compute_sentence_features(document_embedding, sentences_embeddings_stacked, doc_lstm[:, j], j, sentences_probs) print("c,m,n,p,o", c, m, n, p, o) sentences_probs.append(compute_sentence_prob((c, m, n, p))) O.append(o) sentences_probs_stacked = tf.stack(sentences_probs, axis=1) dense4output10= Dense(10, input_shape=(400,))(K.sum(sentences_probs_stacked, axis=1)) output = K.softmax(dense4output10) # missing bias print("output", output) return output def compute_output_shape(self, input_shape): return input_shape[0][0], self.output_dim |
Python 3.7 Indentation error when importing pandas as pd I am simply running import pandas as pd to import pandas.I am getting an indentation error which I am unable to understand.I have updated everything using Anaconda.I have attempted to import pandas in Spyder and Jupyter Notebookmy error message:import pandas as pdTraceback (most recent call last): File "C:\Users\g\AppData\Local\Continuum\anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 3296, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-2-3f7aa48ad27f>", line 3, in <module> import pandas as pd File "C:\Users\g\AppData\Local\Continuum\anaconda3\lib\site-packages\pandas\__init__.py", line 49, in <module> from pandas.io.api import * File "C:\Users\g\AppData\Local\Continuum\anaconda3\lib\site-packages\pandas\io\api.py", line 8, in <module> from pandas.io.excel import ExcelFile, ExcelWriter, read_excel File "C:\Users\g\AppData\Local\Continuum\anaconda3\lib\site-packages\pandas\io\excel.py", line 34, in <module> from pandas.io.parsers import TextParser File "C:\Users\g\AppData\Local\Continuum\anaconda3\lib\site-packages\pandas\io\parsers.py", line 1122 L self._engine = CParserWrapper(self.f, **self.options) ^IndentationError: expected an indented block | Reinstall pandas. I'd imagine the file has been edited somehow, introducing that indentation error. |
How do they know the mean and std, the input values of transforms.Normalize? The question is about the data loading tutorial from the PyTorch website. I don't know how they got the values of mean_pix and std_pix passed to transforms.Normalize without any calculation.I'm unable to find any explanation relevant to this question on StackOverflow.import torchfrom torchvision import transforms, datasetsdata_transform = transforms.Compose([ transforms.RandomSizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) ])hymenoptera_dataset = datasets.ImageFolder(root='hymenoptera_data/train', transform=data_transform)dataset_loader = torch.utils.data.DataLoader(hymenoptera_dataset, batch_size=4, shuffle=True, num_workers=4)The values mean=[0.485,0.456, 0.406] and std=[0.229, 0.224, 0.225] are not obvious to me. How do they get them? And why are they equal to these? | For normalization input[channel] = (input[channel] - mean[channel]) / std[channel], the mean and standard deviation values are to be taken from the training dataset.Here, mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225] are the mean and std of the ImageNet dataset. On ImageNet, we’ve done a pass on the dataset and calculated per-channel mean/std. check hereThe pre-trained models available in torchvision for transfer learning were pretrained on ImageNet, so using its mean and std deviation would be fine for fine-tuning your model.If you're trying to train your model from scratch, it would be better to use the mean and std deviation of your own training dataset. Other than that, in most of the cases, the mean and std of ImageNet suffice for your problem.
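If you do want to compute the statistics for your own training set instead of reusing the ImageNet values, here is a rough sketch (assuming the hymenoptera_data layout from the tutorial; images are resized so each one contributes the same number of pixels).

import torch
from torchvision import datasets, transforms

stat_dataset = datasets.ImageFolder(
    root='hymenoptera_data/train',
    transform=transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()]))
stat_loader = torch.utils.data.DataLoader(stat_dataset, batch_size=64, num_workers=4)

n_images = 0
channel_sum = torch.zeros(3)
channel_sq_sum = torch.zeros(3)
for images, _ in stat_loader:
    b = images.size(0)
    images = images.view(b, 3, -1)                        # flatten height and width
    channel_sum += images.mean(dim=2).sum(dim=0)          # per-image channel means, summed
    channel_sq_sum += (images ** 2).mean(dim=2).sum(dim=0)
    n_images += b

mean = channel_sum / n_images
std = (channel_sq_sum / n_images - mean ** 2).sqrt()
print(mean, std)   # values you could pass to transforms.Normalize(mean=..., std=...)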
How to aggregate certain elements in a list of dictionaries I have a list of dictionaries (state:(score, type)) list1 and would like to aggregate the states within each dictionary of list1 based on list2.import pandas as pdlist1 = [{'NY':(40, 'EQ'), 'NJ':(30, 'EQ'), 'CT':(10, 'EQ'),'FL':(30, 'FI'), 'IL':(60, 'AI')}, {'NY':(40, 'EQ'), 'NJ':(50, 'EQ'), 'GA':(10, 'RE'), 'CA':(20, 'HA')}] list2 = ['NY', 'NJ', 'CT']For the first list1 element, aggregate 'NY', 'NJ', and 'CT'. For the second list1 element, aggregate 'NY' and 'NJ'. So that the expected output after aggregation is:list1 = [{'NY':(80, 'EQ'),'FL':(30, 'FI'), 'IL':(60, 'AI')}, {'NY':(90, 'EQ'), 'GA':(10, 'RE'), 'CA':(20, 'HA')}] Thanks. | Try thisdf=pd.DataFrame(list1)a=df.loc[:, list2].sum(axis=1).reset_index(name='s').drop('index', 1)df.loc[:, 'NY'] = a['s']df.drop(['NJ','CT'], axis = 1,inplace=True)list2=df.apply(lambda x : x.dropna().to_dict(),axis=1).tolist()print(list2)
Filter dataframe based on string within column So for simplicity purposes since my data set is very large, let's say I have a dataframe:df = pd.DataFrame([['Foo', 'Foo1'], ['Bar', 'Bar2'], ['FooBar', 'FooBar3']],columns= ['Col_A', 'Col_B'])I need to filter this dataframe in a way that would eliminate an entire row when a specified column row contains a partial, non case sensitive string (foo). In this case, I tried this to no avail...PS, my regex skills are trash so forgive me if it's not working for that reason. df = df[df['Col_A'] != '^[Ff][Oo][Oo].*']Due to the size of my dataset, efficiency is a concern which is why I have not opted for the iteration route. Thanks in advance. | Use str.matchdf[~df['Col_A'].str.match('^[Ff][Oo][Oo].*')]result Col_A Col_B1 Bar Bar2 |
How to read files written by Spark with pandas? When Spark writes dataframe data to a parquet file, Spark will create a directory which includes several separate parquet files. Code for saving:term_freq_df.write .mode("overwrite") .option("header", "true") .parquet("dir/to/save/to")I need to read data from this directory with pandas:term_freq_df = pd.read_parquet("dir/to/save/to") The error:IsADirectoryError: [Errno 21] Is a directory: Is there a simple way to resolve this so that both code samples can use the same path? | Normally, pandas.read_parquet can handle reading a directory of multiple (partitioned) parquet files fine. So I am curious to see the full error traceback you get.To demo that this works fine:In [82]: pd.__version__ Out[82]: '0.25.0'In [83]: df = pd.DataFrame({'A': ['a', 'b']*2, 'B':[1, 2, 3, 4]})In [85]: df.to_parquet("test_directory", partition_cols=['A'])This created a "test_directory" folder with multiple parquet files. I can read those back in using pandas:In [87]: pd.read_parquet("test_directory/")Out[87]: B A0 1 a1 3 a2 2 b3 4 b
"DataFrame" is not callable It seems to be a recurrent problem on the site but i was not able to understand any of the similar problems/topics. I'm trying to get a scatter matrix from pandas (pandas.plotting.scatter_matrix), but I get the error DataFrame is not callable.Sorry to bother you, the error is maybe obvious but I'm not able to deal with it.I'm not very familiar with pandas.#Data_set is data from load_iris from sklearn.datasets, it is a bunch and it #has 5 keys : 'features_names','target_names','target','DESCR', 'data'iris_df = pd.DataFrame(Data_set['data'], columns=Data_set['feature_names'])iris_df['species'] = Data_set['target']pd.plotting.scatter_matrix(iris_df, alpha=0.2, figsize=(10, 10))plt.show()I just want to print the scatter matrix of my data and I get the error DataFrame is not callable and I'm not able to understand why. | I can get the scatter_matrix without any problems using the following code:from sklearn import datasetsimport pandas as pdimport matplotlib.pyplot as pltimport seaborn as snssns.set()pal = sns.color_palette("cubehelix", 8)sns.set_palette(pal)Data_set = datasets.load_iris()iris_df = pd.DataFrame(Data_set['data'], columns=Data_set['feature_names'])iris_df['species'] = Data_set['target']pd.plotting.scatter_matrix(iris_df, alpha=0.2, figsize=(10, 10))plt.show()There's a possibility you haven't read in the data set correctly. Check the contents of your Data_set. |
cifar100 with MobiletNetV2 I am trying to train MobileNetV2 on CIFAR100 using keras.applicationsHere is my code:(x_train,y_train),(x_test,y_test) = tf.keras.datasets.cifar100.load_data(label_mode='fine')x_test = x_test.astype("float32")x_train = x_train.astype("float32")x_test /=255x_train /=255y_test = tf.keras.utils.to_categorical(y_test,100)y_train = tf.keras.utils.to_categorical(y_train,100)model = MobileNetV2(input_shape=(32,32,3), alpha=1.0, include_top=True, weights=None, classes=100)epochs = 200batch_size = 64print('Using real-time data augmentation.') # This will do preprocessing and realtime data augmentation:datagen = ImageDataGenerator( featurewise_center=False, # set input mean to 0 over the dataset samplewise_center=False, # set each sample mean to 0 featurewise_std_normalization=False, # divide inputs by std of the dataset samplewise_std_normalization=False, # divide each input by its std zca_whitening=False, # apply ZCA whitening zca_epsilon=1e-06, # epsilon for ZCA whitening rotation_range=0, # randomly rotate images in the range (degrees, 0 to 180) # randomly shift images horizontally (fraction of total width) width_shift_range=0.1, # randomly shift images vertically (fraction of total height) height_shift_range=0.1, shear_range=0., # set range for random shear zoom_range=0., # set range for random zoom channel_shift_range=0., # set range for random channel shifts # set mode for filling points outside the input boundaries fill_mode='nearest', cval=0., # value used for fill_mode = "constant" horizontal_flip=True, # randomly flip images vertical_flip=False, # randomly flip images # set rescaling factor (applied before any other transformation) rescale=None, # set function that will be applied on each input preprocessing_function=None, # image data format, either "channels_first" or "channels_last" data_format=None, # fraction of images reserved for validation (strictly between 0 and 1) validation_split=0.0)datagen.fit(x_train)model.compile(optimizer='adam', loss=tf.keras.losses.categorical_crossentropy, metrics=['acc'])history = model.fit_generator(datagen.flow(x_train, y_train,batch_size=batch_size), epochs=epochs, validation_data=(x_test, y_test) )The issue is with the validation accuracy, after 200 epochs the acc is almost 40%. I tried to fine_tune the optimizer/loss params but still the same. My guess is the dim of the input is too small for the model as the default is 224*224, however according to the documentation you could use whatever you want!Any advice? (I do not want to change the dim of cifar100 to 224*224 because of some assumptions related to this experiment)! | Some things that come to my mind...Check the data augmentation pipeline (datagen). It may be distorting the input too much and hence the model may be learning weird stuff instead of learning to classify the imagesCheck also the training accuracy... Is it better than the validation? by how much? For me, 32x32 is small but I think you should even though get higher accuracy... |
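To check the answer's second point (how training accuracy compares to validation accuracy), the history object returned by fit_generator above already holds both curves; a short sketch, assuming the key names produced by metrics=['acc'] in the code above.

import matplotlib.pyplot as plt

plt.plot(history.history['acc'], label='train accuracy')
plt.plot(history.history['val_acc'], label='validation accuracy')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.legend()
plt.show()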
How to split concatenated column name into separate columns? In order to perform an analysis, I have been provided with a column name which contains specific information about the product, market and distribution. The structure of the dataset is as follows:Date Product1|CBA|MKD Product1|CPA|MKD Product1|CBA|IHR Product2|CBA|IHR2018-11 12 23 0 2There are a lot of unique column combinations. What I would like to do is to get the following structure:Date Product Partner Market Quantity2020-1 Product1 CBA MKD 112020-1 Product1 CPA MKD 222020-1 Product1 CBA IHR 02020-1 Product2 CBA IHR 1So, I want to create 3 different columns and populate them with pasted values from the column name. The quantity column would obviously contain the value of the old concatenated column (that bit I know how to do), the issue is getting the first 3 columns. I have tried to do this in pandas by matching strings but I am really stuck. I'd appreciate some help, thank you! | It looks like you could use pandas.meltdf_ = df.melt(id_vars = 'Date', value_name = 'Quantity')df_[['Product', 'Partner','Market']] = df_.variable.str.split('|', expand = True)\ .dropna(axis = 1) df_.pop('variable')df_Out[67]: Date Quantity Product Partner Market0 2018-11 12 Product1 CBA MKD1 2018-11 23 Product1 CPA MKD2 2018-11 0 Product1 CBA IHR3 2018-11 2 Product2 CBA IHR |
Using TimeSeriesSplit in RandomSearchCV I want to use TimeSeriesSplit in RandomSearchCV.Look at the example below.X = np.array([[1, 2], [3, 4], [1, 2], [3, 4], [1, 2], [3, 4]])df = pd.DataFrame(X, columns = ['one', 'two'])df.index = [0,0,0,1,1,2]df one two0 1 20 3 40 1 21 3 41 1 22 3 4Say I want to split X such that:In the first split, train set corresponds to rows with index 0,0,0 and validation set are rows with indices 1,1In the second split, train set are rows with index 0,0,0,1,1 and validation set rows with index 2I tried using TimeSeriesSplit with n_splits = 2 but could not get the result I wanted.tscv = TimeSeriesSplit(n_splits=2)for train_index, test_index in tscv.split(df.index): print(df.index[train_index], df.index[test_index])Int64Index([0, 0], dtype='int64') Int64Index([0, 1], dtype='int64')Int64Index([0, 0, 0, 1], dtype='int64') Int64Index([1, 2], dtype='int64')P.S: If not TimeSeriesSplit can I use PredefinedSplit? | If you want to filter the rows based on index, you can use loc method from DataFrames:For example for your initial data split you have:>>> df.loc[[0]] # train set one two0 1 20 3 40 1 2>>> df.loc[[1]] # validation set one two1 3 41 1 2For the second split you have:>>> df.loc[[0,1]] # train set one two0 1 20 3 40 1 21 3 41 1 2>>> df.loc[[2]] # validation set one two2 3 4 |
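If TimeSeriesSplit cannot produce exactly these splits, note that the cv argument of RandomizedSearchCV also accepts a plain iterable of (train_indices, test_indices) pairs, so the desired splits can be spelled out explicitly. A sketch using the example df above; estimator, param_distributions and y are placeholders.

import numpy as np

positions = np.arange(len(df))
groups = df.index.values                       # [0, 0, 0, 1, 1, 2]
custom_cv = [
    (positions[groups == 0], positions[groups == 1]),              # train on index 0, validate on 1
    (positions[np.isin(groups, [0, 1])], positions[groups == 2]),  # train on 0 and 1, validate on 2
]
# from sklearn.model_selection import RandomizedSearchCV
# search = RandomizedSearchCV(estimator, param_distributions, cv=custom_cv)
# search.fit(df[['one', 'two']], y)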
How to make N random choices for each value in a 3D numpy array without using loops I have:cats, an array of 10 categories with shape (10,)probs, an array of probabilities with shape (10, 50), representing the chance of each category being chosen for 50 different variablesn_choices, an array with shape (num_sims, 50) containing integers representing the number of categories to choose with replacement for each variable. For example, this could be 0 choices for variable 1, 33 for variable 2 etcsims, an array filled with zeros with shape (num_sims, 50, 10), which will later be populated with resultsWhat I am trying to do is as follows:For each row in the array (representing one simulation), and each variable in that row, make N choices from 'cats', where N equals the corresponding value in 'n_choices'Once the choices are made, add 1 to 'sims' for each time the category was chosen. In other words, I want to allocate out the values from 'n_choices' over the 10 categories based on 'probs', and save the results to 'sims'Currently I have managed to get this working using loops, as you can see below. This is fine for a small number of sims, but in practice num_sims will be in the thousands, which means my code is far too slow.def allocate_N(N, var_index): """Make N choices from cats for a given variable, and return the incides of each category var_index is the position of the variable in n_choices""" allocation = np.random.choice(cats, size=N, p=probs[:, var_index]) allocation_sorted = np.argsort(cats) ypos = np.searchsorted(cats[allocation_sorted], allocation) cat_indices = allocation_sorted[ypos] return cat_indicesdef add_to_sim(sims, cat_indices, var_index): """Takes the category indices from allocate_n and adds 1 to sims at the corresponding location for each occurrence of the category in cat_indices""" from collections import Counter a = Counter(list(cat_indices)) vals = [1*a[j] for j in cat_indices] pos = [(var_index, x) for x in cat_indices] sims[tuple(np.transpose(pos))] = vals# For each variable and each row in sims, make N allocations# and add results to 'sims'for var_index in range(len(n_choices.T)): sim_count = 0 # slice is (vars x cats), a single row of 'sims' for slice in sims: N = n_choices[sim_count, var_index] if N > 0: cat_indices = allocate_N(N, var_index) add_to_sim(slice, cat_indices, var_index) sim_count += 1I am sure there must be a way to vectorize this? I was able to make a single random choice for each variable simultaneously using the approach here, but I wasn't sure how to apply that to my particular problem.Thanks for your help! | What you seem to be describing are samples of a multinomial distribution. You can take samples from the distribution directly. Unfortunately, the parameters of the distribution (number of trials and probabilities) change for each simulation and variable, and neither np.random.multinomial nor scipy.stats.multinomial allow for vectorized sampling with multiple sets of parameters. This means that, if you want to do it like this, you would have to do it with loops still. 
At least, your code could be simplified to the following:import numpy as npnp.random.seed(0)# Problem sizen_cats = 10n_vars = 50n_sims = 100n_maxchoices = 50# Make example problemprobs = np.random.rand(n_cats, n_vars)probs /= probs.sum(0)n_choices = np.random.randint(n_maxchoices, size=(n_sims, n_vars))sims = np.zeros((n_sims, n_vars, n_cats), np.int32)# Sample multinomial distribution for each simulation and variablefor i_sim in range(n_sims): for i_var in range(n_vars): sims[i_sim, i_var] = np.random.multinomial(n_choices[i_sim, i_var], probs[:, i_var])# Check number of choices per simulation and variable is correctprint(np.all(sims.sum(2) == n_choices))# TrueNote you can still make this faster if you are willing to use Numba, with a function like this:import numpy as npimport numba as [email protected](parallel=True)def make_simulations(probs, n_choices, sims): for i_sim in nb.prange(n_sims): for i_var in nb.prange(n_vars): sims[i_sim, i_var] = np.random.multinomial(n_choices[i_sim, i_var], probs[:, i_var])EDIT: A possible alternative solution that does not use multinomial sampling with just one loop could be this:import numpy as npnp.random.seed(0)# Problem sizen_cats = 10n_vars = 50n_sims = 100n_maxchoices = 50# Make example problemprobs = np.random.rand(n_cats, n_vars)probs /= probs.sum(0)n_choices = np.random.randint(n_maxchoices, size=(n_sims, n_vars))sims = np.zeros((n_sims, n_vars, n_cats), np.int32)# Fill simulations arrayn_choices_var = n_choices.sum(0)sims_r = np.arange(n_sims)# For each variablefor i_var in range(n_vars): # Take choices for all simulations choices_var = np.random.choice(n_cats, n_choices_var[i_var], p=probs[:, i_var]) # Increment choices counts in simulations array i_sim = np.repeat(sims_r, n_choices[:, i_var]) np.add.at(sims, (i_sim, i_var, choices_var), 1)# Check resultprint(np.all(sims.sum(2) == n_choices))# TrueI am not sure if this would actually be faster, since it generates many intermediate arrays. I suppose it depends on the particular parameters of the problem, but I would be surprised if the Numba solution is not the fastest one. |
Why do I get this error and what is the solution? I just started Tensorflow and am solving this problem but I am getting errors. The problem is that the base price for a house is 50k and each bedroom costs 50k each. So a 1 bedroom house is 100k, 2 bedroom is 150k and so on. We have to predict the cost of a 7 Bedroom house.I have tried using 'import numpy as np' and also 'import numpy' but error still remains.import tensorflow as tfimport numpyfrom tensorflow import kerasdef house_model(y_new): xs = numpy.array([1.0,2.0,3.0,4.0], dtype = float) ys = numpy.array([1.0,1.5,2.0,2.5], dtype = float) model = tf.keras.Sequential([keras.layers.dense(units=1, input_shape=[1])]) model.compile(optimizer = 'sgd', loss = 'mean_squared_error') model.fit(xs,ys,epochs = 500) return model.predict(y_new)[7.0]prediction = house_model([7.0])print(prediction)---------------------------------------------------------------------------NameError Traceback (most recent call last)<ipython-input-23-55d468d60746> in <module>----> 1 prediction = house_model([7.0]) 2 print(prediction)<ipython-input-20-0e67265afcf6> in house_model(y_new) 1 def house_model(y_new):----> 2 xs = numpy.array([1.0,2.0,3.0,4.0], dtype = float) 3 ys = numpy.array([1.0,1.5,2.0,2.5], dtype = float) 4 model = tf.keras.Sequential([keras.layers.dense(units=1, input_shape=[1])]) 5 model.compile(optimizer = 'sgd', loss = 'mean_squared_error')NameError: name 'numpy' is not defined | Yes you should consider installing NumPy using !pip install numpyAlso, you used a small letter d (keras.layers.dense)-this is wrong.It should be keras.layers.Dense |
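For reference, a corrected version of the model-building lines with the capital-D Dense layer (a sketch of the fix only, not the whole house_model function).

import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(units=1, input_shape=[1])])
model.compile(optimizer='sgd', loss='mean_squared_error')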
Filtering a dataframe by two columns in another dataframe I need some tips about a pandas issue. I have the following DataFrame, df1, which contains the names and the dates that I need to keep in the output dataframe:

    name   date        column_1    column_11
    Anne   2018-01-01  some info1  some info11
    John   2018-01-01  some info1  some info11
    Mark   2018-02-01  some info1  some info11
    Ethan  2018-03-01  some info1  some info11
    Anne   2018-04-01  some info1  some info11
    Ethan  2018-04-01  some info1  some info11

I have this other DataFrame, df2, that contains all the names and dates in my data sample:

    name   date        column_2    column_22
    Bob    2018-01-01  some info2  some info22
    Bob    2018-01-01  some info2  some info22
    Anne   2018-01-01  some info2  some info22
    John   2018-01-01  some info2  some info22
    Mark   2018-02-01  some info2  some info22
    Mark   2018-02-01  some info2  some info22
    Ethan  2018-03-01  some info2  some info22
    Anne   2018-04-01  some info2  some info22
    Anne   2018-04-01  some info2  some info22
    Ethan  2018-04-01  some info2  some info22
    Carl   2018-01-01  some info2  some info22
    Joe    2018-01-01  some info2  some info22

As an output, I need a DataFrame like df1, but with all the columns in df2. Note that df1 and df2 have other columns in addition to the ones I show, so they contain different information. The point is that I want the columns of df2, but only for the names and dates present in df1. A sample output would be:

    name   date        column_2    column_22
    Anne   2018-01-01  some info2  some info22
    John   2018-01-01  some info2  some info22
    Mark   2018-02-01  some info2  some info22
    Mark   2018-02-01  some info2  some info22
    Ethan  2018-03-01  some info2  some info22
    Anne   2018-04-01  some info2  some info22
    Anne   2018-04-01  some info2  some info22
    Ethan  2018-04-01  some info2  some info22

NOTE: doing df = df2.merge(df1) didn't work.

NOTE 2: df1 contains aggregated and filtered data from df2, which is why there are fewer rows in df1 than in df2. I just want to keep, in df2, those rows whose name and date appear in df1. None of the solutions worked, so I thought maybe this explanation would help get the right answer. | I'm going to do this in steps with intermediate DataFrames. This is less efficient, but it will give you more insight into what is happening.

Take only the name and date from df1:

    df_key = df1.loc[:, ["name", "date"]]

Use an inner join (referred to as a natural join in this article) of the key table and df2, which will produce only records where name and date match:

    df_out_1 = df2.merge(
        df_key,
        how="inner",
        left_on=["name", "date"],
        right_on=["name", "date"])

Pick out the columns you want from the resulting join and you are done:

    df_out_2 = df_out_1.loc[:, ["name", "date", "column_2", "column_22"]]
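For completeness, here is a compact, self-contained version of the same idea as a single merge (my own sketch; the column names follow the question, and drop_duplicates is added so repeated (name, date) pairs in df1 would not duplicate rows of df2):

    import pandas as pd

    # df1 and df2 are assumed to be the DataFrames from the question
    df_key = df1[["name", "date"]].drop_duplicates()
    out = (df2.merge(df_key, on=["name", "date"], how="inner")
              [["name", "date", "column_2", "column_22"]])
    print(out)

Since both frames share the key column names, on=["name", "date"] is enough and the separate left_on/right_on arguments are not needed.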
Get row values as list when column value equals something I want to extract the entire row values as a list from df when a column equals something. I tried df.loc['column'== x], but it gives the column headers and not a list. Basically, what I want is to parse through each row in df and get the entire row as a list when df['column'] == x. The column I am parsing is the first column (company name). For company x I would like to get the list of the values in all columns (I don't want the company name or the column names to be in the list; I just want the values). ps: There are no duplicates in the company names. | Use .loc[] to get only the rows that you are interested in, then turn your DataFrame into a list of lists. So you need to get the values and turn them into a list with the tolist() method.

You probably want to use:

    df.loc[df['column']==x].values.tolist()

Have a look at this link. Here is an example:

    In [1]:
    import pandas as pd

    ipl_data = {'Team': ['Riders', 'Riders', 'Devils', 'Devils', 'Kings', 'kings',
                         'Kings', 'Kings', 'Riders', 'Royals', 'Royals', 'Riders'],
                'Rank': [1, 2, 2, 3, 3, 4, 1, 1, 2, 4, 1, 2],
                'Year': [2014, 2015, 2014, 2015, 2014, 2015, 2016, 2017, 2016, 2014, 2015, 2017],
                'Points': [876, 789, 863, 673, 741, 812, 756, 788, 694, 701, 804, 690]}

    df = pd.DataFrame(ipl_data)
    df.loc[df['Team']=='Riders'].values.tolist()

    Out [1]:
    [['Riders', 1, 2014, 876],
     ['Riders', 2, 2015, 789],
     ['Riders', 2, 2016, 694],
     ['Riders', 2, 2017, 690]]
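One follow-up of my own, since the question asks for the values without the company name: you could drop that column before taking the values, and, because your names are said to be unique, take the single inner list that remains:

    # my own sketch, reusing the df from the example above
    rows = df.loc[df['Team'] == 'Riders'].drop(columns='Team').values.tolist()
    # [[1, 2014, 876], [2, 2015, 789], [2, 2016, 694], [2, 2017, 690]]

    # with a unique company name there is exactly one match, so:
    # row = rows[0]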