questions (string, 56 to 48k chars) | answers (string, 13 to 43.8k chars) |
---|---|
Sample x number of days from a data frame with multiple entries per day in pandas I have a data frame with multiple time indexed entries per day. I want to sample and x number of days (eg 2 days) and the iterate forward 1 day to the end of the range of days. How can I achieve this.For example if each day has greater than one entry: datetime value 2015-12-02 12:02:35 1 2015-12-02 12:02:44 2 2015-12-03 12:39:05 4 2015-12-03 12:39:12 7 2015-12-04 14:27:41 2 2015-12-04 14:27:45 8 2015-12-07 09:52:58 3 2015-12-07 13:52:15 5 2015-12-07 13:52:21 9I would like to iterate through taking two day samples at a time eg 2015-12-02 12:02:35 1 2015-12-02 12:02:44 2 2015-12-03 12:39:05 4 2015-12-03 12:39:12 7then 2015-12-03 12:39:05 4 2015-12-03 12:39:12 7 2015-12-04 14:27:41 2 2015-12-04 14:27:45 8ending with 2015-12-04 14:27:41 2 2015-12-04 14:27:45 8 2015-12-07 09:52:58 3 2015-12-07 13:52:15 5 2015-12-07 13:52:21 9Any help would be appreciated! | You can use:#https://stackoverflow.com/a/6822773/2901002from itertools import islicedef window(seq, n=2): "Returns a sliding window (of width n) over data from the iterable" " s -> (s0,s1,...s[n-1]), (s1,s2,...,sn), ... " it = iter(seq) result = tuple(islice(it, n)) if len(result) == n: yield result for elem in it: result = result[1:] + (elem,) yield resultdfs = [df[df['datetime'].dt.day.isin(x)] for x in window(df['datetime'].dt.day.unique())]print (dfs[0]) datetime value0 2015-12-02 12:02:35 11 2015-12-02 12:02:44 22 2015-12-03 12:39:05 43 2015-12-03 12:39:12 7print (dfs[1]) datetime value2 2015-12-03 12:39:05 43 2015-12-03 12:39:12 74 2015-12-04 14:27:41 25 2015-12-04 14:27:45 8 |
PyTorch arguments not valid on android I want to use this model in my android app. But when I start the app it falls with an error. The model works fine on my PC.To ReproduceSteps to reproduce the behavior:Clone repository and use instructions in readme to run the model.Add code below to save the model traced_script_module = torch.jit.trace(i2d, data) traced_script_module.save("i2d.pt")I used PyTorch Android DemoApp link to run the model on android.Error: E/AndroidRuntime: FATAL EXCEPTION: ModuleActivity Process: com.hypersphere.depthvisor, PID: 4765 com.facebook.jni.CppException: Arguments for call are not valid. The following variants are available: aten::upsample_bilinear2d(Tensor self, int[2] output_size, bool align_corners) -> (Tensor): Expected at most 3 arguments but found 5 positional arguments. aten::upsample_bilinear2d.out(Tensor self, int[2] output_size, bool align_corners, *, Tensor(a!) out) -> (Tensor(a!)): Argument out not provided. The original call is: D:\ProgramData\Anaconda\envs\ml3 torch\lib\site-packages\torch\nn\functional.py(3013): interpolate D:\ProgramData\Anaconda\envs\ml3 torch\lib\site-packages\torch\nn\functional.py(2797): upsample <ipython-input-1-e1d92bec6901>(75): _upsample_add <ipython-input-1-e1d92bec6901>(89): forward D:\ProgramData\Anaconda\envs\ml3 torch\lib\site-packages\torch\nn\modules\module.py(534): _slow_forward D:\ProgramData\Anaconda\envs\ml3 torch\lib\site-packages\torch\nn\modules\module.py(548): __call__ D:\ProgramData\Anaconda\envs\ml3 torch\lib\site-packages\torch\jit\__init__.py(1027): trace_module D:\ProgramData\Anaconda\envs\ml3 torch\lib\site-packages\torch\jit\__init__.py(875): trace <ipython-input-12-19d2ccccece4>(16): <module> D:\ProgramData\Anaconda\envs\ml3 torch\lib\site-packages\IPython\core\interactiveshell.py(3343): run_code D:\ProgramData\Anaconda\envs\ml3 torch\lib\site-packages\IPython\core\interactiveshell.py(3263): run_ast_nodes D:\ProgramData\Anaconda\envs\ml3 torch\lib\site-packages\IPython\core\interactiveshell.py(3072): run_cell_async D:\ProgramData\Anaconda\envs\ml3 torch\lib\site-packages\IPython\core\async_helpers.py(68): _pseudo_sync_runner D:\ProgramData\Anaconda\envs\ml3 torch\lib\site-packages\IPython\core\interactiveshell.py(2895): _run_cell D:\ProgramData\Anaconda\envs\ml3 torch\lib\site-packages\IPython\core\interactiveshell.py(2867): run_cell D:\ProgramData\Anaconda\envs\ml3 torch\lib\site-packages\ipykernel\zmqshell.py(536): run_cell D:\ProgramData\Anaconda\envs\ml3 torch\lib\site-packages\ipykernel\ipkernel.py(300): do_execute D:\ProgramData\Anaconda\envs\ml3 torch\lib\site-packages\tornado\gen.py(209): wrapper D:\ProgramData\Anaconda\envs\ml3 torch\lib\site-packages\ipykernel\kernelbase.py(545): execute_request D:\ProgramData\Anaconda\envs\ml3 torch\lib\site-packages\tornado\gen.py(209): wrapper D:\ProgramData\Anaconda\envs\ml3 torch\lib\site-packages\ipykernel\kernelbase.py(268): dispatch_shell D:\ProgramData\Anaconda\envs\ml3 torch\lib\site-packages\tornado\gen.py(209): wrapper D:\ProgramData\Anaconda\envs\ml3 torch\lib\site-packages\ipykernel\kernelbase.py(365): process_one D:\ProgramData\Anaconda\envs\ml3 torch\lib\site-packages\tornado\gen.py(748): run D:\ProgramData\Anaconda\envs\ml3 torch\lib\site-packages\tornado\gen.py(787): inner D:\ProgramData\Anaconda\envs\ml3 torch\lib\site-packages\tornado\ioloop.py(743): _run_callback D:\ProgramData\Anaconda\envs\ml3 torch\lib\site-packages\tornado\ioloop.py(690): <lambda> D:\ProgramData\Anaconda\envs\ml3 torch\lib\asyncio\events.py(88): _run 
D:\ProgramData\Anaconda\envs\ml3 torch\lib\asyncio\base_events.py(1786): _run_once D:\ProgramData\Anaconda\envs\ml3 torch\lib\asyncio\base_events.py(541): run_forever D:\ProgramData\Anaconda\envs\ml3 torch\lib\site-packages\tornado\platform\asyncio.py(149): start D:\ProgramData\Anaconda\envs\ml3 torch\lib\site-packages\ipykernel\kernelapp.py(597): start D:\ProgramData\Anaconda\envs\ml3 torch\lib\site-packages\traitlets\config\application.py(664): launch_instance D:\ProgramData\Anaconda\envs\ml3 torch\lib\site-packages\ipykernel_launcher.py(16): <module> D:\ProgramData\Anaconda\envs\ml3 torch\lib\runpy.py(85): _run_code D:\ProgramData\Anaconda\envs\ml3 torch\lib\runpy.py(193): _run_module_as_main Serialized File "code/__torch__/___torch_mangle_907.py", line 39 _17 = ops.prim.NumToTensor(torch.size(_16, 2)) _18 = ops.prim.NumToTensor(torch.size(_16, 3))2020-06-29 23:50:09.536 4765-4872/com.hypersphere.depthvisor E/AndroidRuntime: _19 = torch.upsample_bilinear2d(_15, [int(_17), int(_18)], False, None, None) ~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE input = torch.add(_19, _16, alpha=1) _20 = (_6).forward(input, ) at org.pytorch.NativePeer.initHybrid(Native Method) at org.pytorch.NativePeer.<init>(NativePeer.java:18) at org.pytorch.Module.load(Module.java:23) at com.hypersphere.depthvisor.MainActivity.analyzeImage(MainActivity.java:56) at com.hypersphere.depthvisor.MainActivity.analyzeImage(MainActivity.java:21) at com.hypersphere.depthvisor.AbstractCameraXActivity.lambda$setupCameraX$2$AbstractCameraXActivity(AbstractCameraXActivity.java:86) at com.hypersphere.depthvisor.-$$Lambda$AbstractCameraXActivity$KgCZmrRflavSsq5aSHYb53Fi-P4.analyze(Unknown Source:2) at androidx.camera.core.ImageAnalysisAbstractAnalyzer.analyzeImage(ImageAnalysisAbstractAnalyzer.java:57) at androidx.camera.core.ImageAnalysisNonBlockingAnalyzer$1.run(ImageAnalysisNonBlockingAnalyzer.java:135) at android.os.Handler.handleCallback(Handler.java:873) at android.os.Handler.dispatchMessage(Handler.java:99) at android.os.Looper.loop(Looper.java:214) at android.os.HandlerThread.run(HandlerThread.java:65)EnvironmentPyTorch version: 1.5.0Is debug build: NoCUDA used to build PyTorch: Could not collectOS: Windows 10 ProGCC version: Could not collectCMake version: Could not collectPython version: 3.7Is CUDA available: NoCUDA runtime version: 10.2.89GPU models and configuration: Could not collectNvidia driver version: Could not collectcuDNN version: Could not collectVersions of relevant libraries:[pip3] numpy==1.18.5[pip3] torch==1.5.0[pip3] torchvision==0.6.0[conda] _pytorch_select 0.1 cpu_0 [conda] blas 1.0 mkl [conda] cudatoolkit 10.2.89 h74a9793_1 [conda] libmklml 2019.0.5 0 [conda] mkl 2019.4 245 [conda] mkl-service 2.3.0 py37hb782905_0 [conda] mkl_fft 1.1.0 py37h45dec08_0 [conda] mkl_random 1.1.0 py37h675688f_0 [conda] numpy 1.18.5 py37h6530119_0 [conda] numpy-base 1.18.5 py37hc3f5095_0 [conda] pytorch 1.5.0 cpu_py37h9f948e0_0 [conda] torchvision 0.6.0 py37_cu102 pytorchAndroid Studio 4.0Device: Samsung s8 plusAndroid version: 9 | My pc PyTorch version was 1.5 and in dependences were 1.4. So solution is:implementation 'org.pytorch:pytorch_android:1.5.0'implementation 'org.pytorch:pytorch_android_torchvision:1.5.0' |
Pass series instead of integer to pandas offsets I have a dataframe (df) with a date and a number. I want to add the number to the date. How do I add the df['additional_days'] series to the df['start_date'] series using pd.offsets()? Is there a better way to do this? start_date additional_days 2018-03-29 360 2018-07-31 0 2018-11-01 360 2016-11-03 720 2018-12-04 480I get an error when I trydf['start_date'] + pd.offsets.Day(df['additional_days']) Here is the errorTypeError Traceback (most recent call last)pandas/_libs/tslibs/offsets.pyx in pandas._libs.tslibs.offsets._BaseOffset._validate_n()/opt/conda/lib/python3.6/site-packages/pandas/core/series.py in wrapper(self) 117 raise TypeError("cannot convert the series to "--> 118 "{0}".format(str(converter))) 119 TypeError: cannot convert the series to <class 'int'>During handling of the above exception, another exception occurred:TypeError Traceback (most recent call last)<ipython-input-76-03920804db29> in <module>----> 1 df_test['start_date'] + pd.offsets.Day(df_test['additional_days'])/opt/conda/lib/python3.6/site-packages/pandas/tseries/offsets.py in __init__(self, n, normalize) 2219 def __init__(self, n=1, normalize=False): 2220 # TODO: do Tick classes with normalize=True make sense?-> 2221 self.n = self._validate_n(n) 2222 self.normalize = normalize 2223 pandas/_libs/tslibs/offsets.pyx in pandas._libs.tslibs.offsets._BaseOffset._validate_n()TypeError: `n` argument must be an integer, got <class 'pandas.core.series.Series'> | Use pd.to_timedeltaimport pandas as pd#df['start_date'] = pd.to_datetime(df.start_date)df['start_date'] + pd.to_timedelta(df.additional_days, unit='d')#0 2019-03-24#1 2018-07-31#2 2019-10-27#3 2018-10-24#4 2020-03-28#dtype: datetime64[ns] |
tensorflow sequential model outputting nan Why is my code outputting nan? I'm using a sequential model with a 30x1 input vector and a single value output. I'm using tensorflow and python. This is one of my firsWhile True: # Define a simple sequential model def create_model(): model = tf.keras.Sequential([ keras.layers.Dense(30, activation='relu',input_shape=(30,)), keras.layers.Dense(12, activation='relu'), keras.layers.Dropout(0.2), keras.layers.Dense(7, activation='relu'), keras.layers.Dense(1, activation = 'sigmoid') ]) model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=[tf.keras.metrics.SparseCategoricalAccuracy()]) return model # Create a basic model instance model = create_model() # Display the model's architecture model.summary() train_labels=[1] test_labels=[1] train_images= [[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30]] test_images=[[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30]] model.fit(train_images, train_labels, epochs=10, validation_data=(test_images, test_labels), verbose=1) print('predicted:',model.predict(train_images)) | You are using SparseCategoricalCrossentropy. It expects labels to be integers starting from 0. So, you have only one label 1, but it means you have at least two categories - 0 and 1. So you need at least two neurons in the last layer:keras.layers.Dense(2, activation = 'sigmoid')( If your goal is classification, you should maybe consider to use softmax instead of sigmoid, without from_logits=True ) |
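A minimal sketch of the fix described in the answer above, combining both of its suggestions (two output units with softmax, and from_logits left at its default of False); the input shape, labels and metric are the ones given in the question:

```python
import tensorflow as tf
from tensorflow import keras

# Two classes (labels 0/1) -> at least two output units for SparseCategoricalCrossentropy
model = tf.keras.Sequential([
    keras.layers.Dense(30, activation='relu', input_shape=(30,)),
    keras.layers.Dense(12, activation='relu'),
    keras.layers.Dropout(0.2),
    keras.layers.Dense(7, activation='relu'),
    keras.layers.Dense(2, activation='softmax'),   # was Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
              metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])
model.summary()
```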
Extracting keys from dataframe of json I'm sorry, I am new to Python and wondering if anyone can help me with extracting data? I've been trying to extract data from a df with json-content.0 [{'@context': 'https://schema.org', '@type': '...1 [{'@context': 'https://schema.org', '@type': '...2 [{'@context': 'https://schema.org', '@type': '...3 [{'@context': 'https://schema.org', '@type': '...4 [{'@context': 'https://schema.org', '@type': '...5 [{'@context': 'https://schema.org', '@type': '...So rows look like this:"[{'@context': 'https://schema.org', '@type': 'Audiobook', 'bookFormat': 'AudiobookFormat', 'name': 'Balle-Lars og mordet i Ugledige 1858', 'description': '<p>I 1858 blev der begået et mord i Ugledige mellem Præstø og Vordingborg. Den 58-årige enke Ane Marie Hemmingsdatter blev skudt.</p><p>Lars Peter Poulsen (1866-1941) var en dansk lærer og forfatter.</p>', 'image': '/images/e/200x200/0002496352.jpg', 'author': [{'@type': 'Person', 'name': 'L.P. Poulsen'}], 'readBy': [], 'publisher': {'@type:': 'Organization', 'name': ''}, 'isbn': '', 'datePublished': '', 'inLanguage': 'da', 'aggregateRating': {'@type': 'AggregateRating', 'ratingValue': 3.56, 'ratingCount': 9}}, {'@context': 'https://schema.org', '@type': 'Book', 'bookFormat': 'EBook', 'name': 'Balle-Lars og mordet i Ugledige 1858', 'description': '<p>I 1858 blev der begået et mord i Ugledige mellem Præstø og Vordingborg. Den 58-årige enke Ane Marie Hemmingsdatter blev skudt.</p>', 'image': '/images/e/200x200/0002496352.jpg', 'author': [{'@type': 'Person', 'name': 'L.P. Poulsen'}], 'publisher': {'@type:': 'Organization', 'name': 'SAGA Egmont'}, 'isbn': '9788726519877', 'datePublished': '2021-06-21', 'inLanguage': 'da', 'aggregateRating': {'@type': 'AggregateRating', 'ratingValue': 3.56, 'ratingCount': 9}}]"What I want is to get some of the keys (e.g. 'name') from the json data, for all rows.I've been trying:for d in unsorted: print (d["name"])... and variations. Is that the way to go (somehow) or should I convert everything to json and go from there?Thank you! | Considering that the dataframe looks like thisdf = pd.DataFrame({'json_data': ['[{"@context": "https://schema.org", "@type": "Audiobook", "bookFormat": "AudiobookFormat", "name": "Balle-Lars og mordet i Ugledige 1858", "description": "<p>I 1858 blev der begået et mord i Ugledige mellem Præstø og Vordingborg. Den 58-årige enke Ane Marie Hemmingsdatter blev skudt, da hun stod ved vinduet i sin stue efter at være kommet hjem fra et begravelsesg."}]', '[{"@context": "https://schema.org", "@type": "Audiobook", "bookFormat": "AudiobookFormat", "name": "Balle-Lars og mordet i Ugledige 1858", "description": "<p>I 1858 blev der begået et mord i Ugledige mellem Præstø og Vordingborg. Den 58-årige enke Ane Marie Hemmingsdatter blev skudt, da hun stod ved vinduet i sin stue efter at være kommet hjem fra et begravelsesg."}]', '[{"@context": "https://schema.org", "@type": "Audiobook", "bookFormat": "AudiobookFormat", "name": "Balle-Lars og mordet i Ugledige 1858", "description": "<p>I 1858 blev der begået et mord i Ugledige mellem Præstø og Vordingborg. Den 58-årige enke Ane Marie Hemmingsdatter blev skudt, da hun stod ved vinduet i sin stue efter at være kommet hjem fra et begravelsesg."}]', '[{"@context": "https://schema.org", "@type": "Audiobook", "bookFormat": "AudiobookFormat", "name": "Balle-Lars og mordet i Ugledige 1858", "description": "<p>I 1858 blev der begået et mord i Ugledige mellem Præstø og Vordingborg. 
Den 58-årige enke Ane Marie Hemmingsdatter blev skudt, da hun stod ved vinduet i sin stue efter at være kommet hjem fra et begravelsesg."}]', '[{"@context": "https://schema.org", "@type": "Audiobook", "bookFormat": "AudiobookFormat", "name": "Balle-Lars og mordet i Ugledige 1858", "description": "<p>I 1858 blev der begået et mord i Ugledige mellem Præstø og Vordingborg. Den 58-årige enke Ane Marie Hemmingsdatter blev skudt, da hun stod ved vinduet i sin stue efter at være kommet hjem fra et begravelsesg."}]'] })[Out]: json_data0 [{"@context": "https://schema.org", "@type": "...1 [{"@context": "https://schema.org", "@type": "...2 [{"@context": "https://schema.org", "@type": "...3 [{"@context": "https://schema.org", "@type": "...4 [{"@context": "https://schema.org", "@type": "...And assuming that OP's goal is just to obtain a list with the names, one can get it as followsimport json as jsname_list = [js.loads(x)[0]['name'] for x in df['json_data'].tolist()][Out]:['Balle-Lars og mordet i Ugledige 1858', 'Balle-Lars og mordet i Ugledige 1858', 'Balle-Lars og mordet i Ugledige 1858', 'Balle-Lars og mordet i Ugledige 1858', 'Balle-Lars og mordet i Ugledige 1858']If OP wants to store the names on a different column, called name, of the dataframe df, then one can do the followingimport json as jsdf['name'] = [js.loads(x)[0]['name'] for x in df['json_data'].tolist()] [Out]:json_data name0 [{"@context": "https://schema.org", "@type": "... Balle-Lars og mordet i Ugledige 18581 [{"@context": "https://schema.org", "@type": "... Balle-Lars og mordet i Ugledige 18582 [{"@context": "https://schema.org", "@type": "... Balle-Lars og mordet i Ugledige 18583 [{"@context": "https://schema.org", "@type": "... Balle-Lars og mordet i Ugledige 18584 [{"@context": "https://schema.org", "@type": "... Balle-Lars og mordet i Ugledige 1858 |
Python: Add dictionary to an existing dataframe where dict.keys() match dataframe row I'm trying to add a dictionary to a 26x26 dataframe with row and column both go from a to z:My dictionary where I want to put in the dataframe is:{'b': 74, 'c': 725, 'd': 93, 'e': 601, 'f': 134, 'g': 200, 'h': 1253, 'i': 355, 'j': 5, 'k': 2, 'l': 324, 'm': 756, 'n': 317, 'o': 88, 'p': 227, 'r': 608, 's': 192, 't': 456, 'u': 152, 'v': 142, 'w': 201, 'x': 51, 'y': 10, 'z': 53}I want each of my dictionary keys to match the row name of my dataframe, meaning I want this dictionary to be added vertically under the column a. As you can see, the 'a' and 'q' are missing in my dictionary, and I want them to be 0 instead of being skipped. How can I possibly achieve this? | You can use:df.loc[list(dic), 'a'] = pd.Series(dic)Or:df.loc[list(dic), 'a'] = list(dic.values())Full example:dic = {'b': 74, 'c': 725, 'd': 93, 'e': 601, 'f': 134, 'g': 200, 'h': 1253, 'i': 355, 'j': 5, 'k': 2, 'l': 324, 'm': 756, 'n': 317, 'o': 88, 'p': 227, 'r': 608, 's': 192, 't': 456, 'u': 152, 'v': 142, 'w': 201, 'x': 51, 'y': 10, 'z': 53}from string import ascii_lowercaseidx = list(ascii_lowercase)df = pd.DataFrame(0, index=idx, columns=idx)df.loc[list(dic), 'a'] = pd.Series(dic)print(df)output: a b c d e f g h i j ... q r s t u v w x y za 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0b 74 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0c 725 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0d 93 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0e 601 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0f 134 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0g 200 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0h 1253 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0i 355 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0j 5 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0k 2 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0l 324 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0m 756 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0n 317 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0o 88 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0p 227 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0q 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0r 608 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0s 192 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0t 456 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0u 152 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0v 142 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0w 201 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0x 51 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0y 10 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0z 53 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0[26 rows x 26 columns] |
Splitting strings in dataframe I have a column with strings. I want to split and create a new column in the dataframe.For example:2022-01-28 15-43-45 150I want to split after 45 and create a new column. | We can use str.extract here:df["new_col"] = df["filename"].str.extract(r'(\d+)$')df["filename"] = df["filename"].str.extract(r'(.*)\s+\d+$') |
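A self-contained version of the two extracts above, using the sample string from the question; the column name filename comes from the answer, and expand=False simply makes each extract return a Series:

```python
import pandas as pd

df = pd.DataFrame({"filename": ["2022-01-28 15-43-45 150"]})

df["new_col"] = df["filename"].str.extract(r"(\d+)$", expand=False)        # trailing number -> "150"
df["filename"] = df["filename"].str.extract(r"(.*)\s+\d+$", expand=False)  # the part before it
print(df)  # filename: "2022-01-28 15-43-45", new_col: "150"
```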
Is there a way to covert date (with different format) into a standardized format in python? I have a column calls "date" which is an object and it has very different date format like dd.m.yy, dd.mm.yyyy, dd/mm/yyyy, dd/mm, m/d/yyyy etc as below. Obviously by simply using df['date'] = pd.to_datetime(df['date']) will not work. I wonder for messy date value like that, is there anyway to standardized and covert the date into one single format ?date17.2.22 # means Feb 17 202223.02.22 # means Feb 23 202217/02/2022 # means Feb 17 202218.2.22 # means Feb 18 20222/22/2022 # means Feb 22 20223/1/2022 # means March 1 2022<more messy different format> | Coerce the dates to datetime and allow invalid entries to be turned into nulls.Also, allow pandas to infer the format. code belowdf['date'] = pd.to_datetime(df['date'], errors='coerce',infer_datetime_format=True) date0 2022-02-171 2022-02-232 2022-02-173 2022-02-184 2022-02-225 2022-03-01 |
Why does tf.variable_scope has a default_name argument? The first two arguments of tf.variable_scope's __init__ method are name_or_scope: string or VariableScope: the scope to open. default_name: The default name to use if the name_or_scope argument is None, this name will be uniquified. If name_or_scope is provided it won't be used and therefore it is not required and can be None. If I understand correctly, this argument is equivalent to (and therefore could be easily replaced with)if name_or_scope is None: name_or_scope = default_namewith tf.variable_scope(name_or_scope, ...): ...Now, I am not sure I understand why it was deemed necessary to have this special treatment for the scope name — after all, many parameters could use a parameterizable default argument.So what is the rationale behind the introduction of this argument? | You are right. It is just a convenience. Take the case of TensorFlow models defined here. If you take a specific look at InceptionV4.py, you will see that it has a scope argument in its definition. Just below you will see that InceptionV4 has been passed as a default scope. Therefore it was entirely not required to even has a scope argument in the definition. But it makes sense, if somebody gives scope=None. Think about it. Model definitions can get very comples very quickly. Therefore, a default_scope argument, helps in reinforcing the wisdom of the model definition writer to introduce some sort of deliberate structure in the model definition, even if the end user is very naive about it. |
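A small TF 1.x sketch of the pattern the answer describes; the op name and layer are made up, but they show how default_name only applies when no scope is passed and is uniquified across calls:

```python
import tensorflow as tf  # TF 1.x API assumed

def my_op(x, scope=None):
    # default_name is used only when scope is None, and is uniquified per call
    with tf.variable_scope(scope, default_name="MyOp"):
        return tf.layers.dense(x, 4)

x = tf.placeholder(tf.float32, [None, 3])
a = my_op(x)            # variables live under "MyOp"
b = my_op(x)            # second anonymous call gets "MyOp_1"
c = my_op(x, "custom")  # an explicit name wins over default_name
```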
Get row value of maximum count after applying group by in pandas I have the following df>In [260]: df>Out[260]: size market vegetable confirm availability0 Large ABC Tomato NaN1 Large XYZ Tomato NaN2 Small ABC Tomato NaN3 Large ABC Onion NaN4 Small ABC Onion NaN5 Small XYZ Onion NaN6 Small XYZ Onion NaN7 Small XYZ Cabbage NaN8 Large XYZ Cabbage NaN9 Small ABC Cabbage NaN1) How to get the size of a vegetable whose size count is maximum?I used groupby on vegetable and size to get the following df But I need to get the rows which contain the maximum count of size with vegetable In [262]: df.groupby(['vegetable','size']).count()Out[262]: market confirm availabilityvegetable sizeCabbage Large 1 0 Small 2 0Onion Large 1 0 Small 3 0Tomato Large 2 0 Small 1 0df2['vegetable','size'] = df.groupby(['vegetable','size']).count().apply( some logic )Required Df : vegetable size max_count0 Cabbage Small 21 Onion Small 32 Tomato Large 22) Now I can say 'Small Cabbages' are available in huge quantity from df. So I need to populate the confirm availability column with small for all cabbage rowsHow to do this? size market vegetable confirm availability0 Large ABC Tomato Large1 Large XYZ Tomato Large2 Small ABC Tomato Large3 Large ABC Onion Small4 Small ABC Onion Small5 Small XYZ Onion Small6 Small XYZ Onion Small7 Small XYZ Cabbage Small 8 Large XYZ Cabbage Small 9 Small ABC Cabbage Small | 1)required_df = veg_df.groupby(['vegetable','size'], as_index=False)['market'].count()\ .sort_values(by=['vegetable', 'market'])\ .drop_duplicates(subset='vegetable', keep='last')2)merged_df = veg_df.merge(required_df, on='vegetable')cols = ['size_x', 'market_x', 'vegetable', 'size_y']dict_renaming_cols = {'size_x': 'size', 'market_x': 'market', 'size_y': 'confirm_availability'}merged_df = merged_df.loc[:,cols].rename(columns=dict_renaming_cols) |
How to continuously update the empty rows within specific columns using pandas and openpyxl Currently I'm running a live test that uses 3 variables: data1, data2 and data3. The problem is that whenever I run my Python code, it only writes to the first row within the respective columns and overwrites any previous data I had. import pandas as pdimport xlsxwriterfrom openpyxl import load_workbookdef dataholder(data1,data2,data3): df = pd.DataFrame({'Col1':[data1],'Col2':[data2],'Col3':[data3]}) with pd.ExcelWriter('data_hold.xlsx', engine='openpyxl') as writer: df.to_excel(writer,sheet_name='Sheet1') writer.save()Is what I'm trying to accomplish feasible? | Use the startrow=... argument of to_excel to shift each subsequent write further down the sheet. |
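A sketch of how that suggestion might look inside the question's dataholder function. It assumes pandas >= 1.4 (for if_sheet_exists='overlay') and reuses the file and sheet names from the question; openpyxl's max_row supplies the offset for the next write:

```python
import os
import pandas as pd
from openpyxl import load_workbook

def dataholder(data1, data2, data3, path='data_hold.xlsx'):
    df = pd.DataFrame({'Col1': [data1], 'Col2': [data2], 'Col3': [data3]})
    if os.path.exists(path):
        start = load_workbook(path)['Sheet1'].max_row  # rows already used
        with pd.ExcelWriter(path, engine='openpyxl', mode='a',
                            if_sheet_exists='overlay') as writer:
            df.to_excel(writer, sheet_name='Sheet1',
                        startrow=start, header=False, index=False)
    else:
        df.to_excel(path, sheet_name='Sheet1', index=False)  # first call writes the header
```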
How to confirm convergence of LSTM network? I am using LSTM for time-series prediction using Keras. I am using 3 LSTM layers with dropout=0.3, hence my training loss is higher than validation loss. To monitor convergence, I using plotting training loss and validation loss together. Results looks like the following. After researching about the topic, I have seen multiple answers for example ([1][2] but I have found several contradictory arguments on various different places on the internet, which makes me a little confused. I am listing some of them below : 1) Article presented by Jason Brownlee suggests that validation and train data should meet for the convergence and if they don't, I might be under-fitting the data.https://machinelearningmastery.com/diagnose-overfitting-underfitting-lstm-models/https://machinelearningmastery.com/learning-curves-for-diagnosing-machine-learning-model-performance/ 2) However, following answer on here suggest that my model is just converged : How do we analyse a loss vs epochs graph? Hence, I am just bit confused about the whole concept in general. Any help will be appreciated. | Convergence implies you have something to converge to. For a learning system to converge, you would need to know the right model beforehand. Then you would train your model until it was the same as the right model. At that point you could say the model converged! ... but the whole point of machine learning is that we don't know the right model to begin with.So when do you stop training? In practice, you stop when the model works well enough to do what you want it to do. This might be when validation error drops below a certain threshold. It might just be when you can't afford any more computing power. It's really up to you. |
Adding or replacing a Column based on values of a current Column I am attempting to add a new column and base its value from another column of a dataframe, on the following 2 conditions, which will not change and will be written to a file after.If number -> (##) (4 character string)If NaN -> (4 character string of white space)This is my dataframe. The column I am interested in is "Code" and that is of type float64.Current Data Frame Format| | Num | T(h) | T(m) | T(s) | Code ||:--:|:---:|:----:|:----:|:-------:|:----:|| 0 | 1 | 10 | 15 | 47.1234 | NaN || 1 | 2 | 10 | 15 | 48.1238 | 1.0 || 2 | 3 | 10 | 15 | 48.1364 | NaN || 3 | 4 | 10 | 15 | 49.0101 | 2.0 |Desired Data Frame Format| | Num | T(h) | T(m) | T(s) | Term Code ||:--:|:---:|:----:|:----:|:-------:|:---------:|| 0 | 1 | 10 | 15 | 47.1234 | || 1 | 2 | 10 | 15 | 48.1238 | ( 1) || 2 | 3 | 10 | 15 | 48.1364 | || 3 | 4 | 10 | 15 | 49.0101 | ( 2) |The function I wrote:def insertSoftbrace(tCode): value = [] for item in tCode: if str(tCode) == 'NaN': #Blank Line 4 characters newCode = ' ' value.append(newCode) else: fnum = tCode.astype(float) num = fnum.astype(int) #I also tried: num = int(fnum) numStr = str(num) newCode = '(' + numStr.rjust(2) + ')' value.append(newCode) return value#Changing the float64 to string object, so can use ( )df['Code'] = df['Code'].astype(str)#Inserting new columndf.insert(4, "Term Code", insertSoftbrace(df["Code"]))#I receive the error on: num = fnum.astype(int)# "IncastingNaNError: Cannot convert non-finite values (NA or inf) to intefer. (10 tracebacks)#When I replace "num = fnum.astype(int)" with " num = int(fnum)"# "TypeError: cannot convert the series to <class 'int'> (3 tracebacks)I also attempted this the following way, keeping the Code column as a float64def insertSoft(tCode): value = [] for item in tCode: if tCode > 0: #Format (##) num = int(tCode) newCode = '(' + numStr.rjust(2) + ')' value.append(newCode) else: #Format (4) Spaces newCode = ' ' value.append(newCode) return valuedf.insert(4, "Term Code", insertSoft(df["Code"]))#Error is given# ValueError: The truth value of a series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().What am I missing with the functions? And how can I produce the desired format? | In this solution, first use convert_dtypes which converts float into int. Then change to str. This is just to remove the decimal point. Change <NA> to 4 white spaces. The last step, if the the string isnumeric, use left padding which will ensure the string has length of 2 with white space as filling and add the parenthesis on both sides.df['term_code'] = df['code'].convert_dtypes().astype(str) df.loc[df['termcode'] == '<NA>', 'termcode'] = 4 * ' 'df.loc[df['termcode'].str.isnumeric(), 'termcode'] = '(' + df['data'].str.pad(2, 'left') + ')' |
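The snippet in the answer mixes term_code, termcode and data as column names; below is a hedged, self-contained version of the same idea with one consistent name, built on the sample table from the question:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "Num": [1, 2, 3, 4],
    "T(h)": [10, 10, 10, 10],
    "T(m)": [15, 15, 15, 15],
    "T(s)": [47.1234, 48.1238, 48.1364, 49.0101],
    "Code": [np.nan, 1.0, np.nan, 2.0],
})

# float -> nullable Int64 -> string, so 1.0 becomes "1" and NaN becomes "<NA>"
df["Term Code"] = df["Code"].convert_dtypes().astype(str)
df.loc[df["Term Code"] == "<NA>", "Term Code"] = 4 * " "
mask = df["Term Code"].str.isnumeric()
df.loc[mask, "Term Code"] = "(" + df.loc[mask, "Term Code"].str.pad(2, "left") + ")"
print(df)  # Term Code is "( 1)", "( 2)" or four spaces
```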
Tensorboard: How to view pytorch model summary? I have the following network.import torchimport torch.nn as nnfrom torch.utils.tensorboard import SummaryWriterclass Net(nn.Module): def __init__(self,input_shape, num_classes): super(Net, self).__init__() self.conv = nn.Sequential( nn.Conv2d(1, 64, kernel_size=(3, 3), stride=(1, 1), padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(kernel_size=(4,4)), nn.Conv2d(64, 32, kernel_size=(3, 3), stride=(1, 1), padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(kernel_size=(4,4)), ) x = self.conv(torch.rand(input_shape)) in_features = np.prod(x.shape) self.classifier = nn.Sequential( nn.Linear(in_features=in_features, out_features=num_classes), ) def forward(self, x): x = self.feature_extractor(x) x = x.view(x.size(0), -1) x = self.classifier(x) return xnet = Net(input_shape=(1,64,1292), num_classes=4)print(net)This prints the following:-Net( (conv): Sequential( (0): Conv2d(1, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (1): ReLU(inplace=True) (2): MaxPool2d(kernel_size=(4, 4), stride=(4, 4), padding=0, dilation=1, ceil_mode=False) (3): Conv2d(64, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (4): ReLU(inplace=True) (5): MaxPool2d(kernel_size=(4, 4), stride=(4, 4), padding=0, dilation=1, ceil_mode=False) ) (classifier): Sequential( (0): Linear(in_features=320, out_features=4, bias=True) ))However, I am trying various experiments and I want to keep track of network architecture on Tensorboard. I know there is a function writer.add_graph(model, input_to_model) but it requires input, or at least its shape should be known.So, I tried writer.add_text("model", str(model)), but formatting is screwed up in tensorboard.My question is, is there a way to at least visualize the way I can see by using print function in the tensorboard? | I can see everything is going right but there is just a formatting issue. Tensorboard understands markdown so you can actually replace \n with <br/> and with &nbsp;.Here is a detailed walkthrough. Suppose you have the following model:-import torchimport torch.nn as nnfrom torch.utils.tensorboard import SummaryWriterclass Net(nn.Module): def __init__(self,input_shape, num_classes): super(Net, self).__init__() self.conv = nn.Sequential( nn.Conv2d(1, 64, kernel_size=(3, 3), stride=(1, 1), padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(kernel_size=(4,4)), nn.Conv2d(64, 32, kernel_size=(3, 3), stride=(1, 1), padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(kernel_size=(4,4)), ) x = self.conv(torch.rand(input_shape)) in_features = np.prod(x.shape) self.classifier = nn.Sequential( nn.Linear(in_features=in_features, out_features=num_classes), ) def forward(self, x): x = self.feature_extractor(x) x = x.view(x.size(0), -1) x = self.classifier(x) return xnet = Net(input_shape=(1,64,1292), num_classes=4)print(net)This prints the following and if can actually show it in the Tensorboard.Net( (conv): Sequential( (0): Conv2d(1, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (1): ReLU(inplace=True) (2): MaxPool2d(kernel_size=(4, 4), stride=(4, 4), padding=0, dilation=1, ceil_mode=False) (3): Conv2d(64, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (4): ReLU(inplace=True) (5): MaxPool2d(kernel_size=(4, 4), stride=(4, 4), padding=0, dilation=1, ceil_mode=False) ) (classifier): Sequential( (0): Linear(in_features=320, out_features=4, bias=True) ))There is function in add_graph(model, input) in SummaryWriter but you must create dummy input and in some cases it is difficult of to always know them. 
Instead do following:-writer = SummaryWriter()model_summary = str(model).replace( '\n', '<br/>').replace(' ', '&nbsp;')writer.add_text("model", model_summary)writer.close()Above produces following text in tensorboard:- |
How to plot histogram for chosen cells using mean as condition in python? I have some data as x,y arrays and an array of v values corresponding to them, i.e for every x and y there is a v with matching index.What I have done: I am creating a grid on the x-y plane and then the v-values fall in cells of that grid. I am then taking mean of the v-values in each cell of the grid.Where I am stuck: Now, I want to identify the cells where the mean of v is greater than 2 and plot the histograms of those cells (histogram of original v values in that cell). Any ideas on how to do that? Thanks!EDIT: I am getting some histogram plots for mean>2 but it also includes histograms of empty cells. I want to get rid of the empty ones and just keep mean>2 cells. I tried print(mean_pix[(mean_pix!=[])]) but it returns errors.My full code is:import numpy as npimport matplotlib.pyplot as pltx=np.array([11,12,12,13,21,14])y=np.array([28,5,15,16,12,4])v=np.array([10,5,2,10,6,7])x = x // 4 y = y // 4 k=10cells = [[[] for y in range(k)] for x in range(k)] #creating cells or pixels on x-y plane#letting v values to fall into the grid cellsfor ycell in range(k): for xcell in range(k): cells[ycell][xcell] = v[(y == ycell) & (x == xcell)] for ycell in range(k): for xcell in range(k): this = cells[ycell][xcell] #print(this) #fig, ax = plt.subplots() #plt.hist(this) #getting mean from velocity values in each cellmean_v = [[[] for y in range(k)] for x in range(k)]for ycell in range(k): for xcell in range(k): cells[ycell][xcell] = v[(y == ycell) & (x == xcell)] this = cells[ycell][xcell] mean_v[ycell][xcell] = np.mean(cells[ycell][xcell]) mean_pix= mean_v[ycell][xcell] fig, ax = plt.subplots() plt.hist(this[(mean_pix>2)]) # this gives me histograms of cells that have mean>2 but it also gives histograms of empty cells. I want to avoid getting the empty histograms. | Maybe there is a better way, but you can create an empty list and append the lists that you want to plot:import numpy as npimport matplotlib.pyplot as pltx=np.array([11,12,12,13,21,14])y=np.array([28,5,15,16,12,4])v=np.array([10,5,2,10,6,7])x = x // 4 y = y // 4 k=10cells = [[[] for y in range(k)] for x in range(k)] #creating cells or pixels on x-y plane#letting v values to fall into the grid cellsfor ycell in range(k): for xcell in range(k): cells[ycell][xcell] = v[(y == ycell) & (x == xcell)] for ycell in range(k): for xcell in range(k): this = cells[ycell][xcell] #getting mean from velocity values in each cellmean_v = [[[] for y in range(k)] for x in range(k)]to_plot = []for ycell in range(k): for xcell in range(k): cells[ycell][xcell] = v[(y == ycell) & (x == xcell)] mean_v[ycell][xcell] = np.mean(cells[ycell][xcell]) if mean_v[ycell][xcell]>2: to_plot.append(cells[ycell][xcell])for x in to_plot: fig, ax = plt.subplots() plt.hist(x)I also removed some unnecessary code. It should output something like this: |
Cannot export QNN brevitas to ONNX I have trained my model as QNN with brevitas. Basically my input shape is:torch.Size([1, 3, 1024])I have exported the .pt extended file. As I try my model and generate a confusion matrix I was able to observe everything that I want.So I believe that there is no problem about the model.On the other hand as I try to export the .onnx file to implement this brevitas trained model on FINN, I wrote the code given below:from brevitas.export import FINNManagerFINNManager.export(my_model, input_shape=(1, 3, 1024), export_path='myfinnmodel.onnx')But as I do that I get the error as:torch.onnx.export(module, input_t, export_target, **kwargs)TypeError: export() got an unexpected keyword argument'enable_onnx_checker'I do not think this is related with the version. But if you want me to be sure about the version, I can check these too.If you can help me I will be really appreciated.Sincerely; | The problem is related to pytorch version > 1.10. Where "enable_onnx_checker" is no more a parameter of torch.onnx.export function.This is the official solution from the repository.https://github.com/Xilinx/brevitas/pull/408/filesThe fix is not yet release. Is in dev branch.You need to compile brevitas by yourself or simply change the code in brevitas/export/onnx/manager.py following official solution.After that i am able to get onnx converted model. |
Remove values above/below standard deviation I have a database that is made out of 18 columns and 15 million rows, in each column there are outliers and I wanted to remove values above and below 2 standard deviations. My code doesn't seem to edit anything in the database though.Thank you.import pandas as pdimport random as rimport numpy as np df = pd.read_csv('D:\\Project\\database\\3-Last\\LastCombineHalf.csv')df[df.apply(lambda x :(x-x.mean()).abs()<(2*x.std()) ).all(1)]df.to_csv('D:\\Project\\database\\3-Last\\Removal.csv', index=False) | Perhaps because you didn't assign the results back to df?From:df[df.apply(lambda x :(x-x.mean()).abs()<(2*x.std()) ).all(1)]To:df = df[df.apply(lambda x :(x-x.mean()).abs()<(2*x.std()) ).all(1)] |
How to differentiate between trees and buildings in OpenCV and NumPy in Python I am trying to classify buildings and trees in digital elevation models. Trees normally look like this: Buildings normally look something like this: Note the increased disorder in trees compared to buildings. I originally tried to use np.var to differentiate between the two but I am getting inconsistent results. Is there any other non machine learning way to classify these two, preferably on the basis of increased disorder in trees? | Disclaimer: My answer might be super overfitted and wrong, as it is based on just the two sample imagesApproach 1 : Just classify based on the 'squareness' - delta_x = |x_min - x_max|, delta_y = |y_min - y_max|spread_ratio = delta_y/delta_xif spread_ratio > thresh: classify as treeelse: classify as buildingApproach 2: Your images have very different colors. If that corresponds to height, you can just find a thresholding based on average height of a tree and building |
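A literal, runnable reading of Approach 1; the arrays, the threshold and the function name are placeholders, since what x and y represent depends on how the elevation profiles are stored:

```python
import numpy as np

def classify_by_squareness(x, y, thresh=1.0):
    # spread_ratio = delta_y / delta_x from Approach 1; tune thresh on labelled examples
    delta_x = abs(x.min() - x.max())
    delta_y = abs(y.min() - y.max())
    spread_ratio = delta_y / delta_x
    return "tree" if spread_ratio > thresh else "building"

# hypothetical profile of one object
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([10.0, 14.0, 9.0, 12.0])
print(classify_by_squareness(x, y))
```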
Reshape input layer 'requested shape' size always 'input shape' size squared I am trying to run a SavedModel using the C-API.When it comes to running TF_SessionRun it always fails on various input nodes with the same error.TF_SessionRun status: 3:Input to reshape is a tensor with 6 values, but the requested shape has 36TF_SessionRun status: 3:Input to reshape is a tensor with 19 values, but the requested shape has 361TF_SessionRun status: 3:Input to reshape is a tensor with 3111 values, but the requested shape has 9678321...As can be seen, the number of requested shape values is always the square of the expected input size. It's quite odd.The model runs fine with the saved_model_cli command.The inputs are all either scalar DT_STRING or DT_FLOATs, I'm not doing image recogition.Here's the output of that command:signature_def['serving_default']: The given SavedModel SignatureDef contains the following input(s): inputs['f1'] tensor_info: dtype: DT_STRING shape: (-1) name: f1:0 inputs['f2'] tensor_info: dtype: DT_STRING shape: (-1) name: f2:0 inputs['f3'] tensor_info: dtype: DT_STRING shape: (-1) name: f3:0 inputs['f4'] tensor_info: dtype: DT_FLOAT shape: (-1) name: f4:0 inputs['f5'] tensor_info: dtype: DT_STRING shape: (-1) name: f5:0 The given SavedModel SignatureDef contains the following output(s): outputs['o1_probs'] tensor_info: dtype: DT_DOUBLE shape: (-1, 2) name: output_probs:0 outputs['o1_values'] tensor_info: dtype: DT_STRING shape: (-1, 2) name: output_labels:0 outputs['predicted_o1'] tensor_info: dtype: DT_STRING shape: (-1, 1) name: output_class:0 Method name is: tensorflow/serving/predictAny clues into what's going on are much appreciated. The saved_model.pb file is coming from AutoML, my code is merely querying that model. I don't change the graph. | It turns out that the issue was caused by me not using the TF_AllocateTensor function correctly.The original code was like:TF_Tensor* t = TF_AllocateTensor(TF_STRING, nullptr, 0, sz);when it appears it should have been:int64_t dims = 0;TF_Tensor* t = TF_AllocateTensor(TF_STRING, &dims, 1, sz); |
Two questions on DCGAN: data normalization and fake/real batch I am analyzing a meta-learning class that uses DCGAN + Reptile within the image generation.I have two questions about this code. First question: why during DCGAN training (line 74)training_batch = torch.cat ([real_batch, fake_batch])is a training_batch made up of real examples (real_batch) and fake examples (fake_batch) created? Why is training done by mixing real and false images? I have seen many DCGANs, but never with training done in this way.The second question: why is the normalize_data function (line 49) and the unnormalize_data function (line 55) used during training?def normalize_data(data): data *= 2 data -= 1 return datadef unnormalize_data(data): data += 1 data /= 2 return dataThe project uses the Mnist dataset, if I wanted to use a color dataset like CIFAR10, do I have to modify those normalizations? | Training GANs involves giving the discriminator real and fake examples. Usually, you will see that they are given in two separate occasions. By default torch.cat concatenates the tensors on the first dimension (dim=0), which is the batch dimensions. Therefore it just doubled the batch size, where the first half are the real images and the second half the fake images. To calculate the loss, they adapt the targets, such that the first half (original batch size) is classified as real, and the second half is classified as fake. From initialize_gan:self.discriminator_targets = torch.tensor([1] * self.batch_size + [-1] * self.batch_size, dtype=torch.float, device=device).view(-1, 1)Images are represented with float values between [0, 1]. The normalisation changes that to produce values between [-1, 1]. GANs generally use tanh in the generator, therefore the fake images have values between [-1, 1], hence the real images should be in the same range, otherwise it would be trivial for the discriminator to distinguish the fake images from the real ones.If you want to display these images, you need to unnormalise them first, i.e. convert them to values between [0, 1]. The project uses the Mnist dataset, if I wanted to use a color dataset like CIFAR10, do I have to modify those normalizations?No, you don't need to change them, because images in colour also have their values between [0, 1], there are simply more values, representing the 3 channels (RGB). |
separate 2D gaussian kernel into two 1D kernels A gaussian kernel is calculated and checked that it can be separable by looking in to the rank of the kernel. kernel = gaussian_kernel(kernel_size,sigma)print(kernel)[[ 0.01054991 0.02267864 0.0292689 0.02267864 0.01054991] [ 0.02267864 0.04875119 0.06291796 0.04875119 0.02267864] [ 0.0292689 0.06291796 0.0812015 0.06291796 0.0292689 ] [ 0.02267864 0.04875119 0.06291796 0.04875119 0.02267864] [ 0.01054991 0.02267864 0.0292689 0.02267864 0.01054991]] rank = np.linalg.matrix_rank(kernel)if rank == 1: print('The Kernel is separable')else: print('The kernel is not separable')Now I believe the separation is not correct. I am doing it in the following manner: u,s,v = np.linalg.svd(kernel) k1 = (u[:,0] * np.sqrt(s[0]))[np.newaxis].T k2 = v[:,0] * np.sqrt(s[0]) Then I multiplied the above two kernels to get the original kernel back. But I did not get it.if not np.all(k1 * k2 == kernel): print('k1 * k2 is not equal to kernel')I assume that the separation that I am trying to do using svd and further is not correct. Some explanation would help. | matrix rank 1 means that all the rows are either zero or the same up to scaling and the same is true for columns. They are also up to scaling equal to the two factors.Therefore you can recover them using something likeI,J = np.unravel_index(np.abs(kernel).argmax(), kernel.shape)f1 = np.nansum(kernel / (kernel[None,:,J]@kernel),1,keepdims=True)f2 = np.nansum(kernel / (kernel@kernel[I,:,None]),0,keepdims=True)scaling = np.sqrt(np.abs(kernel).sum()/np.abs(f1*f2).sum())f1 *= scaling * np.sign(f1[I,0]) * np.sign(kernel[I,J])f2 *= scaling * np.sign(f2[0,J])Note that most of the complexity comes from my trying to average as many data as possible. A simpler but I'd assume numerically not quite as stable method would beI,J = np.unravel_index(np.abs(kernel).argmax(), kernel.shape)f1 = kernel[:,J,None]f2 = kernel[None,I,:] / kernel[I,J]Of course, your method also works once you get the indexing right:k1 = u[:,0,None] * np.sqrt(s[0])k2 = v[None,0,:] * np.sqrt(s[0])np.allclose(kernel, k1*k2)# True |
Add a fix value to a dataframe (accumulating to future ones) I am trying to simulate inventory level during the next 6 months:1- I have the expected accumulated demand for each day of next 6 months. So, with no reorder, my balance would be more negative everyday.2- My idea is: Everytime the inventory level is lower than 3000, I would send an order to buy 10000, and after 3 days, my level would increase again:How is the best way to add this value into all the future values ? ds saldo0 2019-01-01 10200.8398191 2019-01-02 5219.4129522 2019-01-03 3.1618763 2019-01-04 -5507.5062014 2019-01-05 -10730.2912215 2019-01-06 -14406.8335936 2019-01-07 -17781.5003967 2019-01-08 -21545.5030988 2019-01-09 -25394.427708I started doing like this :c = 0for index, row in forecast_data.iterrows(): if row['saldo'] < 3000: c += 1 if c == 3: row['saldo'] + 10000 c = 0But it just adds to the actual row, not for the accumulated future ones.print(row['ds'], row['saldo'])9 2019-01-10 -29277.647817 | You forgot to assign the value i think. use row['saldo'] += 10000 instead of row['saldo'] + 10000 |
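If the reorder is also meant to carry into every later day, as the question asks, a sketch along these lines keeps the counter logic from the question but writes the order back with .iloc (assignments to row inside iterrows are not written back to the dataframe); the function and parameter names are made up:

```python
import pandas as pd

def apply_reorders(forecast_data, threshold=3000, order_qty=10000):
    df = forecast_data.copy()
    saldo_pos = df.columns.get_loc('saldo')
    c = 0
    for i in range(len(df)):
        if df['saldo'].iloc[i] < threshold:
            c += 1
            if c == 3:
                # add the order to this day and every following day,
                # so the replenishment stays in the running balance
                df.iloc[i:, saldo_pos] += order_qty
                c = 0
    return df
```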
How to get pandas to return datetime64 rather than Timestamp? How can I tell pandas to return datetime64 rather than Timestamp? For example, in the following code df['dates'][0] returns a pandas Timestamp object rather than the numpy datetime64 object that I put in.Yes, I can convert it after getting it, but is it possible to tell pandas to give me back exactly what I put in? >>> import numpy as np>>> import pandas as pd>>> np.__version__'1.10.4'>>> pd.__version__u'0.19.2'>>> df = pd.DataFrame()>>> df['dates'] = [np.datetime64('2019-02-15'), np.datetime64('2019-08-15')]>>> df.dtypesdates datetime64[ns]dtype: object>>> type(df['dates'][0])<class 'pandas.tslib.Timestamp'> | Adding values df.dates.values[0]Out[55]: numpy.datetime64('2019-02-15T00:00:00.000000000')type(df.dates.values[0])Out[56]: numpy.datetime64 |
In Pandas how can I use the values in one table as an index to extract data from another table? I feel like this should be really simple but I'm having a hard time with it. Suppose I have this:df1:ticker hhmm <--- The hhmm value corresponds to the column in df2====== ====AAPL 0931IBM 0930XRX 1559df2:ticker 0930 0931 0932 ... 1559 <<---- 390 columns====== ==== ==== ==== ... ====AAPL 4.56 4.57 ... ... IBM 7.98 ... ... ...XRX 3.33 ... ... 3.78The goal is to create a new column in df1 whose value is df2[df1['hhmm']].For example:df1:ticker hhmm df2val====== ==== ======AAPL 0931 4.57IBM 0930 7.98XRX 1559 3.78Both df's have 'ticker' as their index, so I could simply join them BUT assume that this uses too much memory (the dataframes I'm using are much larger than the examples shown here).I've tried apply and it's slooooow (15 minutes to run).What's the Pandas Way to do this? Thanks! | There is a function called lookupdf1['val']=df2.set_index('ticker').lookup(df1.ticker,df1.hhmm)df1Out[290]: ticker hhmm val0 AAPL 0931 4.571 IBM 0930 7.982 XRX 1559 33.00# I make up this number |
Is there a way in python to read a text block within a csv cell and only select cell data based on key word with in text block? I am working with a CSV file in Pandas/Python and I need to find when a supplier response was submitted.The column "time Line" contains the info I'm looking for and can vary on how much information was put into the response at the time but the keyword I am looking for is the same.Text block(This is the sub-section I need!)October 29, 2021 10:34:30 AM -05:00 - JimSupplier assignment notification sent to supplier "ALB-example" - Alex ([email protected])--------November 04, 2021 07:06:31 PM -05:00 - Levi A-Quality Dept assigned as approver--------November 01, 2021 05:11:19 PM -05:00 - Jim CAR #454 created from this record--------October 29, 2021 10:34:30 AM -05:00 - Jim Supplier assignment notification sent to supplier "ALB-Aeroexample" - Alex ([email protected])--------October 29, 2021 10:34:28 AM -05:00 - Jim NCP Updated with the following changes: + Supplier assigned changed from "False to TrueThis text block is in one cell and I haven't figured out how to go about it.Thank you in advance. | Assuming your dataframe has a "time line" column:new_df = df.loc[df['time Line'].str.contains('the string you are looking for')]this will create a new dataframe with all rows that contains the string you need, is this what you are looking for? |
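A small worked version of that line, using the keyword highlighted in the question's time line ("Supplier assignment notification"); the dataframe here is a made-up stand-in for the real CSV:

```python
import pandas as pd

df = pd.DataFrame({
    "time Line": [
        'Supplier assignment notification sent to supplier "ALB-example"',
        "A-Quality Dept assigned as approver",
        "CAR #454 created from this record",
    ]
})

mask = df["time Line"].str.contains("Supplier assignment notification", na=False)
new_df = df.loc[mask]   # only the rows that record a supplier response
print(new_df)
```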
How to replace a dataframe rows with other rows based on column values? I have a dataframe of this type: Time Copy_from_Time Rest_of_data0 1 1 foo11 2 1 foo22 3 3 foo33 4 4 foo44 5 4 foo55 6 4 foo6I want to update "Rest of data" with data associated at the Time specified by "Copy_from_Time". So it would look like: Time Copy_from_Time Rest_of_data0 1 1 foo11 2 1 foo12 3 3 foo33 4 4 foo44 5 4 foo45 6 4 foo4I can do it with iterrows(), but it is very slow. Is there a faster way with indexing tricks and maybe map()?(The real example has Time, Time2, Copy_from_Time and Copy_from_Time2, so I would need to match several fields, but I guess it would be easy to adapt it) | use map in updating the value in rest_of_data columndf['Rest_of_data']=df['Copy_from_Time'].map(df.set_index('Time')['Rest_of_data'])df Time Copy_from_Time Rest_of_data0 1 1 foo11 2 1 foo12 3 3 foo33 4 4 foo44 5 4 foo45 6 4 foo4 |
How to combine two columns I have a merged Pandas dataframe in the following formatindexvalue_xvalue_y0nan313nan2nannan3-1146nan5nan66-1nan7-168nannanSince the original dataframes have the value field, therefore value_x and value_y column is gnerated during the merge process. I would like to merge the two columns so the final column would look like:indexvalue_xvalue_yvalue0nan3313nan32nannannan3nan1146nan65nan666-1nan-17nan668nannannanIn addition, I would like to know if I could avoid the column combining process during the merge process?Thanks in advance | You can use maxdf["value"] = df[["value_x", "value_y"]].max(axis=1)as this will pick the non-nan value for each row. For this question:In addition, I would like to know if I could avoid the column combining process during the merge process?the answer depends on what the two dataframes were before the merge. |
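A short illustration of the max(axis=1) idea on a few made-up rows in the spirit of the question's table; NaN is skipped, so the surviving value is kept per row and all-NaN rows stay NaN:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"value_x": [np.nan, 3, np.nan, 6, -1, np.nan],
                   "value_y": [3, np.nan, np.nan, np.nan, np.nan, 6]})
df["value"] = df[["value_x", "value_y"]].max(axis=1)
print(df)
```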
Create a new dataframe from an old dataframe where the new dataframe contains row-wise avergae of columns at different locations in the old dataframe I have a dataframe called "frame" with 16 columns and 201 rows. A screenshot is attached that provides an example dataframeenter image description herePlease note the screenshot is just an example, the original dataframe is much larger.I would like to find an efficient way (maybe using for loop or writing a function) to row-wise average different columns in the dataframe. For instance, to find an average of column "rep" and "rep1" and column "repcycle" and "repcycle1" (similarly for set and setcycle) and save in a new dataframe with only averaged columns.I have tried writing a code using ilocnewdf= frame[['sample']].copy()newdf['rep_avg']=frame.iloc[:, [1,5]].mean(axis=1) #average row-wisenewdf['repcycle_avg']=frame.iloc[:, [2,6]].mean(axis=1)newdf['set_avg']=frame.iloc[:, [3,7]].mean(axis=1) #average row-wise newdf['setcycle_avg']=frame.iloc[:, [4,8]].mean(axis=1)newdf.columns = ['S', 'Re', 'Rec', 'Se', 'Sec']The above code does the job, but it is tedious to note the locations for every column. I would rather like to automate this process since this is repeated for other data files too. | based on your desire "I would rather like to automate this process since this is repeated for other data files too"what I can think of is this below:in [1]: frame = pd.read_csv('your path')result shown below, now as you can see what you want to average are columns 1,5 and 2,6 and so on.out [1]: sample rep repcycle set setcycle rep1 repcycle1 set1 setcycle10 66 40 4 5 3 40 4 5 31 78 20 5 6 3 20 5 6 32 90 50 6 9 4 50 6 9 43 45 70 7 3 2 70 7 7 2so, we need to create 2 listsin [2]: import numpy as np list_1 = np.arange(1,5,1).tolist()in [3]: list_1out[3]: [1,2,3,4]this for the first half you want to average[rep,repcycle,set,setcycle]in [4]: list_2 = [x+4 for x in list_1]in [5]: list_2out[5]: [5,6,7,8]this for the second half you want to average[rep1,repcycle1,set1,setcycle1]in [6]: result = pd.concat([frame.iloc[:, [x,y].mean(axis=1) for x, y in zip(list_1,list_2)],axis=1)in [7]: result.columns = ['Re', 'Rec', 'Se', 'Sec']and now you get what you want, and it's automate, all you need to do is change the two lists from above.in [8]: resultout[8]: Re Rec Se Sec0 40.0 4.0 5.0 3.01 20.0 5.0 6.0 3.02 50.0 6.0 9.0 4.03 70.0 7.0 5.0 2.0 |
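The concat line above appears to be missing a closing bracket after [x,y]; a self-contained version with the bracket restored, using the sample values printed in the answer, looks like this:

```python
import numpy as np
import pandas as pd

frame = pd.DataFrame({
    "sample":   [66, 78, 90, 45],
    "rep":      [40, 20, 50, 70], "repcycle":  [4, 5, 6, 7],
    "set":      [5, 6, 9, 3],     "setcycle":  [3, 3, 4, 2],
    "rep1":     [40, 20, 50, 70], "repcycle1": [4, 5, 6, 7],
    "set1":     [5, 6, 9, 7],     "setcycle1": [3, 3, 4, 2],
})

list_1 = np.arange(1, 5, 1).tolist()   # column positions of the first block
list_2 = [x + 4 for x in list_1]       # matching positions of the "...1" block

result = pd.concat(
    [frame.iloc[:, [x, y]].mean(axis=1) for x, y in zip(list_1, list_2)],
    axis=1)
result.columns = ['Re', 'Rec', 'Se', 'Sec']
print(result)
```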
Using tensorflow when a session is already running on the gpu I am training a neural network with tensorflow 2 (gpu) on my local machine, I'd like to do some tensorflow code in parallel (just loading a model and saving it's graph).When loading the model I get a cuda error. How can I use tensorflow 2 on cpu to load and save a model, when another instance of tensorflow is training on the gpu? 132 self._config = config 133 self._hyperparams['feature_extractor'] = self._get_feature_extractor(hyperparams['feature_extractor'])--> 134 self._input_shape_tensor = tf.constant([input_shape[0], input_shape[1]]) 135 self._build(**self._hyperparams) 136 # save parameter dict for serialization~/.anaconda3/envs/posenet2/lib/python3.7/site-packages/tensorflow_core/python/framework/constant_op.py in constant(value, dtype, shape, name) 225 """ 226 return _constant_impl(value, dtype, shape, name, verify_shape=False,--> 227 allow_broadcast=True) 228 229 ~/.anaconda3/envs/posenet2/lib/python3.7/site-packages/tensorflow_core/python/framework/constant_op.py in _constant_impl(value, dtype, shape, name, verify_shape, allow_broadcast) 233 ctx = context.context() 234 if ctx.executing_eagerly():--> 235 t = convert_to_eager_tensor(value, ctx, dtype) 236 if shape is None: 237 return t~/.anaconda3/envs/posenet2/lib/python3.7/site-packages/tensorflow_core/python/framework/constant_op.py in convert_to_eager_tensor(value, ctx, dtype) 93 except AttributeError: 94 dtype = dtypes.as_dtype(dtype).as_datatype_enum---> 95 ctx.ensure_initialized() 96 return ops.EagerTensor(value, ctx.device_name, dtype) 97 ~/.anaconda3/envs/posenet2/lib/python3.7/site-packages/tensorflow_core/python/eager/context.py in ensure_initialized(self) 490 if self._default_is_async == ASYNC: 491 pywrap_tensorflow.TFE_ContextOptionsSetAsync(opts, True)--> 492 self._context_handle = pywrap_tensorflow.TFE_NewContext(opts) 493 finally: 494 pywrap_tensorflow.TFE_DeleteContextOptions(opts)InternalError: CUDA runtime implicit initialization on GPU:0 failed. Status: out of memory | It took me a while to find this answer:import osos.environ["CUDA_VISIBLE_DEVICES"] = "-1"import tensorflow as tfStarting your code with those lines allows you to run your tf code on CPU (avoid using CUDA is the solution, obviously) while at the same time running a heavy GPU loaded training. |
pandas multiindex - remove rows based on number of sub index Here is my dataframe :df = pd.DataFrame(pd.DataFrame({"C1" : [0.5, 0.9, 0.1, 0.2, 0.3, 0.5, 0.2], "C2" : [200, 158, 698, 666, 325, 224, 584], "C3" : [15, 99, 36, 14, 55, 62, 37]}, index = pd.MultiIndex.from_tuples([(0,0), (1,0), (1,1), (2,0), (2,1), (3,0), (4,0)], names=['L1','L2'])))df : C1 C2 C3L1 L2 0 0 0.5 200 151 0 0.9 158 99 1 0.1 698 362 0 0.2 666 14 1 0.3 325 553 0 0.5 224 624 0 0.2 584 37I would like to keep the rows that only have one value in L1 subindex (0 in that case) in order to get something like that : C1 C2 C3L1 L2 0 0 0.5 200 153 0 0.5 224 624 0 0.2 584 37Please, could you let me know if you have any clue to solve this problem ?Sincerely | Use GroupBy.transform by first level with any column with GroupBy.size and compare by Series.eq and filter by boolean indexing:df1 = df[df.groupby(level=0)['C1'].transform('size').eq(1)]Or extract index of first level by Index.get_level_values and filter with inverted mask by ~ with Index.duplicated and keep=False for all dupes:df1 = df[~df.index.get_level_values(0).duplicated(keep=False)] |
Optimize the Weight of a layer while training CNN I am trying to train a neural network whose last layer like this,add_5_proba = Add()([out_of_1,out_of_2,out_of_3,out_of_4, out_of_5 ])# Here I am adding 5 probability from 5 different layermodel = Model(inputs=inp, outputs=add_5_proba)But now I want to give weight to them ,Like[a * out_of_1, b* out_of_2, c * out_of_3, d * out_of_4, e * out_of_5]and optimize the weights (a,b,c,d,e) during training. How can I do that ? My idea is Using custom Lossfunction it can be done, but I have no idea how to implement this.Thanks in advance for your help. | Just create tf.Variables:a = tf.Variable(1.)b = tf.Variable(1.)c = tf.Variable(1.)d = tf.Variable(1.)e = tf.Variable(1.)add_5_proba = Add()([a * out_of_1, b * out_of_2, c * out_of_3, d * out_of_4, e * out_of_5 ])model = Model(inputs=inp, outputs=add_5_proba)These variables are trainable by default - https://www.tensorflow.org/api_docs/python/tf/Variable. They should be optimized during training. |
What does sess.run( LAYER ) return? I have tried to search around, but oddly enough, I can't find anything similar. Let's say I have a few fully connected layers:fc_1 = tf.contrib.layers.fully_connected(fc_input, 100)fc_2 = tf.contrib.layers.fully_connected(fc_1, 10)fc_3 = tf.contrib.layers.fully_connected(fc_2, 1)When I run these with sess.run(...) I get a tensor back. What is this tensor? Is it the weights? Gradients? Does sess.runreturn this for all types of layers we give it? | A fully-connected layer is a math operation that transforms an input tensor into an output tensor. The output tensor contains the values returned by the layer's activation function, which operates on the sum of the weighted values in the layer's input tensor.When you execute sess.run(fc_3), TensorFlow performs the transformations for the three layers and gives you the output tensor produced by the third layer. |
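For illustration, a small self-contained TF 1.x sketch (matching the tf.contrib usage above; layer sizes and batch size are arbitrary) showing that sess.run on the last layer returns a plain NumPy array of activations rather than weights or gradients:

import numpy as np
import tensorflow as tf  # assumes TF 1.x

fc_input = tf.placeholder(tf.float32, shape=(None, 8))
fc_1 = tf.contrib.layers.fully_connected(fc_input, 100)
fc_2 = tf.contrib.layers.fully_connected(fc_1, 10)
fc_3 = tf.contrib.layers.fully_connected(fc_2, 1)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    out = sess.run(fc_3, feed_dict={fc_input: np.random.rand(4, 8)})
    print(type(out), out.shape)  # <class 'numpy.ndarray'> (4, 1)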
Bin using cumulative sum rather than observations in python Let's say that I have a data frame that has a column like this:Weight110.750.50.250.51111I want to create two bins and add a column to my data frame that shows which bin each row is in, but I don't want to bin on the observations (i.e. the first 5 observations go to bin 1 and the last five to bin 2). Instead, I want to bin such that the sum of weight for each bin is equal or as close to equal as possible without changing the order of the column.So, I want the result to beWeight I want Not this1 1 11 1 10.75 1 10.5 1 10.25 1 10.5 1 21 2 21 2 21 2 21 2 2Is there something built into Pandas that already does this, or can someone share any ideas on how to make this happen? Thanks! | This should do it:df = pd.DataFrame( {'Weight': [1, 1, 0.75, 0.5, 0.25, 0.5, 1, 1, 1, 1]})weight_sum = df.Weight.sum()df['bin'] = 1df.loc[df.Weight.cumsum() > weight_sum / 2, 'bin'] = 2print(df)Output: Weight bin0 1.00 11 1.00 12 0.75 13 0.50 14 0.25 15 0.50 16 1.00 27 1.00 28 1.00 29 1.00 2
Find the remainder mask between 2 masks in numpy for 2D array Let's say I have a 2D array:main = np.random.random((300, 200))And I have two masks for this array:e.g.,mask1 = list((np.random.randint((100), size = 50), np.random.randint((200), size = 50)))mask2 = list((np.random.randint((20), size = 10), np.random.randint((20), size = 10)))I want to substitute the main values in the 2D array like:main[mask1]=2main[mask2]=1which works great, but I also want to substitute all the indexes that are not in mask1 nor mask2 by zero.I thought about something like:main[~mask1] & main[~mask2] = 0which is leading me nowhere, so any help is appreciated! | I think for your requirement a better approach is constructing a zero-filled array of the same shape as main and assigning 1 and 2 using mask1 and mask2:main = np.zeros(main.shape)main[mask1]=2main[mask2]=1
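If you would rather modify main in place instead of starting from zeros, a boolean "covered" array gives the complement of the two masks directly; a sketch with the masks written as tuples of index arrays:

import numpy as np

main = np.random.random((300, 200))
mask1 = (np.random.randint(100, size=50), np.random.randint(200, size=50))
mask2 = (np.random.randint(20, size=10), np.random.randint(20, size=10))

# Mark every position hit by either mask
covered = np.zeros(main.shape, dtype=bool)
covered[mask1] = True
covered[mask2] = True

main[~covered] = 0   # everything outside mask1 and mask2 becomes zero
main[mask1] = 2
main[mask2] = 1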
How to color nodes within networkx using a column in Pandas I have this dataset: User Val Color 92 Laura NaN red100 Laura John red148 Laura Mike red168 Laura Mirk red293 Laura Sara red313 Laura Sim red440 Martyn Pierre orange440 Martyn Hugh orange440 Martyn Lauren orange440 Martyn Sim orangeI would like to assign to each User (no duplicates) the corresponding colour: in this example, the node called Laura should be red; the node called Martyn should be orange; the other nodes (John, Mike, Mirk, Sara, Sim, Pierrre, Hugh and Lauren) should be in green.I have tried to use this column (Color) to define a set of colours within my code by using networkx, but the approach seems to be wrong, since the nodes are not coloured as I previously described, i.e. as I would expect.Please see below the code I have used:I am using the following code:G = nx.from_pandas_edgelist(df, 'User', 'Val')labels = [i for i in dict(G.nodes).keys()]labels = {i:i for i in dict(G.nodes).keys()}colors = df[["User", "Color"]].drop_duplicates()["Color"]plt.figure(3,figsize=(30,50)) pos = nx.spring_layout(G) nx.draw(G, node_color = df.Color, pos = pos)net = nx.draw_networkx_labels(G, pos = pos) | Looks like you're in the right track, but got a couple of things wrong. Along with using drop_duplicates, build a dictionary and use it to lookup the color in nx.draw. Also, you don't need to construct a labels dictionary, nx.draw can handle that for you.G = nx.from_pandas_edgelist(df, 'User', 'Val')d = dict(df.drop_duplicates(subset=['User','Color'])[['User','Color']] .to_numpy().tolist())# {'Laura': 'red', 'Martyn': 'orange'}nodes = G.nodes()plt.figure(figsize=(10,6)) pos = nx.draw(G, with_labels=True, nodelist=nodes, node_color=[d.get(i,'lightgreen') for i in nodes], node_size=1000) |
Placing dataframes into excel sheets i have two dataframes; df and df2. I need to place them into an excel, with df being in one sheet and df2 being in another sheet. What would be the easiest way to do this in python? | Refer Documentation:with pd.ExcelWriter('output.xlsx') as writer: df.to_excel(writer, sheet_name='Sheet_name_1') df2.to_excel(writer, sheet_name='Sheet_name_2') |
Website crawling based on keyword in Excel file I would like to crawl the website price based on the search keyword on my keyword.xlsx file , the first input should be dyson, second is lego, third input should be sony, but my result in the attached image only has dyson, do you know why?image is hereimport timefrom random import randintimport astimport requestsfrom bs4 import BeautifulSoup #A python library to help you to exract HTML informationheaders = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36'}import xlrdimport pandas as pddf_keywords = pd.read_excel('keyword.xlsx', sheet_name='Sheet1', usecols="A")workbook = xlrd.open_workbook('keyword.xlsx')worksheet = workbook.sheet_by_name('Sheet1')index=df_keywords.indexnumber_of_row=len(index)print(number_of_row)#worksheet.cell(2,0).valuefor i in range (1,number_of_row+1): keyword_input=worksheet.cell(i,0).value print (keyword_input) prefix="https://tw.buy.yahoo.com/search/product?disp=list&p=" sortbyprice="&sort=price" url=prefix+keyword_input+sortbyprice r=requests.get(url) soup=BeautifulSoup(r.text) for i in soup.findAll("div", {"class":"ListItem_price_2CMKZ"}): lowest=i.find("span",{"class":"ListItem_priceContent_5WbI9"}).text.strip() print(lowest) lowest_first=lowest.split("",1)[0] print(lowest_first) | There's a few issues here. First, I'm not sure what lowest_first=lowest.split("",1)[0] is supposed to be doing in your code. It is throwing an error in your code preventing it from hitting the next iteration of your for loop. You can't split a string on nothing (""). If you are trying to get rid of the '$', you can just do lowest[1:].Second, you can accomplish your task directly from pandas without having to call xlrd (which is often used as the backend engine for reading excel files (along with openpyxl).import pandas as pddf_keywords = pd.read_excel('keyword.xlsx')for keyword in df_keywords['keyword'].to_list(): prefix="https://tw.buy.yahoo.com/search/product?disp=list&p=" print(prefix + keyword) Outputhttps://tw.buy.yahoo.com/search/product?disp=list&p=dysonhttps://tw.buy.yahoo.com/search/product?disp=list&p=legohttps://tw.buy.yahoo.com/search/product?disp=list&p=sony |
Pandas DataFrame create new columns based on a logic dependent on other columns with cumulative counting rule I have a DataFrame originally as follows:d1={'on':[0,1,0,1,0,0,0,1,0,0,0],'off':[0,0,0,0,0,0,1,0,1,0,1]}My end objective is to add a new column 'final' where it will show a value of '1' once an 'on' indicator' is triggered (ignoring any duplicate) but then 'final' is switched back to '0' if the 'off' indicator is triggered AND ONLY when the 'on' sign was triggered for 3 rows. I did try coming up with any code but failed to tackle it at all.My desired output is as follows:Column 'final' is first triggered in row 1 when the 'on' indicator is switched to 1. 'on' indictor in row 3 is ignored as it is just a redundant signal. 'off' indictor at row 6 is triggered and the 'final' value is switched back to 0 because it has been turned on for more than 3 rows already, unlike the case in row 8 where the 'off' indicator is triggered but the 'final' value cannot be switched off until encountering another 'off' indicator in row 10 because that was the time when the 'final' value has been switched off for > 3 rows.Thank you for assisting. Appreciate. | One solution using a "state machine" implemented with yield:def state_machine(): on, off = yield cnt, current = 0, on while True: current = int(on or current) cnt += current if off and cnt > 3: cnt = 0 current = 0 on, off = yield currentmachine = state_machine()next(machine)df = pd.DataFrame(d1)df['final'] = df.apply(lambda x: machine.send((x['on'], x['off'])), axis=1)print(df)Prints: on off final0 0 0 01 1 0 12 0 0 13 1 0 14 0 0 15 0 0 16 0 1 07 1 0 18 0 1 19 0 0 110 0 1 0 |
How to change a non top 3 values columns in a dataframe in Python I have a dataframe that was made out of BOW results called df_BOWdataframe looks like thisdf_BOWOut[42]: blue drama this ... book mask0 3 0 1 ... 1 01 0 1 0 ... 0 42 0 1 3 ... 6 03 6 0 0 ... 1 04 7 2 0 ... 0 0 ... ... ... ... ... ... ...81991 0 0 0 ... 0 181992 0 0 0 ... 0 181993 3 3 5 ... 4 181994 4 0 0 ... 0 081995 0 1 0 ... 9 2this data frame has around 12,000 column and 82,000 rowsI want to reduce the number of columns by doing thisfor each row keep only top 3 columns and make everything else 0so for row number 543 ( the original record looks like this) blue drama this ... book mask543 1 11 21 ... 7 4It should become like this blue drama this ... book mask543 0 11 21 ... 7 0only top 3 columns kept (drama, this, book) all other columns became zeros blue drama this ... book mask929 5 3 2 ... 4 3will become blue drama this ... book mask929 5 3 0 ... 4 0at the end of I should remove all columns that are zeros for all rowsI start putting this function to loop all rows and all columnsfor i in range(0, len(df_BOW.index)): Col1No = 0 Col1Val = 0 Col2No = 0 Col2Val = 0 Col3No = 0 Col3Val = 0 for j in range(0, len(df_BOW.columns)): if (df_BOW.iloc[i,j] > min(Col1Val, Col2Val, Col3Val)): if (Col1Val <= Col2Val) & (Col1Val <= Col3Val): df_BOW.iloc[i,Col1No] = 0 Col1Val = df_BOW.iloc[i,j] Col1No = j elif (Col2Val <= Col1Val) & (Col2Val <= Col3Val): df_BOW.iloc[i,Col2No] = 0 Col2Val = df_BOW.iloc[i,j] Col2No = j elif (Col3Val <= Col1Val) & (Col3Val <= Col2Val): df_BOW.iloc[i,Col3No] = 0 Col3Val = df_BOW.iloc[i,j] Col3No = j I don't think this loop is the best way to do that.beside it will become impossible to do for top 50 columns with this loop.is there a better way to do that? | You can use pandas.Series.nlargest, pass keep as first to include the first record only if multiple value exists for top 3 largest values. Finally use fillna(0) to fill all the NaN columns with 0df.apply(lambda row: row.nlargest(3, keep='first'), axis=1).fillna(0)OUTPUT: blue book drama mask this0 0.0 1.0 0.0 0.0 1.01 1.0 0.0 1.0 4.0 0.02 2.0 6.0 0.0 0.0 3.03 3.0 1.0 0.0 0.0 0.04 4.0 0.0 2.0 0.0 0.05 0.0 0.0 0.0 1.0 0.06 0.0 0.0 0.0 1.0 0.07 3.0 4.0 0.0 0.0 5.08 4.0 0.0 0.0 0.0 0.09 0.0 9.0 1.0 2.0 0.0 |
CNN-LSTM with TimeDistributed Layers behaving weirdly when trying to use tf.keras.utils.plot_model I have a CNN-LSTM that looks as follows;SEQUENCE_LENGTH = 32BATCH_SIZE = 32EPOCHS = 30n_filters = 64n_kernel = 1n_subsequences = 4n_steps = 8def DNN_Model(X_train): model = Sequential() model.add(TimeDistributed( Conv1D(filters=n_filters, kernel_size=n_kernel, activation='relu', input_shape=(n_subsequences, n_steps, X_train.shape[3])))) model.add(TimeDistributed(Conv1D(filters=n_filters, kernel_size=n_kernel, activation='relu'))) model.add(TimeDistributed(MaxPooling1D(pool_size=2))) model.add(TimeDistributed(Flatten())) model.add(LSTM(100, activation='relu')) model.add(Dense(100, activation='relu')) model.add(Dense(1, activation='sigmoid')) model.compile(loss='mse', optimizer='adam') return modelI'm using this CNN-LSTM for a multivariate time series forecasting problem. the CNN-LSTM input data comes in the 4D format: [samples, subsequences, timesteps, features]. For some reason, I need TimeDistributed Layers; or I get errors like ValueError: Input 0 of layer conv1d is incompatible with the layer: expected ndim=3, found ndim=4. Full shape received: [None, 4, 8, 35]. I think this has to do with the fact that Conv1D is officially not meant for time series, so to preserve time-series data shape we need to use a wrapper layer like TimeDistributed. I don't really mind using TimeDistributed layers - They're wrappers and if they make my model work I am happy. However, when I try to visualize my model with file = 'CNN_LSTM_Visualization.png' tf.keras.utils.plot_model(model, to_file=file, show_layer_names=False, show_shapes=False)The resulting visualization only shows the Sequential():I suspect this has to do with the TimeDistributed layers and the model not being built yet. I cannot call model.summary() either - it throws ValueError: This model has not yet been built. Build the model first by calling build()or callingfit()with some data, or specify aninput_shape argument in the first layer(s) for automatic build Which is strange because I have specified the input_shape, albeit in the Conv1D layer and not in the TimeDistributed wrapper.I would like a working model together with a working tf.keras.utils.plot_model function. Any explanation as to why I need TimeDistributed and why it makes the plot_model function behave weirdly would be greatly awesome. | An alternative to using an Input layer is to simply pass the input_shape to the TimeDistributed wrapper, and not the Conv1D layer:def DNN_Model(X_train): model = Sequential() model.add(TimeDistributed( Conv1D(filters=n_filters, kernel_size=n_kernel, activation='relu'), input_shape=(n_subsequences, n_steps, X_train.shape[3]))) model.add(TimeDistributed(Conv1D(filters=n_filters, kernel_size=n_kernel, activation='relu'))) model.add(TimeDistributed(MaxPooling1D(pool_size=2))) model.add(TimeDistributed(Flatten())) model.add(LSTM(100, activation='relu')) model.add(Dense(100, activation='relu')) model.add(Dense(1, activation='sigmoid')) model.compile(loss='mse', optimizer='adam') return model |
problem with pandas drop_duplicates removing empty values I'm using drop_duplicates to remove duplicates from my dataframe based on a column. The problem is that this column is empty for some entries and those rows end up being removed too. Is there a way to make the function ignore the empty values? Here is an example: Title summary 0 TITLE A summaryA 1 TITLE A summaryB 2 summaryC 3 summaryD Using this: data.drop_duplicates(subset ="TITLE", keep = 'first', inplace = True)I get a result like this: Title summary 0 TITLE A summaryA 2 summaryC but since the last two rows are not duplicates I want to keep them. Is there a way for drop_duplicates to ignore empty values? | Fill missing values with the index number? Maybe not the prettiest way but it works:df = pd.DataFrame( {'Title':['TITLE A', 'TITLE A', None, None], 'summary':['summaryA', 'summaryB', 'summaryC', 'summaryD']} )df['_id'] = df.indexdf['_id'] = df['_id'].apply(str)df['Title2'] = df['Title'].fillna(df['_id']) df.drop_duplicates(subset ="Title2", keep = 'first')
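An alternative sketch that avoids the helper column: de-duplicate only the rows that actually have a Title and concatenate the NaN rows back unchanged (sort_index restores the original row order):

import pandas as pd

df = pd.DataFrame({'Title': ['TITLE A', 'TITLE A', None, None],
                   'summary': ['summaryA', 'summaryB', 'summaryC', 'summaryD']})

# Keep all NaN-title rows; drop duplicates only among rows with a Title
deduped = df[df['Title'].notna()].drop_duplicates(subset='Title', keep='first')
result = pd.concat([deduped, df[df['Title'].isna()]]).sort_index()
print(result)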
Adding labels at end of line chart in Altair So I have been trying to get it so there is a label at the end of each line giving the name of the country, then I can remove the legend. Have tried playing with transform_filter but no luck.I used data from here https://ourworldindata.org/coronavirus-source-data I cleaned and reshaped the data so it looks like this:- index days date country value0 1219 0 2020-03-26 Australia 11.01 1220 1 2020-03-27 Australia 13.02 1221 2 2020-03-28 Australia 13.03 1222 3 2020-03-29 Australia 14.04 1223 4 2020-03-30 Australia 16.05 1224 5 2020-03-31 Australia 19.06 1225 6 2020-04-01 Australia 20.07 1226 7 2020-04-02 Australia 21.08 1227 8 2020-04-03 Australia 23.09 1228 9 2020-04-04 Australia 30.0import altair as altcountries_list = ['Australia', 'China', 'France', 'Germany', 'Iran', 'Italy','Japan', 'South Korea', 'Spain', 'United Kingdom', 'United States']chart = alt.Chart(data_core_sub).mark_line().encode( alt.X('days:Q'), alt.Y('value:Q', scale=alt.Scale(type='log')), alt.Color('country:N', scale=alt.Scale(domain=countries_list,type='ordinal')), )labels = alt.Chart(data_core_sub).mark_text().encode( alt.X('days:Q'), alt.Y('value:Q', scale=alt.Scale(type='log')), alt.Text('country'), alt.Color('country:N', legend=None, scale=alt.Scale(domain=countries_list,type='ordinal')), ).properties(title='COVID-19 total deaths', width=600) alt.layer(chart, labels).resolve_scale(color='independent')This is the current mess that the chart is in.How would I go about just showing the last 'country' name?EDITHere is the result. I might look at adjusting some of the countries separately as adjusting as a group means that some of the labels are always badly positioned no matter what I do with the dx and dy alignment. | You can do this by aggregating the x and y encodings. You want the text to be at the maximum x value, so you can use a 'max' aggregate in x. For the y-value, you want the y value associated with the max x-value, so you can use an {"argmax": "x"} aggregate.With a bit of adjustment of text alignment, the result looks like this:labels = alt.Chart(data_core_sub).mark_text(align='left', dx=3).encode( alt.X('days:Q', aggregate='max'), alt.Y('value:Q', aggregate={'argmax': 'days'}, scale=alt.Scale(type='log')), alt.Text('country'), alt.Color('country:N', legend=None, scale=alt.Scale(domain=countries_list,type='ordinal')), ).properties(title='COVID-19 total deaths', width=600) |
Using the Python WITH statement to create temporary variable Suppose I have Pandas data. Any data. I import seaborn to make a colored version of the correlation between variables. Instead of passing the correlation expression into the heatmap function, and instead of creating a one-time variable to store the correlation output, how can I use the with statement to create a temporary variable that no longer exists after the heatmap is plotted?Doesn't work# Assume: seaborn = sns, Data is heatmapablewith mypandas_df.correlation(method="pearson") as heatmap_input: # possible other statements sns.heatmap(heatmap_input) # possible other statementsIf this existed, then after seaborn plots the map, heatmap_input no longer exists as a variable. I would like that functionality.Long way# this could be temporary but is now globaltcbtbing = mypandas_df.correlation(method="pearson")sns.heatmap(tcbtbing)Compact waysns.heatmap( mypandas_df.correlation(method="pearson") )I'd like to use the with statement (or similar short) construction to avoid the Long Way and the Compact way, but leave room for other manipulations, such as to the plot itself. | You need to implement __enter__ and __exit__ for the class you want to use it with. See: Implementing use of 'with object() as f' in custom class in python
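A minimal sketch of what that answer points at, using contextlib instead of writing __enter__/__exit__ by hand; .corr() is assumed to be the pandas method the question's .correlation() refers to. Note that Python's with statement does not actually delete the bound name after the block — the pattern only gives the value a clearly scoped, short-lived role:

from contextlib import contextmanager

import pandas as pd
import seaborn as sns

@contextmanager
def corr_of(df, method="pearson"):
    # Yield the correlation matrix as a temporary value for the with block
    yield df.corr(method=method)

df = pd.DataFrame({"a": [1, 2, 3, 4], "b": [2, 1, 4, 3]})
with corr_of(df) as heatmap_input:
    sns.heatmap(heatmap_input)
# heatmap_input is still bound here; with-blocks manage resources, not name scope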
Cross-validation of neural network: How to treat the number of epochs? I'm implementing a pytorch neural network (regression) and want to identify the best network topology, optimizer etc.. I use cross validation, because I have x databases of measurements and I want to evaluate whether I can train a neural network with a subset of the x databases and apply the neural network to the unseen databases. Therefore, I also introduce a test database, which I doesn't use in the phase of the hyperparameter identification.I am confused on how to treat the number of epochs in cross validation, e.g. I have a number of epochs = 100. There are two options:The number of epochs is a hyperparameter to tune. In each epoch, the mean error across all cross validation iterations is determined. After models are trained with all network topologies, optimizers etc. the model with the smallest mean error is determined and has parameters like: -network topology: 1-optimizer: SGD-number of epochs: 54To calculate the performance on the test set, a model is trained with exactly these parameters (number of epochs = 54) on the training and the validation data. Then it is applied and evaluated on the test set.The number of epochs is NOT a hyperparameter to tune. Models are trained with all the network topologies, optimizers etc. For each model, the number of epochs, where the error is the smallest, is used. The models are compared and the best model can be determined with parameters like:-network topology: 1 -optimizer: SGDTo calculate the performance on the test data, a “simple” training and validation split is used (e.g. 80-20). The model is trained with the above parameters and 100 epochs on the training and validation data. Finally, a model with a number of epochs yielding the smallest validation error, is evaluated on the test data.Which option is the correct or the better one? | The number of epochs is better not to be fine-tuned.Option 2 is a better option.Actually, if the # of epochs is fixed, you need not to have validation set. Validation set gives you the optimal epoch of the saved model. |
pandas concat two columns into a new one I have a csv file with the following columns:timestamp. message. name. DestinationUsername. sourceUsername13.05. hello. hello. name1. 13.05. hello. hello. name2. 43565 What I would like to achieve is to merge together DestinationUsername and sourceUsername into a new column called ID. What I have done so far is the following:f=pd.read_csv('file.csv')f['userID'] = f.destinationUserName + f.sourceUserNamekeep_col = ['@timestamp', 'message', 'name', 'destinationUserName', 'sourceUserName', 'userID']new_f = f[keep_col]new_f.to_csv("newFile.csv", index=False)But this does not work as expected: in the output I can see that if one of the columns destinationUserName or sourceUserName is empty, then the userID is empty; the userID gets populated only if both destinationUserName and sourceUserName are already populated.Can anyone help me understand how I can get around this problem, please?And please, if you need more info just ask me | You can typecast the columns to string and then remove 'nan' with the replace() method:df['ID']=(df['DestinationUsername'].astype(str) + df['sourceUsername'].astype(str).replace('nan','',regex=True)) ORdf['ID']=df[['DestinationUsername','sourceUsername']].astype(str) .agg(''.join,1) .replace('nan','',regex=True)Note: you can also use apply() in place of the agg() methodoutput of df['ID']:0 name1.1 name2.43565.0dtype: object
Transform or change values of columns in based on values of others columns I have a dataframe that contains 5 columns. What I would like to do is to change the last 4 columns to the first column.Basically if the value of the first column is below a certain threshold, the following columns are modified and if this value is higher than the threshold there is no change.So I tried this :import pandas as pddf = pd.DataFrame({ 'col1' : [0.1, 0.3, 0.1, 0.2], 'col2' : [2,4,3,7], 'col3' : [3,4,4,9], 'col4' : [4,2,2,6], 'col5' : [0.3, 2.1, 1.0, .9],})def motif(col1, col2, col3, col4, col5): col2 = col2 col3 = col3 col4 = col4 col5 = col5 if col1 <=.15: col2 = col2 * .15 col3 = col3 * .15 col4 = col4 * .15 col5 = col5 * .15 return col2, col3, col4, col5 else: return col2, col3, col4, col5df.apply(lambda x: modify(x[col1], x[col2], x[col3], x[col4], x[col5]), axis=1)But this does not work.If you have any ideas I would be very grateful | We can use loc to select rows where col1 is less than or equal to .15 then multiply the rest of the columns by .15:df.loc[df['col1'] <= 0.15, 'col2':] *= 0.15df: col1 col2 col3 col4 col50 0.1 0.30 0.45 0.6 0.0451 0.3 4.00 4.00 2.0 2.1002 0.1 0.45 0.60 0.3 0.1503 0.2 7.00 9.00 6.0 0.900Naturally other column selections work if all columns after col2 is overly broad:df.loc[df['col1'] <= 0.15, ['col2', 'col3', 'col4', 'col5']] *= 0.15df.loc[df['col1'] <= 0.15, 'col2':'col5'] *= 0.15The mask can also be saved and reused if different columns need different modifications:m = df['col1'] <= 0.15df.loc[m, 'col2':'col4'] *= 0.15df.loc[m, 'col5'] *= 0.5 # col5 is different than col2-4df: col1 col2 col3 col4 col50 0.1 0.30 0.45 0.6 0.151 0.3 4.00 4.00 2.0 2.102 0.1 0.45 0.60 0.3 0.503 0.2 7.00 9.00 6.0 0.90The apply can work (although it is slower and a lot more code), but since apply can produce both aggregated and unaggregated results the overwritten columns will need explicitly defined and the result needs to be a Series not a tuple:def modify(col1, col2, col3, col4, col5): if col1 <= .15: col2 = col2 * .15 col3 = col3 * .15 col4 = col4 * .15 col5 = col5 * .15 return pd.Series([col2, col3, col4, col5])df[['col2', 'col3', 'col4', 'col5']] = df.apply(lambda x: modify( x['col1'], x['col2'], x['col3'], x['col4'], x['col5']), axis=1)df: col1 col2 col3 col4 col50 0.1 0.30 0.45 0.6 0.0451 0.3 4.00 4.00 2.0 2.1002 0.1 0.45 0.60 0.3 0.1503 0.2 7.00 9.00 6.0 0.900 |
KeyError: 'Failed to format this callback filepath: Reason: \'lr\'' I recently switched form Tensorflow 2.2.0 to 2.4.1 and now I have a problem with ModelCheckpoint callback path. This code works fine if I use an environment with tf 2.2 but get an error when I use tf 2.4.1.checkpoint_filepath = 'path_to/temp_checkpoints/model/epoch-{epoch}_loss-{lr:.2e}_loss-{val_loss:.3e}'checkpoint = ModelCheckpoint(checkpoint_filepath, monitor='val_loss')history = model.fit(training_data, training_data, epochs=10, batch_size=32, shuffle=True, validation_data=(validation_data, validation_data), verbose=verbose, callbacks=[checkpoint])Error:KeyError: 'Failed to format this callback filepath: "path_to/temp_checkpoints/model/epoch-{epoch}_loss-{lr:.2e}_loss-{val_loss:.3e}". Reason: 'lr'' | In ModelCheckpoint, formatted name of filepath argument, can only be contain: epoch + keys in logs after epoch ends.You can see available keys in logs like this:class CustomCallback(tf.keras.callbacks.Callback): def on_epoch_end(self, epoch, logs=None): keys = list(logs.keys()) print("Log keys: {}".format(keys))model.fit(..., callbacks=[CustomCallback()])If you run code above, you will see something like this:Log keys: ['loss', 'mean_absolute_error', 'val_loss', 'val_mean_absolute_error']Which shows you available keys you can use (plus epoch) and lr is not available for you (You have used 3 keys: epoch, lr and val_loss in filepath name).Solution:You can add learning rate to logs yourself:import tensorflow.keras.backend as Kclass CustomCallback(tf.keras.callbacks.Callback): def on_epoch_end(self, epoch, logs=None): logs.update({'lr': K.eval(self.model.optimizer.lr)}) keys = list(logs.keys()) print("Log keys: {}".format(keys)) #you will see now `lr` availablecheckpoint_filepath = 'path_to/temp_checkpoints/model/epoch-{epoch}_loss-{lr:.2e}_loss-{val_loss:.3e}'checkpoint = ModelCheckpoint(checkpoint_filepath, monitor='val_loss')history = model.fit(training_data, training_data, epochs=10, batch_size=32, shuffle=True, validation_data=(validation_data, validation_data), verbose=verbose, callbacks=[checkpoint, CustomCallback()]) |
Reading a datafile (abalone) and converting to numpy array When I try to load the UCI abalone data file as follows:dattyp = [('sex',object),('length',float),('diameter',float),('height',float),('whole weight',float),('shucked weight',float),('viscera weight',float),('shell weight',float),('rings',int)]abalone_data = np.loadtxt('C:/path/abalone.dat',dtype = dattyp, delimiter = ',')print(abalone_data.shape)print(abalone_data[0])>>(4177,) ('M', 0.455, 0.365, 0.095, 0.514, 0.2245, 0.101, 0.15, 15)Abalone_data is an array with 1 column instead of 9. Later on, when I want to add other data as extra columns, this gives me problems. Is there any way to transform this data to a (4177, 9) matrix where I can do the usual adding of columns etc?Thanks! | You can use pandas:import pandas as pdabalone_data = pd.read_csv('C:/path/abalone.dat', header=None).valuesabalone_data.shapeOUtput:(4177, 9) |
Python error messages including "ImportError: cannot import name 'string_int_label_map_pb2'" So I have been trying to get a captcha solver I found here to work for quite some time now. I have fixed many weird problems with that time, but I honestly don't know what's wrong this time. So I am starting the program and I get some error messages. I am using python 3.6.2 and tensorflow 1.15 for this and this is the whole message:Traceback (most recent call last): File "C:\Users\Linus\Desktop\captcha solver\main_.py", line 1, in <module> from CAPTCHA_object_detection import * File "C:\Users\Linus\Desktop\captcha solver\CAPTCHA_object_detection.py", line 19, in <module> from object_detection.utils import label_map_util File "C:\Users\Linus\AppData\Local\Programs\Python\Python36\lib\site-packages\object_detection\utils\label_map_util.py", line 21, in <module> from object_detection.protos import string_int_label_map_pb2ImportError: cannot import name 'string_int_label_map_pb2'I have been focusing on the last line from object_detection.protos import string_int_label_map_I think there is a stackoverflow regarding this last line already, but I have been trying to fix this in different ways already. I somehow came to the idea of installing protoc but ig the installation didn't even work. Can someone help me and/or bring me on the right track? I guess I should also mention that I am quite new to this. | Read through the answer, few them contains step to step guide on installing protoc,many useful answers on issues thread.https://github.com/tensorflow/models/issues/1595 |
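For reference, the usual fix for this particular import error in the TensorFlow Object Detection API is to compile the .proto files, which is what generates string_int_label_map_pb2.py. A hedged sketch, assuming protoc is installed and on PATH and that the working directory is the cloned models/research folder:

import glob
import subprocess

# Equivalent to: protoc object_detection/protos/*.proto --python_out=.
for proto in glob.glob("object_detection/protos/*.proto"):
    subprocess.run(["protoc", proto, "--python_out=."], check=True)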
How am I able to separate a DataFrame into many DataFrames, based on a label and then do computation for each DataFrame? I have the following DataFrame:I am trying to make one DataFrame for each unique value in df1['Tub']. Right now I am creating a dictionary and trying to append to each new DataFrame instances where there is a matching Tub. I think my logic is on the right track.tub_df = {}tubs = []for tub in df1['Tub']: if tub not in tubs: tubs.append(tub)#['Tub 1', 'Tub 2', 'Tub 3']for tub_name in tubs: for tub_row in df1['Tub']: if tub_row == tub_name: tub_df[tub] = pd.DataFrame.copy(df1.loc[tub_row])Thank you for any help. | Here is a shorter version, identify unique values in Tub & use dict comprehension to create a filtered dict{tub: df1[df1.Tub.eq(tub)] for tub in df1.Tub.unique()} |
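The same split can also be done in a single pass with groupby, without filtering once per unique Tub value; a sketch on a hypothetical stand-in for df1 (the real columns come from the screenshot in the question):

import pandas as pd

# Hypothetical stand-in for df1; only the 'Tub' column matters here
df1 = pd.DataFrame({'Tub': ['Tub 1', 'Tub 2', 'Tub 1', 'Tub 3'],
                    'value': [10, 20, 30, 40]})

# {tub_label: sub-DataFrame}, one dict entry per unique Tub
tub_df = {tub: group for tub, group in df1.groupby('Tub')}
print(tub_df['Tub 1'])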
How do I change the format of a date from dd-mm-yyyy to dd/mm/yyyy in a csv file I have a CSV with dates in this format (shown in a screenshot in the original post) which need to be changed. How can I do that? | import pandas as pd# I have taken an example. You could do a pd.read_csv(filename) to read from file#Input in dd-mm-yyyy formatdf = pd.DataFrame({'DOB': {0: '26-01-2016', 1: '26-01-2016'}})#Convert to pandas datetime objectdf['DOB'] = pd.to_datetime(df.DOB)#Convert to dd/mm/yyyy format('%d/%m/%Y')df['DOB'] = df['DOB'].dt.strftime('%d/%m/%Y')You can read more here: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.dt.strftime.html
Equalizing indexes of Pandas Series to fit into Dataframe I have a pandas Dataframe that uses a datetime index. I want to add a column onto the dataframe that returns an average of a particular slice of the data. This column does not always include the entire index, I need a way to fill in the missing portions with zeros.Dataframe:[2020-7-26 | 29.3] [2020-8-02 | 28.2] [2020-8-09 | 26.7] [2020-8-16 | 24.1] [2020-8-30 | 23.2] Series I wish to append: Note the missing august 16th[2020-7-26 | 20.3] [2020-8-02 | 21.2] [2020-8-09 | 23.7] [2020-8-30 | 22.2] Is there a way to transform this series into:[2020-7-26 | 20.3] [2020-8-02 | 21.2] [2020-8-09 | 23.7] [2020-8-16 | 0.0] [2020-8-30 | 22.2] In order to be able to form this Dataframe:[2020-7-26 | 29.3 | 20.3] [2020-8-02 | 28.2 | 21.2] [2020-8-09 | 26.7 | 23.7] [2020-8-16 | 24.1 | 0.0] [2020-8-30 | 23.2 | 22.2] Thanks in advance! | If I'm understanding you correctly, you simply want to join the two together on their datetime index. Let df be your dataframe with more indices and ser be your series with missing indices.if df is: valdate 2019-08-01 12019-08-02 22019-08-03 3and ser is:date2019-08-01 42019-08-03 5It should be simply:df.join(ser,how='left').fillna(0)which yields: val val2date 2019-08-01 1 4.02019-08-02 2 0.02019-08-03 3 5.0as the left join would fill any missing on the right with nans, which fillna() would impute with 0.Make sure your series has a name however otherwise the join doesn't know how to name your new column. You can do so by settingser.name = 'column_name' before you call join, which in my case here is 'val2'.Also if you don't understand why I'm calling how='left' I would recommend you take some time to read into what left,right,outer,inner joins are as it is quite essential to not just preprocessing in python but sql as well. Good luck! |
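An equivalent sketch using reindex, which makes the "fill the missing dates with zero" step explicit before attaching the column (column names are made up, since the original frames only showed values):

import pandas as pd

idx = pd.to_datetime(['2020-07-26', '2020-08-02', '2020-08-09', '2020-08-16', '2020-08-30'])
df = pd.DataFrame({'val': [29.3, 28.2, 26.7, 24.1, 23.2]}, index=idx)
ser = pd.Series([20.3, 21.2, 23.7, 22.2], index=idx.delete(3), name='val2')  # 08-16 missing

# Align the series to the dataframe's index, filling the gap with 0, then attach it
df['val2'] = ser.reindex(df.index, fill_value=0)
print(df)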
Locating columns values in pandas dataframe with conditions We have a dataframe (df_source):Unnamed: 0 DATETIME DEVICE_ID COD_1 DAT_1 COD_2 DAT_2 COD_3 DAT_3 COD_4 DAT_4 COD_5 DAT_5 COD_6 DAT_6 COD_7 DAT_70 0 200520160941 002222111188 35 200408100500.0 12 200408100400 16 200408100300 11 200408100200 19 200408100100 35 200408100000 43 1 19 200507173541 000049000110 00 190904192701.0 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 2 20 200507173547 000049000110 00 190908185501.0 08 190908185501 NaN NaN NaN NaN NaN NaN NaN NaN NaN 3 21 200507173547 000049000110 00 190908205601.0 08 190908205601 NaN NaN NaN NaN NaN NaN NaN NaN NaN 4 22 200507173547 000049000110 00 190909005800.0 08 190909005800 NaN NaN NaN NaN NaN NaN NaN NaN NaN ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... 159 775 200529000843 000049768051 40 200529000601.0 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 160 776 200529000843 000049015792 00 200529000701.0 33 200529000701 NaN NaN NaN NaN NaN NaN NaN NaN NaN 161 779 200529000843 000049180500 00 200529000601.0 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 162 784 200529000843 000049089310 00 200529000201.0 03 200529000201 61 200529000201 NaN NaN NaN NaN NaN NaN NaN 163 786 200529000843 000049768051 40 200529000401.0 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN We calculated values_cont, a dict, for a subset:v_subset = ['COD_1', 'COD_2', 'COD_3', 'COD_4', 'COD_5', 'COD_6', 'COD_7']values_cont = pd.value_counts(df_source[v_subset].values.ravel())We obtained as result (values, counter):00 13408 3742 1240 1233 311 303 235 243 244 161 104 112 160 105 119 134 116 1Now, the question is:How to locate values in columns corresponding to counter, for instance:How to locate: df['DEVICE_ID'] # corresponding with values ('00') and counter ('134') df['DEVICE_ID'] # corresponding with values ('08') and counter ('37') ... df['DEVICE_ID'] # corresponding with values ('16') and counter ('1') | I believe you need DataFrame.melt with aggregate join for ID and GroupBy.size for counts.This implementation will result in a dataframe with a column (value) for the CODES, all the associated DEVICE_IDs, and the count of ids associated with each code.This is an alternative to values_cont in the question.v_subset = ['COD_1', 'COD_2', 'COD_3', 'COD_4', 'COD_5', 'COD_6', 'COD_7']df = (df_source.melt(id_vars='DEVICE_ID', value_vars=v_subset) .dropna(subset=['value']) .groupby('value') .agg(DEVICE_ID = ('DEVICE_ID', ','.join), count= ('value','size')) .reset_index())print (df) value DEVICE_ID count0 00 000049000110,000049000110,000049000110,0000490... 71 03 000049089310 12 08 000049000110,000049000110,000049000110 33 11 002222111188 14 12 002222111188 15 16 002222111188 16 19 002222111188 17 33 000049015792 18 35 002222111188,002222111188 29 40 000049768051,000049768051 210 43 002222111188 111 61 000049089310 1# print DEVICE_ID for CODES == '03'print(df.DEVICE_ID[df.value == '03'])[out]:1 000049089310Name: DEVICE_ID, dtype: objectGiven the question as related to df_source, to select specific parts of the dataframe, use Pandas: Boolean Indexing# to return all rows where COD_1 is '00'df_source[df_source.COD_1 == '00']# to return only the DEVICE_ID column where COD_1 is '00'df_source['DEVICE_ID'][df_source.COD_1 == '00'] |
How to count the values of multiple '0' and '1' columns and group by another binary column ('Male' and Female')? I'd like to group the binary information by 'Gender' and count the values of the other/ following fields 'Married', 'Citizen' and 'License'The below code was my attempt, but it was unsucessful.dmo_df.groupby(['Gender'], as_index = True)['Married', 'Citizen','License'].apply(pd.Series.value_counts)The resulting data frame/ output should look as such:Sorry for the poor quality photos. | I think you're trying to get sum and not value_counts:>>> df.groupby('Gender')[["Married","Citizen","License"]].sum() Married Citizen LicenseGender Female 3 3 0Male 5 7 4If you want value_count, try:>>> df.groupby('Gender').agg({i:"value_counts" for i in ["Married", "Citizen", "License"]}).fillna(0) Married Citizen LicenseGender Female 0 0.0 0.0 3.0 1 3.0 3.0 0.0Male 0 2.0 0.0 3.0 1 5.0 7.0 4.0 |
Is there a function to split rows in the dataframe if one of the column contains more than one keyword? My dataset contains the column "High-Level-Keyword(s)" and it contains more than one keywords separated by '\n'. I want to group the data on the basis of these Keywords.I tried using function unique() but it treats 'Multilangant Systems', 'Multilangant Systems\nMachine Learning' and 'Machine Learning' differently. I want the output to be like:Multilangant - 2Machine Learning -2 but what I'm getting isMultilangant - 1 Machine Learning - 1Multilangant\nMachine Learning - 1Can you suggest some way to do the same? | You should .split on the separator, then count.from collections import Counterfrom itertools import chainCounter(chain.from_iterable(df["High-Level-Keyword(s)"].str.split('\n')))#Counter({'Machine Learning': 2, 'Multilangant': 2})Or make it a Series:import pandas as pdpd.Series(Counter(chain.from_iterable(df["High-Level-Keyword(s)"].str.split('\n'))))#Multilangant 2#Machine Learning 2#dtype: int64 |
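With pandas 0.25 or newer the same count can be done without leaving pandas, by splitting on the separator and exploding to one keyword per row (sketched on a small frame matching the description in the question):

import pandas as pd

df = pd.DataFrame({"High-Level-Keyword(s)": [
    "Multilangant Systems",
    "Multilangant Systems\nMachine Learning",
    "Machine Learning",
]})

# One row per keyword, then a plain value_counts
counts = df["High-Level-Keyword(s)"].str.split('\n').explode().value_counts()
print(counts)
# Multilangant Systems    2
# Machine Learning        2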
Which way is right in tf-idf? Fit all then transform train set and test set or fit train set then transform test set 1.Fit train set then transform test setscikit-learn provide this examplefrom sklearn.feature_extraction.text import TfidfVectorizervectorizer = TfidfVectorizer(sublinear_tf=True, max_df=0.5, stop_words='english')X_train = vectorizer.fit_transform(data_train.data)X_test = vectorizer.transform(data_test.data)2.Fit all then transform train set and test set which I've seen in many casesimport numpy as npfrom sklearn.feature_extraction.text import TfidfVectorizervectorizer = TfidfVectorizer(sublinear_tf=True, max_df=0.5, stop_words='english')X_all = np.append(train_x, test_x, axis=0)vectorizer.fit(X_all)X_train = vectorizer.transform(train_x)X_test = vectorizer.transform(test_x)So, I'm confused which way is right and why | It really depends on your use case.In the first situation, your test set TF-IDF values are only based on the frequencies in the train set. This allows you to control the "reference" corpus and decorrelates your results to data in the testing set which makes sense when data in your test set is sampled from a data distribution that is very different from what you could expect in a normal situation. Note that this only works because scikit implements TF-IDF in a way that is robust to previously unseen words.In the second situation, when you use the test set for training, your frequencies are also going to be based on what is in your test set. This allows for more representative frequency values for data in your test set domain which can lead to performance improvements on your downstream task, and also ensures no new unseen words appear at test time.tl;dr both work |
How to group and pivot(?) dataframe I have a dataframe looking like this:ID Species Count1 Pine 10001 Spruce 10002 Pine 20003 Pine 10003 Spruce 5003 Birch 500What i want is this: Pine Spruce BirchID Count Count Count1 1000 1000 2 2000 3 1000 500 500So im trying:a = df.groupby(['ID']).cumcount().astype(str)newdf = df.set_index(['ID', a]).unstack(fill_value=0).sort_index(level=1, axis=1)Which gives me:ID Count Species Count Species Count Species1 1000 Pine 1000 Spruce 2 2000 Pine 3 1000 Pine 500 Spruce 500 SpruceHow can i fix this? | Simple pivot df.pivot('ID','Species','Count')Out[493]: Species Birch Pine SpruceID 1 NaN 1000.0 1000.02 NaN 2000.0 NaN3 500.0 1000.0 500.0 |
Does `tf.data.Dataset.repeat()` buffer the entire dataset in memory? Looking at this code example from the TF documentation:filenames = ["/var/data/file1.tfrecord", "/var/data/file2.tfrecord"]dataset = tf.data.TFRecordDataset(filenames)dataset = dataset.map(...)dataset = dataset.shuffle(buffer_size=10000)dataset = dataset.batch(32)dataset = dataset.repeat(num_epochs)iterator = dataset.make_one_shot_iterator()Does the dataset.repeat(num_epochs) require that the entire dataset be loaded into memory? Or is it re-initializing the dataset(s) that came before it when it receives an end-of-dataset exception?The documentation is ambiguous about this point. | Based on this simple test it appears that repeat does not buffer the dataset, it must be re-initializing the upstream datasets.n = tf.data.Dataset.range(5).shuffle(buffer_size=5).repeat(2).make_one_shot_iterator().get_next()[sess.run(n) for _ in range(10)]Out[83]: [2, 0, 3, 1, 4, 3, 1, 0, 2, 4]Logic suggests that if repeat were buffering its input, the same random shuffle pattern would have have been repeated in this simple experiment. |
Creating a df with index = years and columns = mean length of an event occurring in a year, from previous df columns I have the df_winter as viewed below, and would like to create a new df that displays the year, and the average length of storms occuring in a 1 year, so I can visualize the change in length of storms over time.I thought I could use groupby like this:df_winter_length= df_winter.groupby(['Start_year', 'Disaster_Length']).mean()However, I receive the error : DataError: No Numeric types to aggregateso I printed:print(df_winter.Disaster_Length.dtype)return: int64Which as an int64, I thought could be calculated. What am I missing? Thanks!Entire code:import numpy as npimport matplotlib.pyplot as plt import pandas as pd import seaborn as sns df_time = pd.read_pickle('df_time.pkl')df_winter = df_time[(df_time['Disaster_Type'] == 'Winter') | (df_time['Disaster_Type'] == 'Snow') | (df_time['Disaster_Type'] == 'Ice')]df_winter.drop(columns=['Start_Date_A', 'End_Date_A'], axis=1, inplace=True)df_winter.drop_duplicates(keep='first')df_winter = df_winter.reset_index(drop=True, inplace=False)#Change in Average Length of Winter Weather Events from 1965 - 2017df_winter_length= df_winter.groupby(['Start_year', 'Disaster_Length']).mean()df_winter : County Disaster_Type Disaster_Length Start_year2400 Perry County Snow 7 19962401 Pike County Snow 7 19962402 Powell County Snow 7 19962403 Pulaski County Snow 7 19962404 Robertson County Snow 7 19962405 Rockcastle County Snow 7 19962406 Rowan County Snow 7 19962407 Russell County Snow 7 19962408 Scott County Snow 7 19962409 Shelby County Snow 7 19962410 Simpson County Snow 7 19962411 Spencer County Snow 7 19962412 Taylor County Snow 7 19962413 Todd County Snow 7 19962414 Trigg County Snow 7 19962415 Trimble County Snow 7 19962416 Union County Snow 7 19962417 Warren County Snow 7 19962418 Washington County Snow 7 19962419 Wayne County Snow 7 19962420 Webster County Snow 7 19962421 Whitley County Snow 7 19962422 Wolfe County Snow 7 19962423 Woodford County Snow 7 19962424 Barnstable County Snow 6 19962425 Berkshire County Snow 6 19962426 Bristol County(in PMSA 1120,1200,2480,5400,6060 Snow 6 19962427 Dukes County Snow 6 19962428 Essex County(in PMSA 1120,4160,7090 Snow 6 19962429 Franklin County Snow 6 19962430 Hampden County Snow 6 19962431 Hampshire County Snow 6 19962432 Middlesex County(in PMSA 1120,2600,4560 Snow 6 19962433 Nantucket County Snow 6 19962434 Norfolk County(in PMSA 1120,1200,6060 Snow 6 19962435 Plymouth County(in PMSA 1120,1200,5400 Snow 6 19962436 Suffolk County Snow 6 19962437 Worcester County in PMSA 1120,2600,9240 Snow 6 19962438 Bristol County Snow 6 19962439 Kent County Snow 6 19962440 Newport County(in PMSA 2480,6480 Snow 6 19962441 Providence County(in PMSA 6060,6480 Snow 6 19962442 Washington County(in PMSA 5520,6480 Snow 6 19962443 Fairfield County(in PMSA 1160,1930,5760,8040 Snow 6 19962444 Hartford County(in PMSA 1170,3280,5440 Snow 6 19962445 Litchfield County(in PMSA 1170,1930,3280,8880 Snow 6 19962446 Middlesex County(in PMSA 3280,5020,5480 Snow 6 19962447 New Haven County(in PMSA 1160,5480,8880 Snow 6 19962448 New London County(in PMSA 3280,5520 Snow 6 19962449 Tolland County Snow 6 19962450 Windham County Snow 6 19962451 Alexander County Snow 7 19962452 Burke County Snow 7 19962453 Caldwell County Snow 7 19962454 Caswell County Snow 7 19962455 Catawba County Snow 7 19962456 Cherokee County Snow 7 19962457 Cleveland County Snow 7 19962458 Davidson County Snow 7 19962459 Davie County Snow 7 19962460 Forsyth County 
Snow 7 19962461 Gaston County Snow 7 19962462 Gates County Snow 7 19962463 Guilford County Snow 7 19962464 Halifax County Snow 7 19962465 Haywood County Snow 7 19962466 Henderson County Snow 7 19962467 Hertford County Snow 7 19962468 Iredell County Snow 7 19962469 Lincoln County Snow 7 19962470 McDowell County Snow 7 19962471 Madison County Snow 7 19962472 Montgomery County Snow 7 19962473 Northampton County Snow 7 19962474 Polk County Snow 7 19962475 Randolph County Snow 7 19962476 Rockingham County Snow 7 19962477 Rutherford County Snow 7 19962478 Stokes County Snow 7 19962479 Surry County Snow 7 19962480 Warren County Snow 7 19962481 Watauga County Snow 7 19962482 Wilkes County Snow 7 19962483 Yadkin County Snow 7 19962484 Yancey County Snow 7 19962485 Klickitat County Ice 15 19962486 Pend Oreille County Ice 15 19962487 Spokane County Ice 15 19962488 Cass County Snow 2 19972489 Clarke County Snow 2 19972490 Iowa County Snow 2 19972491 Jasper County Snow 2 19972492 Madison County Snow 2 19972493 Mahaska County Snow 2 19972494 Marion County Snow 2 19972495 Mills County Snow 2 19972496 Polk County Snow 2 19972497 Pottawattamie County Snow 2 19972498 Poweshiek County Snow 2 19972499 Union County Snow 2 19972500 Warren County Snow 2 19972501 Clinton County Snow 12 19982502 Essex County Snow 12 19982503 Franklin County Snow 12 19982504 Genesee County Snow 12 19982505 Jefferson County Snow 12 19982506 Lewis County Snow 12 19982507 Monroe County Snow 12 19982508 Niagara County Snow 12 19982509 St. Lawrence County Snow 12 19982510 Saratoga County Snow 12 19982511 Adams County Snow 14 19992512 Brown County Snow 14 19992513 Bureau County Snow 14 19992514 Calhoun County Snow 14 19992515 Cass County Snow 14 19992516 Champaign County Snow 14 19992517 Christian County Snow 14 19992518 Cook County Snow 14 19992519 De Witt County Snow 14 19992520 Douglas County Snow 14 19992521 DuPage County Snow 14 19992522 Ford County Snow 14 19992523 Fulton County Snow 14 19992524 Greene County Snow 14 19992525 Grundy County Snow 14 19992526 Hancock County Snow 14 19992527 Henderson County Snow 14 19992528 Henry County Snow 14 19992529 Iroquois County Snow 14 19992530 Kane County Snow 14 19992531 Kankakee County Snow 14 19992532 Kendall County Snow 14 19992533 Knox County Snow 14 19992534 Lake County Snow 14 19992535 La Salle County Snow 14 19992536 Livingston County Snow 14 19992537 Logan County Snow 14 19992538 McDonough County Snow 14 19992539 McHenry County Snow 14 19992540 McLean County Snow 14 19992541 Macon County Snow 14 19992542 Marshall County Snow 14 19992543 Mason County Snow 14 19992544 Menard County Snow 14 19992545 Mercer County Snow 14 19992546 Morgan County Snow 14 19992547 Moultrie County Snow 14 19992548 Peoria County Snow 14 19992549 Piatt County Snow 14 19992550 Pike County Snow 14 19992551 Putnam County Snow 14 19992552 Sangamon County Snow 14 19992553 Schuyler County Snow 14 19992554 Scott County Snow 14 19992555 Shelby County Snow 14 19992556 Stark County Snow 14 19992557 Tazewell County Snow 14 19992558 Vermilion County Snow 14 19992559 Warren County Snow 14 19992560 Will County Snow 14 19992561 Winnebago County Snow 14 19992562 Woodford County Snow 14 19992563 Cattaraugus County Snow 14 19992564 Chautauqua County Snow 14 19992565 Erie County Snow 14 19992566 Genesee County Snow 14 19992567 Jefferson County Snow 14 19992568 Lewis County Snow 14 19992569 Niagara County Snow 14 19992570 Orleans County Snow 14 19992571 St. 
Lawrence County Snow 14 19992572 Wyoming County Snow 14 19992573 Adams County Snow 14 19992574 Allen County Snow 14 19992575 Benton County Snow 14 19992576 Blackford County Snow 14 19992577 Boone County Snow 14 19992578 Carroll County Snow 14 19992579 Cass County Snow 14 19992580 Clay County Snow 14 19992581 Clinton County Snow 14 19992582 DeKalb County Snow 14 19992583 Delaware County Snow 14 19992584 Elkhart County Snow 14 19992585 Fayette County Snow 14 19992586 Fountain County Snow 14 19992587 Fulton County Snow 14 19992588 Grant County Snow 14 19992589 Hamilton County Snow 14 19992590 Hancock County Snow 14 19992591 Hendricks County Snow 14 19992592 Henry County Snow 14 19992593 Howard County Snow 14 19992594 Huntington County Snow 14 19992595 Jasper County Snow 14 19992596 Jay County Snow 14 19992597 Johnson County Snow 14 19992598 Kosciusko County Snow 14 19992599 LaGrange County Snow 14 1999 | There may be an easier route but I ended up dropping the unused columns from df_Winter -- ei county and disaster type, and then using .groupby and .mean like so:df_winter_length = df_winter.drop(columns=['County','Disaster_Type'])df_winter_length = df_winter_length.groupby(['Start_year']).mean() |
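The column drop is not strictly needed: selecting the one numeric column inside the groupby gives the same per-year average and sidesteps the "No numeric types to aggregate" error. A sketch on a hypothetical stand-in for df_winter:

import pandas as pd

# Hypothetical stand-in for df_winter (County/Disaster_Type omitted for brevity)
df_winter = pd.DataFrame({'Start_year': [1996, 1996, 1997, 1997],
                          'Disaster_Length': [7, 15, 2, 2]})

avg_length = df_winter.groupby('Start_year')['Disaster_Length'].mean()
print(avg_length)
# 1996    11.0
# 1997     2.0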
"Invalid argument: indices[0,0,0,0] = 30 is not in [0, 30)" Error:InvalidArgumentError: indices[0,0,0,0] = 30 is not in [0, 30) [[{{node GatherV2}}]] [Op:IteratorGetNext]History:I have a custom data loader for a tf.keras based U-Net for semantic segmentation, based on this example. It is written as follows:def parse_image(img_path: str) -> dict: # read image image = tf.io.read_file(img_path) #image = tfio.experimental.image.decode_tiff(image) if xf == "png": image = tf.image.decode_png(image, channels = 3) else: image = tf.image.decode_jpeg(image, channels = 3) image = tf.image.convert_image_dtype(image, tf.uint8) #image = image[:, :, :-1] # read mask mask_path = tf.strings.regex_replace(img_path, "X", "y") mask_path = tf.strings.regex_replace(mask_path, "X." + xf, "y." + yf) mask = tf.io.read_file(mask_path) #mask = tfio.experimental.image.decode_tiff(mask) mask = tf.image.decode_png(mask, channels = 1) #mask = mask[:, :, :-1] mask = tf.where(mask == 255, np.dtype("uint8").type(NoDataValue), mask) return {"image": image, "segmentation_mask": mask}train_dataset = tf.data.Dataset.list_files( dir_tls(myear = year, dset = "X") + "/*." + xf, seed = zeed)train_dataset = train_dataset.map(parse_image)val_dataset = tf.data.Dataset.list_files( dir_tls(myear = year, dset = "X_val") + "/*." + xf, seed = zeed)val_dataset = val_dataset.map(parse_image)## data transformations--------------------------------------------------------@tf.functiondef normalise(input_image: tf.Tensor, input_mask: tf.Tensor) -> tuple: input_image = tf.cast(input_image, tf.float32) / 255.0 return input_image, [email protected] load_image_train(datapoint: dict) -> tuple: input_image = tf.image.resize(datapoint["image"], (imgr, imgc)) input_mask = tf.image.resize(datapoint["segmentation_mask"], (imgr, imgc)) if tf.random.uniform(()) > 0.5: input_image = tf.image.flip_left_right(input_image) input_mask = tf.image.flip_left_right(input_mask) input_image, input_mask = normalise(input_image, input_mask) return input_image, [email protected] load_image_test(datapoint: dict) -> tuple: input_image = tf.image.resize(datapoint["image"], (imgr, imgc)) input_mask = tf.image.resize(datapoint["segmentation_mask"], (imgr, imgc)) input_image, input_mask = normalise(input_image, input_mask) return input_image, input_mask## create datasets-------------------------------------------------------------buff_size = 1000dataset = {"train": train_dataset, "val": val_dataset}# -- Train Dataset --#dataset["train"] = dataset["train"]\ .map(load_image_train, num_parallel_calls = tf.data.experimental.AUTOTUNE)dataset["train"] = dataset["train"].shuffle(buffer_size = buff_size, seed = zeed)dataset["train"] = dataset["train"].repeat()dataset["train"] = dataset["train"].batch(bs)dataset["train"] = dataset["train"].prefetch(buffer_size = AUTOTUNE)#-- Validation Dataset --#dataset["val"] = dataset["val"].map(load_image_test)dataset["val"] = dataset["val"].repeat()dataset["val"] = dataset["val"].batch(bs)dataset["val"] = dataset["val"].prefetch(buffer_size = AUTOTUNE)print(dataset["train"])print(dataset["val"])Now I wanted to use a weighted version of tf.keras.losses.SparseCategoricalCrossentropy for my model and I found this tutorial, which is rather similar to the example above.However, they also offered a weighted version of the loss, using:def add_sample_weights(image, label): # The weights for each class, with the constraint that: # sum(class_weights) == 1.0 class_weights = tf.constant([2.0, 2.0, 1.0]) class_weights = 
class_weights/tf.reduce_sum(class_weights) # Create an image of `sample_weights` by using the label at each pixel as an # index into the `class weights` . sample_weights = tf.gather(class_weights, indices=tf.cast(label, tf.int32)) return image, label, sample_weightsandweighted_model.fit( train_dataset.map(add_sample_weights), epochs=1, steps_per_epoch=10)I combined those approaches since the latter tutorial uses previously loaded data, while I want to draw the images from disc (not enough RAM to load all at once).Resulting in the code from the first example (long code block above) followed bydef add_sample_weights(image, segmentation_mask): class_weights = tf.constant(inv_weights, dtype = tf.float32) class_weights = class_weights/tf.reduce_sum(class_weights) sample_weights = tf.gather(class_weights, indices = tf.cast(segmentation_mask, tf.int32)) return image, segmentation_mask, sample_weights(inv_weights are my weights, an array of 30 float64 values) and model.fit(dataset["train"].map(add_sample_weights), epochs = 45, steps_per_epoch = np.ceil(N_img/bs), validation_data = dataset["val"], validation_steps = np.ceil(N_val/bs), callbacks = cllbs)When I rundataset["train"].map(add_sample_weights).element_specas in the second example, I get an output that looks reasonable to me (similar to the one in the example):Out[58]: (TensorSpec(shape=(None, 512, 512, 3), dtype=tf.float32, name=None), TensorSpec(shape=(None, 512, 512, 1), dtype=tf.float32, name=None), TensorSpec(shape=(None, 512, 512, 1), dtype=tf.float32, name=None))However, when I try to fit the model or run something likea, b, c = dataset["train"].map(add_sample_weights).take(1)I will receive the error mentioned above.So far, I have found quite some questions regarding this error (e.g., a, b, c, d), however, they all talk of "embedding layers" and things I am not aware of using.Where does this error come from and how can I solve it? | Picture tf.gather as a fancy way to do indexing. The error you get is akin to the following example in python:>>> my_list = [1,2,3]>>> my_list[3] IndexError: list index out of rangeIf you want to use tf.gather, then the range of value of your indices should not be bigger than the dimension size of the Tensor you are willing to index.In your case, in the call tf.gather(class_weights,indices = tf.cast(segmentation_mask, tf.int32)), with class_weights being a Tensor of dimension (30,), the range of values of segmentation_mask should be between 0 and 29. As far as I can tell from your data pipeline, segmentation_mask has a range of value between 0 and 255. The fix will be problem dependent. |
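A tiny reproduction of that indexing rule in eager TF2: with 30 class weights the valid label values are 0..29, so a stray 30 (for example a 255 no-data pixel that escaped the remapping) triggers exactly this error. Note that on GPU an out-of-range tf.gather may silently return zeros instead of raising, which can hide the bug:

import tensorflow as tf

class_weights = tf.constant([1.0] * 30)      # valid indices: 0..29
ok = tf.gather(class_weights, [0, 5, 29])    # fine
try:
    tf.gather(class_weights, [30])           # out of range, raises on CPU
except tf.errors.InvalidArgumentError as e:
    print("label out of range:", type(e).__name__)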
pandas multi index sort with several conditions I have a dataframe like below, MATERIALNAME CURINGMACHINE HEADERCOUNTER0 1015 PPU03R 15291 3005 PPY12L 3052 3005 PPY12R 3593 3005 PPY12R 4044 K843 PPZB06L 4355 K928 PPZ03L 1850I created a pivot table from this df,pivot = pd.pivot_table(df, index = ['MATERIALNAME', 'CURINGMACHINE'], values = ['HEADERCOUNTER'], aggfunc = 'count', fill_value = 0)pivot (output) HEADERCOUNTERMATERIALNAME CURINGMACHINE 1015 PPU03R 13005 PPY12L 1 PPY12R 2K843 PPZB06L 1K928 PPZ03L 1I add subtotals of each material name with the help of this post 'pandas.concat' Pivot table subtotals in Pandaspivot = pd.concat([ d.append(d.sum().rename((k, 'Total'))) for k, d in pivot.groupby(level=0)]).append(pivot.sum().rename(('Grand', 'Total')))My final df is, HEADERCOUNTERMATERIALNAME CURINGMACHINE 1015 PPU03R 1 Total 13005 PPY12L 1 PPY12R 2 Total 3K843 PPZB06L 1 Total 1K928 PPZ03L 1 Total 1Grand Total 6I want to sort according to 'HEADERCOUNTER' column. I' m using this code,sorted_df = pivot.sort_values(by =['HEADERCOUNTER'], ascending = False)When I sort it, 'MATERIALNAME' column is effecting like below, 'MATERIALNAME' is broken as you can see from 3005 code. HEADERCOUNTERMATERIALNAME CURINGMACHINE Grand Total 63005 Total 3 PPY12R 21015 PPU03R 1 Total 13005 PPY12L 1K843 PPZB06L 1 Total 1K928 PPZ03L 1 Total 1When I sort it, I want to see in that order; HEADERCOUNTERMATERIALNAME CURINGMACHINE Grand Total 63005 Total 3 PPY12R 2 PPY12L 11015 PPU03R 1 Total 1K843 PPZB06L 1 Total 1K928 PPZ03L 1 Total 1If you have any suggestions to change process, I can try it also.Edit:I tried BENY's way, but it doesn' t work when data increases.You can see the not ok result below; | Fix it by adding argsortpivot = pivot.sort_values('HEADERCOUNTER',ascending=False)out = pivot.iloc[(-pivot.groupby(level=0)['HEADERCOUNTER'].transform('max')).argsort()]Out[136]: HEADERCOUNTERMATERIALNAME CURINGMACHINE Grand Total 63005 Total 3 PPY12R 2 PPY12L 11015 PPU03R 1 Total 1K843 PPZB06L 1 Total 1K928 PPZ03L 1 Total 1 |
Saving numpy image datasets without increase in size, and easy to save and load data I have saved my train/test/val arrays into a pickle file, but while the images are 1.5GB, the pickle file is 16GB, i.e. the size increased. Is there any other way to save those numpy image arrays without the increase in size? | Use the numpy.save function (documentation) or the numpy.savez_compressed function (documentation). Read the documentation before asking a question. Sample code:import numpy as np image = np.random.randint(0, 200, (199818,50,50,3), dtype=np.uint8)image2 = np.random.randint(0, 200, (1998,50,50,3), dtype=np.uint8)np.savez_compressed("test.npz", image, img=image2)img_dkt = np.load("test.npz")print("First_array:", img_dkt["arr_0"].shape, "equality", np.all(image == img_dkt["arr_0"]))print("second_array:", img_dkt["img"].shape, "equality", np.all(image2 == img_dkt["img"]))
TypeError: 'Tensor' object cannot be interpreted as an integer I want to make a dynamic-shape weight matrix.The matrix has 3 dimensions, [x, y, z].So I define some function.x = tf.reduce_max(some_tensor_x_length)y = tf.reduce_max(some_tensor_y_length)z = tf.reduce_max(some_tensor_z_length)w = self._get_weight(x,y,z)def _get_weight(self, x, y, z): W = np.zeros(x, y, z) for x in range(W.shape[0]): for y in range(W.shape[1]): for z in range(W.shape[2]): W[x,y,z] = some_eq_output_number return WBut I got the below error.TypeError: 'Tensor' object cannot be interpreted as an integerI guess the error is caused by the length tensors not being integer type. | The calls:x = tf.reduce_max(some_tensor_x_length)y = tf.reduce_max(some_tensor_y_length)z = tf.reduce_max(some_tensor_z_length)return scalar tensors, not integers. As such, when you call:W = np.zeros(x, y, z)you're passing tensors as arguments, while numpy expects integers. If you're using TensorFlow v1, you can get the value of a tensor with a session by running:session.run(x)Also, you're using references to x, y and z twice in your code: one for the tensors, and the other during the for loops. I suggest changing the loops to:for i in range(W.shape[0]): for j in range(W.shape[1]): for k in range(W.shape[2]): W[i,j,k] = some_eq_output_number
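A small self-contained sketch of the session.run route the answer mentions (TF1-style graph execution; the length tensors here are stand-ins for the question's some_tensor_*_length):

    import numpy as np
    import tensorflow.compat.v1 as tf
    tf.disable_eager_execution()

    lengths_x = tf.constant([3, 5, 2])   # stand-ins for the real length tensors
    lengths_y = tf.constant([4, 1, 2])
    lengths_z = tf.constant([2, 2, 6])

    x = tf.reduce_max(lengths_x)
    y = tf.reduce_max(lengths_y)
    z = tf.reduce_max(lengths_z)

    with tf.Session() as sess:
        x_val, y_val, z_val = sess.run([x, y, z])   # plain integers now

    # note the shape is passed as one tuple; np.zeros(x, y, z) would treat y as the dtype
    W = np.zeros((x_val, y_val, z_val))
    print(W.shape)   # (5, 4, 6)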
Pandas groupby: combine distinct values into another column I need to group by a subset of columns and count the number of distinct combinations of their values. However, there are other columns that may or may not have distinct values, and I want to somehow retain this information in my output. Here is an example: gb1 gb2 text1 text2bebop skeletor blue fisherbebop skeletor blue wrightrocksteady beast_man orange haldanerocksteady beast_man orange haldanetokka kobra_khan green landetokka kobra_khan red arnoldI only want to group by gb1 and gb2. Here is what I need:gb1 gb2 count text1 text2bebop skeletor 2 blue fisher, wrightrocksteady beast_man 2 orange haldanetokka kobra_khan 2 green, red lande, arnoldI've got everything working except for handling the text1 and text2 columns.Thanks in advance. | You can check with s=df.assign(count=1).groupby(['gb1','gb2']).agg({'count':'sum','text1':lambda x : ','.join(set(x)),'text2':lambda x : ','.join(set(x))}).reset_index()s gb1 gb2 count text1 text20 bebop skeletor 2 blue wright,fisher1 rocksteady beast_man 2 orange haldane2 tokka kobra_khan 2 green,red lande,arnold |
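For what it's worth, a sketch of the same result with named aggregation (pandas >= 0.25), which avoids the helper count column; the join over unique values mirrors the answer above:

    import pandas as pd

    df = pd.DataFrame({'gb1': ['bebop', 'bebop', 'rocksteady', 'rocksteady', 'tokka', 'tokka'],
                       'gb2': ['skeletor', 'skeletor', 'beast_man', 'beast_man', 'kobra_khan', 'kobra_khan'],
                       'text1': ['blue', 'blue', 'orange', 'orange', 'green', 'red'],
                       'text2': ['fisher', 'wright', 'haldane', 'haldane', 'lande', 'arnold']})

    out = (df.groupby(['gb1', 'gb2'])
             .agg(count=('text1', 'size'),
                  text1=('text1', lambda s: ', '.join(s.unique())),
                  text2=('text2', lambda s: ', '.join(s.unique())))
             .reset_index())
    print(out)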
Empty results from concurrent psycopg2 postgres select queries I am attempting to retrieve my label and feature datasets from a postgres database using the getitem method from a custom pytorch dataset. When I attempt to sample with random indexes my queries return no resultsI have checked to see if my queries work directly on the psql cli. They do. I have checked my database connection pool for issues. Does not seem to be any. I have reverted back to sequential sampling and it is still fully functional so it is the random index values that seem to be an issue for query.The getitem method which performs the queries is place below. This shows both the sequential and attempt to shuffle queries. Both of these are clearly labeled via variable name. def __getitem__(self, idx): query = """SELECT ls.taxonomic_id, it.tensor FROM genomics.tensors2 AS it INNER JOIN genomics.labeled_sequences AS ls ON ls.accession_number = it.accession_number WHERE (%s) <= it.index AND CARDINALITY(tensor) = 89 LIMIT (%s) OFFSET (%s)""" shuffle_query = """BEGIN SELECT ls.taxonomic_id, it.tensor FROM genomics.tensors2 AS it INNER JOIN genomics.labeled_sequences AS ls ON ls.accession_number = it.accession_number WHERE it.index BETWEEN (%s) AND (%s) END""" batch_size = 500 upper_bound = idx + batch_size query_data = (idx, batch_size, batch_size) shuffle_query_data = (idx, upper_bound) result = None results = None conn = self.conn_pool.getconn() try: conn.set_session(readonly=True, autocommit=True) cursor = conn.cursor() cursor.execute(query, query_data) results = cursor.fetchall() self.conn_pool.putconn(conn) print(idx) print(results) except Error as conn_pool_error: print('Multithreaded __getitem__ query error') print(conn_pool_error) label_list = [] sequence_list = [] for (i,result) in enumerate(results): if result is not None: (label, sequence) = self.create_batch_stack_element(result) label_list.append(label) sequence_list.append(sequence) label_stack = torch.stack(label_list).to('cuda') sequence_stack = torch.stack(sequence_list).to('cuda') return (label_stack, sequence_stack) def create_batch_stack_element(self, result): if result is not None: label = np.array(result[0], dtype=np.int64) sequence = np.array(result[1], dtype=np.int64) label = torch.from_numpy(label) sequence = torch.from_numpy(sequence) return (label, sequence) else: return NoneThe error I receive comes from my attempt to stack my list of tensors after the for loop. This fails because the lists are empty. Since the lists are filled in the loop based off the results of the query. It points to the query being the issue. I would like some help with my source code to solve this issue and possibly an explanation as to why my concurrent queries with random indexes are failing. Thanks. Any help is appreciated.E: I believe I have found the source of the issue and it comes from the pytorch RandomSampler source code. I believe it is providing indexed out of the range of my database keys. This explains why I have no results from the queries. I will have to write my own sampler class to limit this value to the length of my dataset. What an oversight on my part. E2: The random sampling now works with a customized sampler class but prevents mutlithreaded querying. E3: I now have the entire problem solved. Using multiple processes to load data to the GPU with a custom random sampler. Will post applicable code when I get a chance and accept it as an answer to close out the thread. 
| This is a properly constructed getitem for pytorch from a postgres table with indexable keys. def __getitem__(self, idx: int) -> tuple: query = """SELECT ls.taxonomic_id, it.tensor FROM genomics.tensors2 AS it INNER JOIN genomics.labeled_sequences AS ls ON ls.accession_number = it.accession_number WHERE (%s) = it.index""" query_data = (idx,) result = None conn = self.conn_pool.getconn() try: conn.set_session(readonly=True, autocommit=True) cursor = conn.cursor() cursor.execute(query, query_data) result = cursor.fetchone() self.conn_pool.putconn(conn) except Error as conn_pool_error: print('Multithreaded __getitem__ query error') print(conn_pool_error) return resultdef collate(self, results: list) -> tuple: label_list = [] sequence_list = [] for result in results: if result is not None: print(result) result = self.create_batch_stack_element(result) if result is not None: label_list.append(result[0]) sequence_list.append(result[1]) label_stack = torch.stack(label_list) sequence_stack = torch.stack(sequence_list) return (label_stack, sequence_stack)def create_batch_stack_element(self, result: tuple) -> tuple: if result is not None: label = np.array(result[0], dtype=np.int64) sequence = np.array(result[1], dtype=np.int64) label = torch.from_numpy(label) sequence = torch.from_numpy(sequence) return (label, sequence) return NoneThen I called my training function with:for rank in range(num_processes): p = mp.Process(target=train, args=(dataloader,)) p.start() processes.append(p)for p in processes: p.join() |
Interpolate values in one column of a dataframe (python) I have a dataframe with three columns (timestamp, temperature and waterlevel).What I want to do is to replace all NaN values in the waterlevel column with interpolated values. For example: The waterlevel value is always decreasing till it is 0. Therefore, the waterlevel cannot be negative. Also, if the waterlevel is staying the same, the interpolated values should also be the same. Ideally, the stepsize between the interpolated values (within two available waterlevel values) should be the same.What I have tried so far was:df['waterlevel'].interpolate(method ='linear', limit_direction ='backward') # backwards because the waterlevel value is always decreasing.This does not work. After executing this line, every NaN value has turned to a 0 with the parameter 'forward' and stays NaN with the parameter 'backward'.and df = df['waterlevel'].assign(InterpolateLinear=df.target.interpolate(method='linear'))Any suggestions on how to solve this? | I assume NaN is np.nan Object import pandas as pdimport numpy as npdf = pd.DataFrame({"waterlevel": ['A',np.nan,np.nan,'D'],"interpolated values":['Ai','Bi','Ci','D']})print(df)df.loc[df['waterlevel'].isnull(),'waterlevel'] = df['interpolated values']print(df)O/P: waterlevel interpolated values0 A Ai1 NaN Bi2 NaN Ci3 D D waterlevel interpolated values0 A Ai1 Bi Bi2 Ci Ci3 D D |
How to merge DataFrame in for loop? Am trying to merge the multiindexed dataframe in a for loop into a single dataframe on index.i have a reproducible code at https://gist.github.com/RJUNS/f4ad32d9b6da8cf4bedde0046a26f368#file-prices-pyI wanted to post the code here, but i got an error 'your post has lots of code' therefore i posted it on gist.But it produces this: CLOSE HIGH LOW OPEN VOLUME2017-09-08 09:30:00 VEDL 330.2 330.40 328.3 329.10 18732612017-09-08 09:45:00 VEDL 333.1 333.15 329.5 330.15 16439702017-09-08 10:00:00 VEDL 332.4 333.20 331.4 333.10 767922 CLOSE HIGH LOW OPEN VOLUME2017-09-08 09:30:00 INFY 892.65 898.6 892.6 898.05 1630202017-09-08 09:45:00 INFY 892.45 893.6 891.4 892.80 1521792017-09-08 10:00:00 INFY 891.55 892.5 891.1 892.40 104931Am expecting the following output: CLOSE HIGH LOW OPEN VOLUME2017-09-08 09:30:00 VEDL 330.2 330.40 328.3 329.10 1873261 INFY 892.65 898.6 892.6 898.05 1630202017-09-08 09:45:00 VEDL 333.1 333.15 329.5 330.15 1643970 INFY 892.45 893.6 891.4 892.80 1521792017-09-08 10:00:00 VEDL 332.4 333.20 331.4 333.10 767922 INFY 891.55 892.5 891.1 892.40 104931I tried using .join method, but i couldn't make it work.does anyone have any solution please? | I think you need append df to list of DataFrames and then use concat with sort_index:dfs =[]for security in stocks: dfs.append(get_google_data(security,900, 1))df = pd.concat(dfs).sort_index()print(df) CLOSE HIGH LOW OPEN VOLUME2017-09-08 06:00:00 INFY 892.65 898.60 892.60 898.05 163020 VEDL 330.20 330.40 328.30 329.10 18732612017-09-08 06:15:00 INFY 892.45 893.60 891.40 892.80 152179 VEDL 333.10 333.15 329.50 330.15 16439702017-09-08 06:30:00 INFY 891.55 892.50 891.10 892.40 104931 VEDL 332.40 333.20 331.40 333.10 7679222017-09-08 06:45:00 INFY 891.10 891.55 889.55 891.55 282589 VEDL 332.10 332.80 331.30 332.40 3844172017-09-08 07:00:00 INFY 890.90 891.60 890.25 891.10 119252 VEDL 332.15 332.70 331.65 332.05 345358List comprehension version for create list of DataFrames:df = pd.concat([get_google_data(x,900, 1) for x in stocks]).sort_index()print(df) CLOSE HIGH LOW OPEN VOLUME2017-09-08 06:00:00 INFY 892.65 898.60 892.60 898.05 163020 VEDL 330.20 330.40 328.30 329.10 18732612017-09-08 06:15:00 INFY 892.45 893.60 891.40 892.80 152179 VEDL 333.10 333.15 329.50 330.15 16439702017-09-08 06:30:00 INFY 891.55 892.50 891.10 892.40 104931 VEDL 332.40 333.20 331.40 333.10 7679222017-09-08 06:45:00 INFY 891.10 891.55 889.55 891.55 282589 VEDL 332.10 332.80 331.30 332.40 3844172017-09-08 07:00:00 INFY 890.90 891.60 890.25 891.10 119252 VEDL 332.15 332.70 331.65 332.05 345358 |
Add a number to a element in a tensor rank 1 if a condition is met in tensorflow I have a tensor rank 1, which may look like this: [-1,2,3,-2,5] now I want to add a constant to the absolut value of an element, if the element is negative. If the element is positive, nothing shall happen.I know how to do this with a scalar like:res = tf.cond(tensor < 0,\lambda: tf.add(tf.constant(m.pi),\tf.abs(tensor)),lambda: tf.constant(tensor)Furthermore, I know how to iterate over a tensor with tf.scan , like here in the fibonacci example:elems = np.array([1, 0, 0, 0, 0, 0])initializer = (np.array(0), np.array(1))fibonaccis = scan(lambda a, _: (a[1], a[0] + a[1]), elems, initializer)But how can I combine the tf.condition with tf.scan? | you can just use tf.wherea = tf.Variable([-1,2,3,-2,5])b = tf.where(tf.less(a, 0), tf.abs(a)+tf.constant(m.pi), a) |
How to plot in Wireframe with CSV file - Numpy / Matplotlib I would like to plot in 3D with Pandas / MatplotLib / Numpy as a WireframeI'm using RFID sensors and I'm trying to record the signal I receive at different distance + different angles. And I want to see the correlation between the rising of the distance and the angle.I've already a full CSV file which looks like this :Distance;0 ;23 ;45 ;900 ;-33.24 ;-36.72;-39.335;-35.215 ;-31.73 ;-35.26;-41.56 ;-27.4115 ;-31.175;-36.91;-40.74 ;-44.61525 ;-35.305;-51.13;-45.515;-50.48540 ;-35.205;-49.27;-55.565;-53.6460 ;-41.8 ;-62.19;-58.14 ;-54.68580 ;-47.79 ;-64.24;-58.285;-56.08100 ;-48.43 ;-63.37;-64.595;-60.0120 ;-49.07 ;-66.07;-63.475;-76.0140 ;-50.405;-61.43;-62.635;-76.5160 ;-52.805;-69.25;-71.0 ;-77.0180 ;-59.697;-66.45;-70.1 ;nan200 ;-56.515;-68.60;-73.4 ;nanSo that's why I want to plot in 3D :X Axis : AngleY Axis : DistanceZ Axis : signal (for each couple angle/distance)On the first row we have the name of the index : Distanceand the different angles : 0°, 23°, 45°, 90°And on the first column we have the different distances which represent the Y axis.And the matrix inside represents the signal, so, values of Z Axis...I loaded my rawdata with Numpy :raw_data = np.loadtxt('data/finalData.csv', delimiter=';', dtype=np.string_)Then I used matplotlib to generate my wireframe :angle = raw_data[0 , 1:].astype(float)distance = raw_data[1:, 0 ].astype(float)data = ???? fig = plt.figure()ax = fig.add_subplot(111, projection='3d')Z = dataX, Y = np.meshgrid(angle, distance)ax.plot_wireframe(X, Y, Z)ax.set_xticks(angle)ax.set_yticks(distance[::2])ax.set_xlabel('angle')ax.set_ylabel('distance')plt.title('RSSI/angle/distance in wireframe')plt.savefig('data/3d/3d.png')plt.show()But I don't know how to extract the signal for each couple angle/distance and put it in data. I would like to know how to select the data to create the wireframe or to find another way to extract the data.Thank you ! | I read the data in with pandas then grabbed the numpy arrays. Note the use of .values.import pandas as pdimport matplotlib.pylab as pltimport numpy as npfrom mpl_toolkits.mplot3d import axes3ddf= pd.read_csv('test.txt', sep=';')df.index = df.Distancedel df['Distance']raw_data = dfangle = raw_data.columns.to_numpy().astype(float)distance = raw_data.index.to_numpy().astype(float)data = raw_data.to_numpy()fig = plt.figure()ax = fig.add_subplot(111, projection='3d')Z = dataX, Y = np.meshgrid(angle, distance)ax.plot_wireframe(X, Y, Z)ax.set_xticks(angle)ax.set_yticks(distance[::2])ax.set_xlabel('angle')ax.set_ylabel('distance')plt.title('RSSI/angle/distance in wireframe')plt.savefig('data/3d/3d.png')plt.show()Edit Jan 2021: Pandas recommends user use to_numpy() instead of values now. see: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.values.html |
Append dataframes in a loop from files located in different directories? I want to create one pandas dataframe from files which are in different directories. In these directories there are also other files, and I want to read only .parquet files.I created a function but it returns nothing:def all_files(root, extensions): files = pd.DataFrame() for dir_path, dir_names, file_names in os.walk(root): for file in file_names: if os.path.splitext(file)[1] in extensions: data = pd.read_parquet(os.path.join(dir_path, file)) files.append(data) return filesI'm calling this function like this:one_file = all_files(".", [".parquet"])When I replace return files with return data it correctly returns one of the files, so the issue may lie in the line files.append(data). I would be happy with any advice. | pandas.DataFrame.append does not work in place; it returns a new object (unlike the append method of the built-in Python list). Try replacingfiles.append(data)withfiles = files.append(data)
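Note that DataFrame.append was deprecated in pandas 1.4 and removed in 2.0, so on current versions the usual pattern is to collect the pieces in a list and concatenate once at the end - a sketch of the question's function rewritten that way:

    import os
    import pandas as pd

    def all_files(root, extensions):
        frames = []
        for dir_path, dir_names, file_names in os.walk(root):
            for file in file_names:
                if os.path.splitext(file)[1] in extensions:
                    frames.append(pd.read_parquet(os.path.join(dir_path, file)))
        # one concat at the end is also much cheaper than growing a DataFrame per file
        return pd.concat(frames, ignore_index=True) if frames else pd.DataFrame()

    one_file = all_files(".", [".parquet"])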
Groupby and how value_counts work I've got a dataframe with the following data idpresm teamid competicion fecha local \0 12345 dummy1 ECU D1 2018-07-07 Deportivo Cuenca 1 12345 dummy1 ECU D1 2018-07-03 Liga Dep. Universitaria Quito 2 12345 dummy1 ECU D1 2018-06-24 Universidad Catolica 3 12345 dummy1 ECU D1 2018-06-18 Club Sport Emelec 4 12345 dummy1 ECU D1 2018-06-12 Universidad Catolica 5 12345 dummy1 ECU D1 2018-06-05 Delfin SC 6 12345 dummy1 ECU D1 2018-05-31 Sociedad Deportiva Aucas 7 12345 dummy1 ECU D1 2018-05-26 Universidad Catolica 8 12345 dummy1 ECU D1 2018-05-12 Universidad Catolica 9 12345 dummy1 ECU D1 2018-05-05 Macara 10 12345 dummy1 ECU D1 2018-04-28 Universidad Catolica 11 12345 dummy1 ECU D1 2018-04-21 Guayaquil City 12 12345 dummy1 ECU D1 2018-04-14 Universidad Catolica 13 12345 dummy1 ECU D1 2018-04-07 CD El Nacional 14 12345 dummy1 ECU D1 2018-03-31 Universidad Catolica 15 12345 dummy1 ECU D1 2018-03-25 Independiente Jose Teran 16 12345 dummy1 ECU D1 2018-03-20 Universidad Catolica 17 12345 dummy1 ECU D1 2018-03-10 Tecnico Universitario 18 12345 dummy1 INT CF 2018-03-09 Colchagua CD 19 12345 dummy1 ECU D1 2018-03-04 Universidad Catolica aw homeha line awayha r1 r3 0 2.39 0.96 0 0.80 1 1 1 3.79 0.85 0.5 0.91 2 1 2 9.32 1.00 1.5 0.84 4 0 3 5.80 0.99 1 0.85 2 3 4 2.93 0.85 0/0.5 0.97 1 1 5 3.86 1.04 0.5 0.80 5 2 6 2.61 0.85 0 0.99 0 1 7 3.32 1.04 0/0.5 0.80 1 1 8 5.56 0.90 1 0.94 2 1 9 2.82 0.70 0 1.16 1 2 10 3.60 1.00 0.5 0.84 3 1 11 2.20 1.04 0 0.80 1 1 12 4.07 0.99 0.5 0.85 2 0 13 2.77 0.97 0/0.5 0.85 0 0 14 3.36 0.80 0.5 1.02 3 1 15 6.11 0.97 0.5 0.85 2 1 16 2.03 0.91 0/-0.5 0.85 2 0 17 2.21 0.70 0/-0.5 1.13 0 2 18 1.44 NaN NaN NaN 0 0 19 2.76 0.80 0 1.02 1 2 what I do is I gruopby by local column, and then I intend to get the average of the column r1, for that I do the followinghomedata.groupby('local')['r1'].agg({'media':np.average,'contador': lambda x: x.value_counts()})I would expect a column of integers in 'contador'. what I get is this media contadorlocal CD El Nacional 0.000000 1Club Sport Emelec 2.000000 1Colchagua CD 0.000000 1Delfin SC 5.000000 1Deportivo Cuenca 1.000000 1Guayaquil City 1.000000 1Independiente Jose Teran 2.000000 1Liga Dep. Universitaria Quito 2.000000 1Macara 1.000000 1Sociedad Deportiva Aucas 0.000000 1Tecnico Universitario 0.000000 1Universidad Catolica 2.111111 [3, 3, 2, 1]Why do I get a list instead of a 9? | You are looking for 'size'. For common functions, you should trust strings are mapped to efficient algorithms. For example:d = {'media': 'mean', 'contador': 'size'}res = homedata.groupby('local')['r1'].agg(d) I would expect a column of integers in 'contador'.This is not what you should expect. First note that pd.Series.value_counts returns a pd.Series object of counts, not an integer. It's unclear what integers you expect this method to return.The reason why some values are integers and others lists indicates that groupby is performing some transformation: it assumes that if value_counts returns a series of length 1 you are only interested in the first value of that series.To illustrate, let's look at a minimal example of what you're seeing:import pandas as pddf = pd.DataFrame([['A', 1], ['B', 2], ['B', 2], ['C', 4], ['B', 2], ['B', 6]], columns=['Group', 'Value'])res = df.groupby('Group')['Value'].agg({'counts': lambda x: x.value_counts()})print(res) countsGroup A 1B [3, 1]C 1 |
I have a workbook with multiple sheets and I want each of them to be assigned as an individual dataframe in Python Example: Example_workbook has 20 sheets.I want each of them to be assigned as an individual dataframe in Python.I have tried the code below, but it only gets a single sheet at a time.Does anyone know how we can use a "def" function to iterate through the sheets and assign each of them as a new dataframe?e.g.df = pd.read_excel("practice1.xlsx",sheet_name=0) | The read_excel method reads all the sheets at once if you set the sheet_name kwarg to None.sheets = pd.read_excel("practice1.xlsx",sheet_name=None) # this is a dictfor sheet_name, df in sheets.items(): "calculations on the dataframe df"You can read more info about the sheet_name kwarg here
Python Pandas | Create separate lists for each of the columns I am not sure how to use tolist to achieve the following. I have a dataframe like this:Param_1 Param_2 Param_3-0.171321 0.0118587 -0.1487521.93377 0.011752 1.97074.10144 0.0112963 4.068616.25064 0.0103071 5.83927What I want is to create separate lists for each of the columns, the list name being the column label.I don't want to keep doing:Param_1 = df["Param_1"].values.tolist()Please let me know if there's a way to do this. Thanks. | Adding .Tdf.values.T.tolist()Out[465]: [[-0.171321, 1.93377, 4.10144, 6.25064], [0.0118587, 0.011752, 0.011296299999999999, 0.0103071], [-0.148752, 1.9707, 4.06861, 5.83927]]Or we can create the dict {x:df[x].tolist() for x in df.columns}Out[489]: {'Param_1': [-0.171321, 1.93377, 4.10144, 6.25064], 'Param_2': [0.0118587, 0.011752, 0.011296299999999999, 0.0103071], 'Param_3': [-0.148752, 1.9707, 4.06861, 5.83927]}Or using locals (Not recommended but seems like what you need)variables = locals()for key in df.columns: variables["{0}".format(key)]= df[key].tolist()Param_1Out[501]: [-0.171321, 1.93377, 4.10144, 6.25064] |
Python-Pandas: How do I create a create columns from rows in a DataFrame without redundancy? I Joined multiple DataFrames and now I got only one DataFrame. Now I want to make the same ID rows to columns without redundancy. To make it clear:The DataFrame that I have now: column1 column2 column3row1 2 4 8row2 1 18 7row3 54 24 69row3 54 24 10row4 26 32 8row4 26 28 8You can see that I have two row3 and row4 but they are different in column2 and column3This is the DataFrame that I would like to get: column1 column2 column3 row3_a row4_arow1 2 4 8 NULL NUllrow2 1 18 7 NULL NULLrow3 54 24 69 10 NULLrow4 26 28 8 NULL 28Any ideas how should I solve this? | This is a weird reshaping as you will have ambiguity if there are also duplicates in column1 or column2. Thus having a MultiIndex is probably a good solution.This solution reshapes using a combination of melt + drop_duplicates and pivotfrom string import ascii_lowercaseletters = dict(enumerate(ascii_lowercase, start=1))# add a/b/c to duplicated rowssuffix = df.groupby(level=0).cumcount().map(letters)idx2 = (df.index+suffix).fillna('')df2 = ( df.assign(row=idx2) .reset_index() .melt(id_vars=['index', 'row']) .drop_duplicates(['variable', 'value']) .pivot(index='index', columns=['variable', 'row'], values='value') .rename_axis(columns=(None, None), index=None) # cleanup index names)output: column1 column2 column3 row4a row3arow1 2.0 4.0 NaN 8.0 NaNrow2 1.0 18.0 NaN 7.0 NaNrow3 54.0 24.0 NaN 69.0 10.0row4 26.0 32.0 28.0 NaN NaNYou can flatten the multiindex if you want: df2.columns = df2.columns.map(''.join), of if really you want your ambiguous names: df2.columns = df2.columns.map(max) |
How could I detect subtypes in pandas object columns? I have the next DataFrame:df = pd.DataFrame({'a': [100, 3,4], 'b': [20.1, 2.3,45.3], 'c': [datetime.time(23,52), 30,1.00]})and I would like to detect subtypes in columns without explicit programming a loop, if possible.I am looking for the next output:column a = [int]column b = [float]column c = [datetime.time, int, float] | You should appreciate that with Pandas you can have 2 broad types of series:Optimised structures: Usually numeric data, this includes np.datetime64 and bool.object dtype: Used for series with mixed types or types which cannot be held natively in a NumPy array. The series is structured as a sequence of pointers to arbitrary Python objects and is generally inefficient.The reason for this preamble is you should only ever need to apply element-wise logic to the second type. Data in the first category is homogeneous by nature.So you should separate your logic accordingly.Regular dtypesUse pd.DataFrame.dtypes:print(df.dtypes)a int64b float64c objectdtype: objectobject dtypeIsolate these series via pd.DataFrame.select_dtypes and then use a dictionary comprehension:obj_types = {col: set(map(type, df[col])) for col in df.select_dtypes(include=[object])}print(obj_types){'c': {int, datetime.time, float}}You will need to do a little more work to get the exact format you require, but the above should be your plan of attack. |
How to check if a file contains email addresses or md5 hashes using python How to check whether a source_file contains email addresses or md5 hashes once you download it:data2 = pd.read_csv(source_file, header=None)I tried using regex and str.contains... but I am not able to figure out how to proceed. Once that is checked, I need to proceed accordingly for the rest of the script.source_file1:[email protected]@gmail.comsource_file2:d131dd02c5e6vrc455ad340609f4fw02So far, I have tried:if(data2['email/md5'].str.contains(r'[a-zA-Z0-9._-]+@[a-zA-Z.]+')==1): print "yes" | Try this pattern r'@\w+\.com'.Ex:import pandas as pddf1 = pd.read_csv(filename1, names=['email/md5'])if df1['email/md5'].str.contains(r'@\w+\.com').all(): print("Email")else: print("md5")
Finding hierarchical structure in messy energy data I have energy profile data (sampled at 3 hour intervals) for about 25 electricity meters in a building as pandas dataframe time series.The meters form a hierarchical structure where the top level meters include consumption data for the lower level meters.For example , ( a possible layered structure )total - A - A1 - A2 - B - C - C1 - C2 - C21 - C22where the lower levels add up to higher level consumption.(eg. C = C1 + C2)Now the task is to identify the inherent structure present in the data to use for other energy data analysis.Is there any algorithm that can be used to detect this layered structure from messy data?Must I exhaustively try all possible combinations for lets say 4 level structures to identify a possible match ( with some tolerance since the data is messy)?Kindly advise certain strategies to think about this problem differently from an algorithmic perspective.Note: The meter names are numbers and can not be interpreted to be different levels directly. I do not have a metering strategy . The magnitude of energy consumption varies (for eg. it may well be the case that A2 > C (in the above fig.)) Put in a better way , the hierarchy can only represent relative magnitudes between levels. | This general problem is very close to 3SUM, unfortunately a solution has not been found with a complexity less than quadratic. It is likely that your best solution won't be much better than exhaustively trying combinations, however with n = 25 that shouldn't be too much of an issue. |
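As a rough illustration of the "exhaustively try combinations" route, a brute-force sketch that, for a candidate parent meter, looks for small sets of other meters whose summed profiles match it within a tolerance (the column names, the tolerance metric and the cap on children are all assumptions to keep the search tractable for ~25 meters):

    from itertools import combinations
    import numpy as np
    import pandas as pd

    def find_children(df, parent, max_children=4, rel_tol=0.05):
        # df: one column per meter, one row per 3-hour timestamp
        others = [c for c in df.columns if c != parent]
        target = df[parent].to_numpy()
        scale = np.abs(target).mean() + 1e-9
        matches = []
        for k in range(2, max_children + 1):
            for combo in combinations(others, k):
                gap = np.abs(df[list(combo)].sum(axis=1).to_numpy() - target).mean()
                if gap / scale < rel_tol:          # messy data: allow a relative error
                    matches.append(combo)
        return matches

    # toy example: meter 'C' is the sum of 'C1' and 'C2' plus noise
    rng = np.random.default_rng(0)
    c1, c2 = rng.random(100), rng.random(100)
    toy = pd.DataFrame({'C1': c1, 'C2': c2, 'A': rng.random(100),
                        'C': c1 + c2 + rng.normal(0, 0.01, 100)})
    print(find_children(toy, 'C'))   # expected to include ('C1', 'C2')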
How to use tensorflow-wavenet I am trying to use the tensorflow-wavenet program for text to speech.These are the steps:Download TensorflowDownload librosaInstall requirements pip install -r requirements.txtDownload corpus and put into directory named "corpus"Train the machine python train.py --data_dir=corpusGenerate audio python generate.py --wav_out_path=generated.wav --samples 16000 model.ckpt-1000After doing this, how can I generate a voice read-out of a text file? | According to the tensorflow-wavenet page: Currently there is no local conditioning on extra information which would allow context stacks or controlling what speech is generated.You can find more information about current development of the project by reading the issues on the repository (local conditioning is a desired feature!)The Wavenet paper compares Wavenet to two TTS baselines, one of which appears to have code for training available online: http://hts.sp.nitech.ac.jp |
tensorflow evaluation and earlystopping gives infinity overflow error I have a model as seen in the code below, but when trying to evaluate it or use early stopping on it, it gives me the following error: numdigits = int(np.log10(self.target)) + 1OverflowError: cannot convert float infinity to integerI must state that without using .EarlyStopping or model.evaluate everything works well.I know that np.log10(0) gives -inf so that could be a potential cause, but why is there a 0 there in the first place and how can it be prevented? How can this problem be fixed?NOTESthis is the code I use:import tensorflow as tffrom tensorflow import kerasTRAIN_PERCENT = 0.9model = keras.Sequential([ keras.layers.Dense(128, input_shape=(100,), activation='relu'), keras.layers.Dense(128, activation='relu'), keras.layers.Dense(100)])earlystop_callback = keras.callbacks.EarlyStopping(min_delta=0.0001, patience=1 , monitor='accuracy' )optimizer = keras.optimizers.Adam(lr=0.01)model.compile(optimizer=optimizer, loss="mse", metrics=['accuracy'])X_set, Y_set = some_get_data_function()sep = int(len(X_set)/TRAIN_PERCENT)X_train, Y_train = X_set[:sep], Y_set[:sep]X_test, Y_test = X_set[sep:], Y_set[sep:]model.fit(X_train, Y_train, batch_size=16, epochs=5, callbacks=[earlystop_callback])ev = model.evaluate(X_test, Y_test)print(ev)X,Y sets are np arrays. X is an array of arrays of 100 integers between 0 and 10. Y is an array of arrays of 100 integers, all of them either 0 or 1. | Well, it's hard to tell exactly as I can't run the code without an implementation of some_get_data_function(), but recently I got the same error when I mistakenly passed an EMPTY array to model.evaluate. Taking into account that @meTchaikovsky's comment solved your issue, it's certainly due to messed-up input arrays.
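One concrete thing worth checking in the code above: sep = int(len(X_set)/TRAIN_PERCENT) divides by 0.9, which makes sep larger than len(X_set), so X_test/Y_test come out empty - exactly the situation the answer describes. A sketch of the intended 90/10 split plus a cheap guard (with stand-in data in place of some_get_data_function()):

    import numpy as np

    TRAIN_PERCENT = 0.9
    X_set = np.random.randint(0, 11, (120, 100))   # stand-ins for some_get_data_function()
    Y_set = np.random.randint(0, 2, (120, 100))

    sep = int(len(X_set) * TRAIN_PERCENT)          # multiply, don't divide, by the ratio
    X_train, Y_train = X_set[:sep], Y_set[:sep]
    X_test, Y_test = X_set[sep:], Y_set[sep:]

    assert len(X_test) > 0, "empty test split - model.evaluate would see no data"
    print(len(X_train), len(X_test))               # 108 12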
Two different numpy arrays are being assigned the same values when only one array is being referenced I'm trying to write some code to carry out the Jacobi method for solving linear equations (I realise my method is not the most efficient way to do this but I am trying to figure out why it's not working).I have tried to debug the problem and noticed the following issue.The code finishes after 2 iterations because on the second iteration on line 32 when xnew[i] is assigned a new value, the same value is also assigned to x[i], even though x[i] is not referenced. Why is this happening on the second iteration and not the first time the for loop is run and is there a way to fix this?Thanks in advanceimport numpy as npA = np.array( [[0.93, 0.24, 0], [0.04, 0.54, 0.26], [1, 1, 1]])b = np.array([[6.0], [2.0], [10.0]])n , m = np.shape(A) x = np.zeros(shape=(n,1))xnew = np.zeros(shape=(n,1))iterlimit = 100 tol = 0.0000001 for iteration in range(iterlimit): convergence = True for i in range(n): sum=0 for j in range(n): if j != i: sum = sum + (A[i,j] * x[j]) #on second iteration (iteration =1) below line begins to #assign x[i] the same values as it assigns xnew[i] causing the #convergence check below to not run and results in a premature break xnew[i] = 1/A[i,i] * (b[i] - sum) if abs(xnew[i]-x[i]) > tol: convergence = False if convergence: break x = xnewprint("Iteration:", iteration+1)print("Solution:")print(np.matrix(xnew)) | x = xnewThis line assigns xnew to x. Not the contents of xnew, but the array itself. So after your first iteration, x and xnew reference the same array in memory.Try instead x[:] = xnew[:] |
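A tiny standalone demo of the aliasing, in case it helps make the difference concrete:

    import numpy as np

    xnew = np.zeros((3, 1))
    x = xnew                  # x and xnew are now the SAME array object
    xnew[0] = 5.0
    print(x[0], x is xnew)    # [5.] True  -> changing xnew "changes" x too

    x = np.zeros((3, 1))
    x[:] = xnew               # copies the values; x stays a separate array
    xnew[1] = 7.0
    print(x[1], x is xnew)    # [0.] False -> x is unaffected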
Python 2.7 - pandas.read_table - how to import quadruple-pipe-separated fields from flat file I am a decent SAS programmer, but I am quite new in Python. Now, I have been given Twitter feeds, each saved into very large flat files, with headers in row #1 and a data structure like the below:CREATED_AT||||ID||||TEXT||||IN_REPLY_TO_USER_ID||||NAME||||SCREEN_NAME||||DESCRIPTION||||FOLLOWERS_COUNT||||TIME_ZONE||||QUOTE_COUNT||||REPLY_COUNT||||RETWEET_COUNT||||FAVORITE_COUNTTue Nov 14 12:33:00 +0000 2017||||930413253766791168||||ICYMI: Football clubs join the craft beer revolution! A good read|||| ||||BAB||||BABBrewers||||Monthly homebrew meet-up at 1000 Trades, Jewellery Quarter. First Tuesday of the month. All welcome, even if you've never brewed before.||||95|||| ||||0||||0||||0||||0Tue Nov 14 12:34:00 +0000 2017||||930413253766821456||||I'm up for it|||| ||||Misty||||MistyGrl||||You CAN DO it!||||45|||| ||||0||||0||||0||||0I guess it's like that because any sort of characters can be found in a Twitter feed, but a quadruple pipe is unlikely enough. I know some people use JSON for that, but I've got these files as such: lots of them. I could use SAS to easily transform these files, but I prefer to "go pythonic", this time.Now, I cannot seem to find a way to make Python (2.7) understand that the quadruple pipe is the actual separator. The output from the code below:import pandas as pdwith open('C:/Users/myname.mysurname/Desktop/my_twitter_flow_1.txt') as theInFile: inTbl = pd.read_table(theInFile, engine='python', sep='||||', header=1) print inTbl.head()seem to suggest that Python does not see the distinct fields as distinct but, simply, brings in each of the first 5 rows, up to the line feed character, ignoring the |||| separator. Basically, I am getting an output like the one I wrote above to show you the data structure. Any hints? | Using just the data in your question:>>> df = pd.read_csv('rio.txt', sep='\|{4}', skip_blank_lines=True, engine='python')>>> df CREATED_AT ID \0 Tue Nov 14 12:33:00 +0000 2017 930413253766791168 1 Tue Nov 14 12:34:00 +0000 2017 930413253766821456 TEXT IN_REPLY_TO_USER_ID \0 ICYMI: Football clubs join the craft beer revo... 1 I'm up for it NAME SCREEN_NAME DESCRIPTION \0 BAB BABBrewers Monthly homebrew meet-up at 1000 Trades, Jewel... 1 Misty MistyGrl You CAN DO it! FOLLOWERS_COUNT TIME_ZONE QUOTE_COUNT REPLY_COUNT RETWEET_COUNT \0 95 0 0 0 1 45 0 0 0 FAVORITE_COUNT 0 0 1 0 Notice the sep parameter. When it's more than one character long and not equal to '\s+' it's interpreted as a regular expression. But the '|' character has special meaning in a regex, hence it must be escaped, using the '\' character. I could simply have written sep='\|\|\|\|'; however, I've used an abbreviation. |
Keras - method on_batch_end is slow but only callback I have is checkpoint I set up a network with keras using TensorFlow backend.When I train my network I often times keep getting message:UserWarning: Method on_batch_end() is slow compared to the batch update (0.195523). Check your callbacks. % delta_t_median)The issue is that my network is set up with only checkpoint callback:checkpoint = ModelCheckpoint(filepath, monitor='val_loss', verbose=1, save_best_only=True, mode='min')callbacks_list = [checkpoint]As far as I see in documentation this method is called only on epoch end, so it can't slow down on_batch_end method. Can anyone provide some information on what is the issue? | This is most probably a Generator (fit_generator()) issue. When using a generator as data source it has to be called at the end of a batch. Consider revisiting your generator code, using multiprocessing (workers > 1) or a higher batchsize (if possible) |
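If the generator does turn out to be the bottleneck, these are the knobs usually worth trying (parameter names as in the Keras fit_generator API; model and train_generator are placeholders for your own objects, and use_multiprocessing requires a generator that is safe to run in multiple processes):

    # hypothetical call - tune workers / max_queue_size for your data pipeline
    model.fit_generator(
        train_generator,
        steps_per_epoch=steps_per_epoch,
        epochs=epochs,
        callbacks=callbacks_list,     # the ModelCheckpoint from above
        workers=4,                    # parallel generator workers
        use_multiprocessing=True,
        max_queue_size=10,            # batches pre-fetched ahead of training
    )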
How can I create an empty array in Python like a C++ array I need to create an empty nd-array in Python, without the zeros or ones functions, that looks like what this C++ declaration gives for a 3*4 array of integers:int x[3][4]Please help me | NumPy has a function for that: numpy.empty
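A minimal example for the 3*4 integer case from the question - note that, unlike zeros/ones, the contents of an empty array are whatever happened to be in memory, so write to it before reading from it:

    import numpy as np

    x = np.empty((3, 4), dtype=int)   # uninitialized 3x4 integer array
    print(x.shape, x.dtype)           # (3, 4) int64 (int32 on some platforms)
    x[0, 0] = 42                      # fill values in before you rely on them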
Why might pandas resort the dataframe after joining? I am writing an application where I need to pull in a single column from another dataframe. I'm getting some strange behavior. When I run the function using one dataset, everything works great. When it executes on a secondary dataset, the same code resorts the data based on the index. I'm pulling my hair out trying to figure out why the very same code is producing two different results. Here's the code. I realize this isn't MCVE but I have verified this is exactly where the resorting is happening. I'm hoping someone knows in general why pandas might resort or not resort in various circumstances.def new_curr_need(self, need): self.main_df.drop('Curr_need', axis=1, inplace=True) self.main_df = ( self.main_df.join(self.need_df[need], how='left')) #if it resorts, happens after the join self.main_df.rename({need:'Curr_need'}, axis='columns', inplace=True)Potentially relevant info on the datasets:The main_df and need_df index is a string (customer name) and is essentially the same in both datasetsThe only major difference between the two datasets is that the resorting one is a good bit widerElsewhere in my code is the ability for the user to sort the data in a customized way. That sorting will hold after running the function above using dataset 1 but not dataset 2. | Pandas' left join operation reorders the index of the right dataframe so that it matches the index of the left dataframe.For example, the following code produces a dataframe where the index of b is rearranged to match the index of a:a = pd.DataFrame({'x':[1,2,3]})b = pd.DataFrame({'y':[1,2,3]})a.index = [2,0,1]a.join(b, how='left') x y2 1 30 2 11 3 2If the indices of the dataframes you join are the same, the values will remain in the same order; if the index of the right dataframe is resorted, the values will be resorted. |
Concat 2 columns in a new phrase column using pandas DataFrame I have a DataFrame like this:>>> df = pd.DataFrame({'id_sin':['s123','s124','s125','s126','s127'], 'num1':[12,10,23,6,np.nan], 'num2':['BG','TC','AB','RC',np.nan], 'fr':[1,1,1,1,0],})>>> df fr id_sin num1 num20 1 s123 12 BG1 1 s124 10 TC2 1 s125 23 AB3 1 s126 6 RC4 0 s127 NaN NaNI want to concatenate the columns num1 & num2 (num2 is num1) in a phrase like this with fr being 1: fr id_sin num1 num2 phrase0 1 s123 12 BG BG is 121 1 s124 10 TC TC is 102 1 s125 23 AB AB is 233 1 s126 6 RC RC is 6I tried this but doesn't work:df['phrase'] = str(df['num2']) + ' is ' + str(df['num1']) | Edit:if you want num1 has no decimal .0, convert it to Int64:df.num1 = df.num1.astype('Int64')Out[32]: id_sin num1 num2 fr0 s123 12 BG 11 s124 10 TC 12 s125 23 AB 13 s126 6 RC 14 s127 NaN NaN 0Try Series.str.catdf.num2.str.cat(df.num1.astype(str), sep=' is ')Out[2055]:0 BG is 121 TC is 102 AB is 233 RC is 64 NaNName: num2, dtype: objectOn @rafael comment. His works, Just the typo in it causing error. It is:dn['num2'].astype(str) + ' is ' + dn['num1'].astype(str) |
Transform time series data set to supervised learning data set I have a data set with time series (on a daily basis) for multiple items (e.g. users).The data looks simplified like this:https://i.ibb.co/Pj4TnHW/trans-original.jpg (I can't post images, because of missing rep. points, sorry)This data set has all the same attributes (e.g. measures) for each user. Those measures are taken over a time window on a daily basis. Every user has its own "event date".My goal is to transform this time series (row-oriented) data set to a dataset, which could be used for supervised learning.My desired layout would look like this:https://i.ibb.co/8DxYpCy/Unbenannt.jpgCurrently, I apply my solution on a dataset with ~60 measures.So far I achieved this by using an iteration over "user_id" and applying multiple steps with the pandas.melt(), pandas.transpose() functions.But this requires a lot of preformatting, and becomes slower with larger data sets.Is there a better way to do my transformation? I read about this https://machinelearningmastery.com/convert-time-series-supervised-learning-problem-python/ but this seems to be another type of problem...//EDIT #1: As requested, I created the smallest possible notebook / python script, with a simplified dataset to demonstrate what I'm doing: https://www.file-upload.net/download-13590592/timeseries_to_supervised.zip.html(Jupyter Notebook, exported HTML-Version, sample input dataset) | I used to do stuff like this with R, it's a language well designed to manipulate rows (functional programming). You can use the library datatable, it's very fast. If I may ask, which column are you trying to predict? Be careful to not predict an outcome based on present or future data, you can only use the past :)
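Staying in pandas, a rough sketch of the reshaping the question describes (long daily rows per user to one wide row per user) - the column names here are made up, and with ~60 measures you would repeat the pivot per measure or use pivot_table with several value columns:

    import pandas as pd

    # hypothetical long layout: one row per (user_id, date) with a measure m1
    long_df = pd.DataFrame({
        "user_id": [1, 1, 1, 2, 2, 2],
        "date": pd.to_datetime(["2020-01-01", "2020-01-02", "2020-01-03"] * 2),
        "m1": [10, 11, 12, 20, 21, 22],
    })

    long_df = long_df.sort_values(["user_id", "date"])
    long_df["day"] = long_df.groupby("user_id").cumcount()     # 0, 1, 2, ... per user
    wide = long_df.pivot(index="user_id", columns="day", values="m1")
    wide.columns = [f"m1_t{d}" for d in wide.columns]
    print(wide.reset_index())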
Pandas - Alternative to rank() function that gives unique ordinal ranks for a column At this moment I am writing a Python script that aggregates data from multiple Excel sheets. The module I choose to use is Pandas, because of its speed and ease of use with Excel files. The question is only related to the use of Pandas and me trying to create a additional column that contains unique, integer-only, ordinal ranks within a group.My Python and Pandas knowledge is limited as I am just a beginner.The GoalI am trying to achieve the following data structure. Where the top 10 adwords ads are ranked vertically on the basis of their position in Google. In order to do this I need to create a column in the original data (see Table 2 & 3) with a integer-only ranking that contains no duplicate values. Table 1: Data structure I am trying to achieve device , weeks , rank_1 , rank_2 , rank_3 , rank_4 , rank_5 mobile , wk 1 , string , string , string , string , string mobile , wk 2 , string , string , string , string , string computer, wk 1 , string , string , string , string , string computer, wk 2 , string , string , string , string , stringThe ProblemThe exact problem I run into is not being able to efficiently rank the rows with pandas. I have tried a number of things, but I cannot seem to get it ranked in this way. Table 2: Data structure I have weeks device , website , ranking , adtext wk 1 mobile , url1 , *2.1 , string wk 1 mobile , url2 , *2.1 , string wk 1 mobile , url3 , 1.0 , string wk 1 mobile , url4 , 2.9 , string wk 1 desktop , *url5 , 2.1 , string wk 1 desktop , url2 , *1.5 , string wk 1 desktop , url3 , *1.5 , string wk 1 desktop , url4 , 2.9 , string wk 2 mobile , url1 , 2.0 , string wk 2 mobile , *url6 , 2.1 , string wk 2 mobile , url3 , 1.0 , string wk 2 mobile , url4 , 2.9 , string wk 2 desktop , *url5 , 2.1 , string wk 2 desktop , url2 , *2.9 , string wk 2 desktop , url3 , 1.0 , string wk 2 desktop , url4 , *2.9 , stringTable 3: The table I cannot seem to create weeks device , website , ranking , adtext , ranking wk 1 mobile , url1 , *2.1 , string , 2 wk 1 mobile , url2 , *2.1 , string , 3 wk 1 mobile , url3 , 1.0 , string , 1 wk 1 mobile , url4 , 2.9 , string , 4 wk 1 desktop , *url5 , 2.1 , string , 3 wk 1 desktop , url2 , *1.5 , string , 1 wk 1 desktop , url3 , *1.5 , string , 2 wk 1 desktop , url4 , 2.9 , string , 4 wk 2 mobile , url1 , 2.0 , string , 2 wk 2 mobile , *url6 , 2.1 , string , 3 wk 2 mobile , url3 , 1.0 , string , 1 wk 2 mobile , url4 , 2.9 , string , 4 wk 2 desktop , *url5 , 2.1 , string , 2 wk 2 desktop , url2 , *2.9 , string , 3 wk 2 desktop , url3 , 1.0 , string , 1 wk 2 desktop , url4 , *2.9 , string , 4The standard .rank(ascending=True), gives averages on duplicate values. 
But since I use these ranks to organize them vertically this does not work out.df = df.sort_values(['device', 'weeks', 'ranking'], ascending=[True, True, True])df['newrank'] = df.groupby(['device', 'week'])['ranking'].rank( ascending=True)The .rank(method="dense", ascending=True) maintains duplicate values and also does not solve my problemdf = df.sort_values(['device', 'weeks', 'ranking'], ascending=[True, True, True])df['newrank'] = df.groupby(['device', 'week'])['ranking'].rank( method="dense", ascending=True)The .rank(method="first", ascending=True) throws a ValueErrordf = df.sort_values(['device', 'weeks', 'ranking'], ascending=[True, True, True])df['newrank'] = df.groupby(['device', 'week'])['ranking'].rank( method="first", ascending=True)ADDENDUM: If I would find a way to add the rankings in a column, I would then use pivot to transpose the table in the following way.df = pd.pivot_table(df, index = ['device', 'weeks'], columns='website', values='adtext', aggfunc=lambda x: ' '.join(x))My question to youI was hoping any of you could help me find a solution for this problem. This could either an efficient ranking script or something else to help me reach the final data structure.Thank you!SebastiaanEDIT: Unfortunately, I think I was not clear in my original post. I am looking for a ordinal ranking that only gives integers and has no duplicate values. This means that when there is a duplicate value it will randomly give one a higher ranking than the other.So what I would like to do is generate a ranking that labels each row with an ordinal value per group. The groups are based on the week number and device. The reason I want to create a new column with this ranking is so that I can make top 10s per week and device.Also Steven G asked me for an example to play around with. I have provided that here. Example data can be pasted directly into python! IMPORTANT: The names are different in this sample. The dataframe is called placeholder, the column names are as follows: 'week', 'website', 'share', 'rank_google', 'device'. 
data = {u'week': [u'WK 1', u'WK 2', u'WK 3', u'WK 4', u'WK 2', u'WK 2', u'WK 1',u'WK 3', u'WK 4', u'WK 3', u'WK 3', u'WK 4', u'WK 2', u'WK 4', u'WK 1', u'WK 1',u'WK3', u'WK 4', u'WK 4', u'WK 4', u'WK 4', u'WK 2', u'WK 1', u'WK 4', u'WK 4',u'WK 4', u'WK 4', u'WK 2', u'WK 3', u'WK 4', u'WK 3', u'WK 4', u'WK 3', u'WK 2',u'WK 2', u'WK 4', u'WK 1', u'WK 1', u'WK 4', u'WK 4', u'WK 2', u'WK 1', u'WK 3',u'WK 1', u'WK 4', u'WK 1', u'WK 4', u'WK 2', u'WK 2', u'WK 2', u'WK 4', u'WK 4',u'WK 4', u'WK 1', u'WK 3', u'WK 4', u'WK 4', u'WK 1', u'WK 4', u'WK 3', u'WK 2',u'WK 4', u'WK 4', u'WK 4', u'WK 4', u'WK 1'],u'website': [u'site1.nl', u'website2.de', u'site1.nl', u'site1.nl', u'anothersite.com',u'url2.at', u'url2.at', u'url2.at', u'url2.at', u'anothersite.com', u'url2.at',u'url2.at', u'url2.at', u'url2.at', u'url2.at', u'anothersite.com', u'url2.at',u'url2.at', u'url2.at', u'url2.at', u'anothersite.com', u'url2.at', u'url2.at',u'anothersite.com', u'site2.co.uk', u'sitename2.com', u'sitename.co.uk', u'sitename.co.uk',u'sitename2.com', u'sitename2.com', u'sitename2.com', u'url3.fi', u'sitename.co.uk',u'sitename2.com', u'sitename.co.uk', u'sitename2.com', u'sitename2.com', u'ulr2.se',u'sitename2.com', u'sitename.co.uk', u'sitename2.com', u'sitename2.com', u'sitename2.com',u'sitename2.com', u'sitename2.com', u'sitename.co.uk', u'sitename.co.uk', u'sitename2.com',u'facebook.com', u'alsoasite.com', u'ello.com', u'instagram.com', u'alsoasite.com', u'facebook.com',u'facebook.com', u'singleboersen-vergleich.at', u'facebook.com', u'anothername.com', u'twitter.com',u'alsoasite.com', u'alsoasite.com', u'alsoasite.com', u'alsoasite.com', u'facebook.com', u'alsoasite.com',u'alsoasite.com'],'adtext': [u'site1.nl 3,9 | < 10\xa0%', u'website2.de 1,4 | < 10\xa0%', u'site1.nl 4,3 | < 10\xa0%',u'site1.nl 3,8 | < 10\xa0%', u'anothersite.com 2,5 | 12,36 %', u'url2.at 1,3 | 78,68 %', u'url2.at 1,2 | 92,58 %',u'url2.at 1,1 | 85,47 %', u'url2.at 1,2 | 79,56 %', u'anothersite.com 2,8 | < 10\xa0%', u'url2.at 1,2 | 80,48 %',u'url2.at 1,2 | 85,63 %', u'url2.at 1,1 | 88,36 %', u'url2.at 1,3 | 87,90 %', u'url2.at 1,1 | 83,70 %',u'anothersite.com 3,1 | < 10\xa0%', u'url2.at 1,2 | 91,00 %', u'url2.at 1,1 | 92,11 %', u'url2.at 1,2 | 81,28 %', u'url2.at 1,1 | 86,49 %', u'anothersite.com 2,7 | < 10\xa0%', u'url2.at 1,2 | 83,96 %', u'url2.at 1,2 | 75,48 %', u'anothersite.com 3,0 | < 10\xa0%', u'site2.co.uk 3,1 | 16,24 %', u'sitename2.com 2,3 | 34,85 %',u'sitename.co.uk 3,5 | < 10\xa0%', u'sitename.co.uk 3,6 | < 10\xa0%', u'sitename2.com 2,1 | < 10\xa0%',u'sitename2.com 2,2 | 13,55 %', u'sitename2.com 2,1 | 47,91 %', u'url3.fi 3,4 | < 10\xa0%',u'sitename.co.uk 3,1 | 14,15 %', u'sitename2.com 2,4 | 28,77 %', u'sitename.co.uk 3,1 | 22,55 %',u'sitename2.com 2,1 | 17,03 %', u'sitename2.com 2,1 | 24,46 %', u'ulr2.se 2,7 | < 10\xa0%',u'sitename2.com 2,0 | 49,12 %', u'sitename.co.uk 3,0 | < 10\xa0%', u'sitename2.com 2,1 | 40,00 %',u'sitename2.com 2,1 | < 10\xa0%', u'sitename2.com 2,2 | 30,29 %', u'sitename2.com 2,0 |47,48 %',u'sitename2.com 2,1 | 32,17 %', u'sitename.co.uk 3,2 | < 10\xa0%', u'sitename.co.uk 3,1 | 12,77 %',u'sitename2.com 2,6 | < 10\xa0%', u'facebook.com 3,2 | < 10\xa0%', u'alsoasite.com 2,3 | < 10\xa0%',u'ello.com 1,8 | < 10\xa0%',u'instagram.com 5,0 | < 10\xa0%', u'alsoasite.com 2,2 | < 10\xa0%',u'facebook.com 3,0 | < 10\xa0%', u'facebook.com 3,2 | < 10\xa0%', u'singleboersen-vergleich.at 2,6 | < 10\xa0%',u'facebook.com 3,4 | < 10\xa0%', u'anothername.com 1,9 | <10\xa0%', u'twitter.com 4,4 | < 10\xa0%',u'alsoasite.com 1,1 | 
12,35 %', u'alsoasite.com 1,1 | 11,22 %', u'alsoasite.com 2,0 | < 10\xa0%',u'alsoasite.com 1,1| 10,86 %', u'facebook.com 3,4 | < 10\xa0%', u'alsoasite.com 1,1 | 10,82 %',u'alsoasite.com 1,1 | < 10\xa0%'],u'share': [u'< 10\xa0%', u'< 10\xa0%', u'< 10\xa0%', u'< 10\xa0%', u'12,36 %', u'78,68 %',u'92,58 %', u'85,47 %', u'79,56 %', u'< 10\xa0%', u'80,48 %', u'85,63 %', u'88,36 %',u'87,90 %', u'83,70 %', u'< 10\xa0%', u'91,00 %', u'92,11 %', u'81,28 %', u'86,49 %',u'< 10\xa0%', u'83,96 %', u'75,48 %', u'< 10\xa0%', u'16,24 %', u'34,85 %', u'< 10\xa0%',u'< 10\xa0%', u'< 10\xa0%', u'13,55 %', u'47,91 %', u'< 10\xa0%', u'14,15 %', u'28,77 %',u'22,55 %', u'17,03 %', u'24,46 %', u'< 10\xa0%', u'49,12 %', u'< 10\xa0%', u'40,00 %',u'< 10\xa0%', u'30,29 %', u'47,48 %', u'32,17 %', u'< 10\xa0%', u'12,77 %', u'< 10\xa0%',u'< 10\xa0%', u'< 10\xa0%', u'< 10\xa0%', u'< 10\xa0%', u'< 10\xa0%', u'< 10\xa0%', u'< 10\xa0%',u'< 10\xa0%', u'< 10\xa0%', u'< 10\xa0%', u'< 10\xa0%', u'12,35 %', u'11,22 %', u'< 10\xa0%',u'10,86 %', u'< 10\xa0%', u'10,82 %', u'< 10\xa0%'],u'rank_google': [u'3,9', u'1,4', u'4,3', u'3,8', u'2,5', u'1,3', u'1,2', u'1,1', u'1,2', u'2,8',u'1,2', u'1,2', u'1,1', u'1,3', u'1,1', u'3,1', u'1,2', u'1,1', u'1,2', u'1,1', u'2,7', u'1,2',u'1,2', u'3,0', u'3,1', u'2,3', u'3,5', u'3,6', u'2,1', u'2,2', u'2,1', u'3,4', u'3,1', u'2,4',u'3,1', u'2,1', u'2,1', u'2,7', u'2,0', u'3,0', u'2,1', u'2,1', u'2,2', u'2,0', u'2,1', u'3,2',u'3,1', u'2,6', u'3,2', u'2,3', u'1,8', u'5,0', u'2,2', u'3,0', u'3,2', u'2,6', u'3,4', u'1,9',u'4,4', u'1,1', u'1,1', u'2,0', u'1,1', u'3,4', u'1,1', u'1,1'],u'device': [u'Mobile', u'Tablet', u'Mobile', u'Mobile', u'Tablet', u'Mobile', u'Tablet', u'Computer',u'Mobile', u'Tablet', u'Mobile', u'Computer', u'Tablet', u'Tablet', u'Computer', u'Tablet', u'Tablet',u'Tablet', u'Mobile', u'Computer', u'Tablet', u'Computer', u'Mobile', u'Tablet', u'Tablet', u'Mobile',u'Tablet', u'Mobile', u'Computer', u'Computer', u'Tablet', u'Mobile', u'Tablet', u'Mobile', u'Tablet',u'Mobile', u'Mobile', u'Mobile', u'Tablet', u'Computer', u'Tablet', u'Computer', u'Mobile', u'Tablet',u'Tablet', u'Tablet', u'Mobile', u'Computer', u'Mobile', u'Computer', u'Tablet', u'Tablet', u'Tablet',u'Mobile', u'Mobile', u'Tablet', u'Mobile', u'Mobile', u'Tablet', u'Mobile', u'Mobile', u'Computer',u'Mobile', u'Tablet', u'Mobile', u'Mobile']}placeholder = pd.DataFrame(data)Error I receive when I use the rank() function with method='first'C:\Users\username\code\report-creator>python recomp-report-04.pyTraceback (most recent call last): File "recomp-report-04.py", line 71, in <module> placeholder['ranking'] = placeholder.groupby(['week', 'device'])['rank_google'].rank(method='first').astype(int) File "<string>", line 35, in rank File "C:\Users\sthuis\AppData\Local\Continuum\Anaconda2\lib\site-packages\pandas\core\groupby.py", line 561, in wrapper raise ValueErrorValueErrorMy solutionEffectively, the answer is given by @Nickil Maveli. A huge thank you! Nevertheless, I thought it might be smart to outline how I finally incorporated the solution.Rank(method='first') is a good way to get an ordinal ranking. But since I was working with numbers that were formatted in the European way, pandas interpreted them as strings and could not rank them this way. I came to this conclusion by the reaction of Nickil Maveli and trying to rank each group individually. 
I did that through the following code.for name, group in df.sort_values(by='rank_google').groupby(['weeks', 'device']): df['new_rank'] = group['ranking'].rank(method='first').astype(int)This gave me the following error:ValueError: first not supported for non-numeric dataSo this helped me realize that I should convert the column to floats. This is how I did it.# Converting the ranking column to a floatdf['ranking'] = df['ranking'].apply(lambda x: float(unicode(x.replace(',','.'))))# Creating a new column with a rankdf['new_rank'] = df.groupby(['weeks', 'device'])['ranking'].rank(method='first').astype(int)# Dropping all ranks after the 10df = df.sort_values('new_rank').groupby(['weeks', 'device']).head(n=10)# Pivotting the columndf = pd.pivot_table(df, index = ['device', 'weeks'], columns='new_rank', values='adtext', aggfunc=lambda x: ' '.join(x))# Naming the columns with 'top' + numberdf.columns = ['top ' + str(i) for i in list(df.columns.values)]So this worked for me. Thank you guys! | I think the way you were trying to use the method=first to rank them after sorting were causing problems. You could simply use the rank method with first arg on the grouped object itself giving you the desired unique ranks per group.df['new_rank'] = df.groupby(['weeks','device'])['ranking'].rank(method='first').astype(int)print (df['new_rank'])0 21 32 13 44 35 16 27 48 29 310 111 412 213 314 115 4Name: new_rank, dtype: int32Perform pivot operation:df = df.pivot_table(index=['weeks', 'device'], columns=['new_rank'], values=['adtext'], aggfunc=lambda x: ' '.join(x))Choose the second level of the multiindex columns which pertain to the rank numbers:df.columns = ['rank_' + str(i) for i in df.columns.get_level_values(1)]dfData:(to replicate)df = pd.DataFrame({'weeks': ['wk 1', 'wk 1', 'wk 1', 'wk 1', 'wk 1', 'wk 1', 'wk 1', 'wk 1', 'wk 2', 'wk 2', 'wk 2', 'wk 2', 'wk 2', 'wk 2', 'wk 2', 'wk 2'], 'device': ['mobile', 'mobile', 'mobile', 'mobile', 'desktop', 'desktop', 'desktop', 'desktop', 'mobile', 'mobile', 'mobile', 'mobile', 'desktop', 'desktop', 'desktop', 'desktop'], 'website': ['url1', 'url2', 'url3', 'url4', 'url5', 'url2', 'url3', 'url4', 'url1', 'url16', 'url3', 'url4', 'url5', 'url2', 'url3', 'url4'], 'ranking': [2.1, 2.1, 1.0, 2.9, 2.1, 1.5, 1.5, 2.9, 2.0, 2.1, 1.0, 2.9, 2.1, 2.9, 1.0, 2.9], 'adtext': ['string', 'string', 'string', 'string', 'string', 'string', 'string', 'string', 'string', 'string', 'string', 'string', 'string', 'string', 'string', 'string']})Note: method=first assigns ranks in the order they appear in the array/series. |
Speed-up cython code I have code that is working in python and want to use cython to speed up the calculation. The function that I've copied is in a .pyx file and gets called from my python code. V, C, train, I_k are 2-d numpy arrays and lambda_u, user, hidden are ints. I don't have any experience in using C or cython. What is an efficient way to make this code faster.Using cython -a for compiling shows me that the code is flawed but how can I improve it. Using for i in prange (user_size, nogil=True):results in Constructing Python slice object not allowed without gil.How has the code to be modified to harvest the power of cython? @cython.boundscheck(False) @cython.wraparound(False) def u_update(V, C, train, I_k, lambda_u, user, hidden): cdef int user_size = user cdef int hidden_dim = hidden cdef np.ndarray U = np.empty((hidden_dim,user_size), float) cdef int m = C.shape[1] for i in range(user_size): C_i = np.zeros((m, m), dtype=float) for j in range(m): C_i[j,j]=C[i,j] U[:,i] = np.dot(np.linalg.inv(np.dot(V, np.dot(C_i,V.T)) + lambda_u*I_k), np.dot(V, np.dot(C_i,train[i,:].T))) return U | You are trying to use cython by diving into the deep end of pool. You should start with something small, such as some of the numpy examples. Or even try to improve on np.diag. i = 0 C_i = np.zeros((m, m), dtype=float) for j in range(m): C_i[j,j]=C[i,j]v. C_i = diag(C[i,:])Can you improve the speed of this simple expression? diag is not compiled, but it does perform an efficient indexed assignment. res[:n-k].flat[i::n+1] = vBut the real problem for cython is this expression:U[:,i] = np.dot(np.linalg.inv(np.dot(V, np.dot(C_i,V.T)) + lambda_u*I_k), np.dot(V, np.dot(C_i,train[i,:].T)))np.dot is compiled. cython won't turn that in to c code, nor will it consolidate all 5 dots into one expression. It also won't touch the inv. So at best cython will speed up the iteration wrapper, but it will still call this Python expression m times.My guess is that this expression can be cleaned up. Replacing the inner dots with einsum can probably eliminate the need for C_i. The inv might make 'vectorizing' the whole thing difficult. But I'd have to study it more. But if you want to stick with the cython route, you need to transform that U expression into simple iterative code, without calls to numpy functions like dot and inv.===================I believe the following are equivalent:np.dot(C_i,V.T)C[i,:,None]*V.TIn:np.dot(C_i,train[i,:].T) if train is 2d, then train[i,:] is 1d, and the .T does nothing.In [289]: np.dot(np.diag([1,2,3]),np.arange(3))Out[289]: array([0, 2, 6])In [290]: np.array([1,2,3])*np.arange(3)Out[290]: array([0, 2, 6])If I got that right, you don't need C_i.======================Furthermore, these calculations can be moved outside the loop, with expressions like (not tested)CV1 = C[:,:,None]*V.T # a 3d arrayCV2 = C * train.T for i in range(user_size): U[:,i] = np.dot(np.linalg.inv(np.dot(V, CV1[i,...]) + lambda_u*I_k), np.dot(V, CV2[i,...]))A further step is to move both np.dot(V,CV...) out of the loop. That may require np.matmul (@) or np.einsum. Then we will havefor i... I = np.linalg.inv(VCV1[i,...]) U[:,i] = np.dot(I+ lambda_u), VCV2[i,])or evenfor i... I[...i] = np.linalg.inv(...) # if inv can't be vectorizedU = np.einsum(..., I+lambda_u, VCV2)This is a rough sketch, and details will need to be worked out. |
Fill gaps in Pandas multi index with start and end timestamp From a DataFrame like the following: value fillstart end2016-07-15 00:46:11 2016-07-19 03:35:34 1 a2016-08-21 07:55:31 2016-08-22 18:24:49 2 b2016-09-26 03:09:12 2016-09-26 06:06:12 3 cI'm looking for a way to add rows filling the gaps, each new row taking the fill column of the existing previous adjacent row as its new value.The output of the previous example would then be: value start end2016-07-15 00:46:11 2016-07-19 03:35:34 12016-07-19 03:35:34 2016-08-21 07:55:31 a2016-08-21 07:55:31 2016-08-22 18:24:49 22016-08-22 18:24:49 2016-09-26 03:09:12 b2016-09-26 03:09:12 2016-09-26 06:06:12 3A vectorized method, avoiding looping over the DataFrame in pure Python, would be heavily preferred as I have to deal with massive amounts of rows. | use DataFrame.stack() method:In [189]: df.stack().reset_index(level=2, drop=True).to_frame('value')Out[189]: valuestart end2016-07-15 00:46:11 2016-07-19 03:35:34 1 2016-07-19 03:35:34 a2016-08-21 07:55:31 2016-08-22 18:24:49 2 2016-08-22 18:24:49 b2016-09-26 03:09:12 2016-09-26 06:06:12 3 2016-09-26 06:06:12 c |
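An alternative, fully vectorized sketch of the gap filling itself (no row-by-row Python loop), assuming the frame has the two-level (start, end) index plus the value and fill columns shown in the question:

import pandas as pd

def fill_gaps(df):
    flat = df.reset_index()
    gaps = pd.DataFrame({
        'start': flat['end'].iloc[:-1].values,    # a gap starts where one interval ends
        'end': flat['start'].iloc[1:].values,     # ...and ends where the next interval starts
        'value': flat['fill'].iloc[:-1].values,   # the gap takes the previous row's fill label
    })
    return (pd.concat([flat[['start', 'end', 'value']], gaps])
              .sort_values('start')
              .set_index(['start', 'end']))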
Pandas between range lookup filtering My data looks like this:

import pandas as pd
df = pd.DataFrame({
    'x_range': ['101-200', '101-200', '201-300', '201-300'],
    'y': [5, 6, 5, 6],
    'z': ['Cat', 'Dog', 'Fish', 'Snake']})

How might I filter on an x value (that fits inside x_range) and a y value to return an appropriate z value? For instance, if x = 248 and y = 5, I'd like to return Fish... | Simple filtering exercise. Add two columns for the range start and end:

df['x_range_start'] = [int(i.split('-')[0]) for i in df.x_range]
df['x_range_end'] = [int(i.split('-')[1]) for i in df.x_range]

Then filter with a single boolean mask (chained indexing like df[...][df.y == y_value] also works, but it triggers pandas warnings, so use .loc):

x_value = 248
y_value = 5
df.loc[(df.x_range_start <= x_value) & (x_value <= df.x_range_end) & (df.y == y_value), 'z']
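The same mask can also be built without the two helper columns, splitting x_range on the fly; a sketch using the same df, x_value and y_value as above:

bounds = df['x_range'].str.split('-', expand=True).astype(int)
mask = (bounds[0] <= x_value) & (x_value <= bounds[1]) & (df['y'] == y_value)
print(df.loc[mask, 'z'])   # -> Fish for x_value = 248, y_value = 5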
Geopandas: How to associate a Point to a Linestring using the original Linestring order Using Geopandas, Shapely:

import geopandas as gpd
from shapely.geometry import Point, LineString

street = gpd.GeoDataFrame({'street': ['st'], 'geometry': [LineString([(1, 1), (2, 2), (3, 1)])]})
pp = gpd.GeoDataFrame({'geometry': [Point((1.9, 1.9)), Point((1.5, 1.5)), Point((2.5, 1.5)), Point((1.2, 1.2))]})
print(street)
print(pp)

Suppose I have a Linestring that represents a (cornered) street:

LineString([(1, 1), (2, 2), (3, 1)])

Note that the order of points in this linestring matters, because LineString([(1, 1), (3, 1), (2, 2)]) would represent a very different street. Now, suppose I have a list of points that belong to my street:

Point((1.9, 1.9))
Point((1.5, 1.5))
Point((2.5, 1.5))
Point((1.2, 1.2))

I want to create a new Linestring where all the Points are "merged" with the original street coordinates. This "merge" mechanism has to maintain the original street shape, as follows:

LineString([(1, 1), (1.2, 1.2), (1.5, 1.5), (1.9, 1.9), (2, 2), (2.5, 1.5), (3, 1)])

Any ideas how to approach this? | Comment: I don't know of an existing function that does this. It seems your challenge is to identify the segment of the street that each point has to be added to. You can calculate the linear distance of the point to each segment; the segment with the minimum distance is the one to insert it into. By the way, all shapely objects already implement the distance method.
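One concrete way to get that ordering, not taken from the comment above, is shapely's linear referencing: line.project(point) returns each point's distance along the street, so sorting the original vertices together with the new points by that distance preserves the LineString's direction. This sketch assumes the new points lie on, or very close to, the street:

line = street.geometry.iloc[0]
coords = list(line.coords) + [(p.x, p.y) for p in pp.geometry]
merged = LineString(sorted(set(coords), key=lambda c: line.project(Point(c))))
print(merged)
# roughly: LINESTRING (1 1, 1.2 1.2, 1.5 1.5, 1.9 1.9, 2 2, 2.5 1.5, 3 1)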
reduce Pandas DataFrame by selecting specific rows (max/min) groupby I have a long pandas DataFrame and like to select a single row of a subset if a criterion applies (min of 'value' in my case).I have a dataframe that starts like this: time name_1 name_2 idx value0 0 A B 0 0.9273231 0 A B 1 0.417376 2 0 A B 2 0.167633 3 0 A B 3 0.4583074 0 A B 4 0.312337 5 0 A B 5 0.8768706 0 A B 6 0.096035 7 0 A B 7 0.6564548 0 A B 8 0.261049 9 0 A B 9 0.22029410 0 A C 0 0.902397 11 0 A C 1 0.88739412 0 A C 2 0.59368613 0 A C 3 0.394785 14 0 A C 4 0.56956615 0 A C 5 0.544009 16 0 A C 6 0.40480317 0 A C 7 0.209683 18 0 A C 8 0.30994619 0 A C 9 0.049598I like to select the rows with the minimum of 'value' to a given 'time','name_1' and 'idx'.This code does what I want:import pandas as pdimport numpy as npvalues = np.array([0.927323 , 0.41737634, 0.16763339, 0.45830677, 0.31233708, 0.87687015, 0.09603466, 0.65645383, 0.26104928, 0.22029422, 0.90239674, 0.88739363, 0.59368645, 0.39478497, 0.56956551, 0.54400922, 0.40480253, 0.20968343, 0.30994597, 0.04959793, 0.19251744, 0.52135761, 0.25858556, 0.21825577, 0.0371907 , 0.09493446, 0.11676115, 0.95710755, 0.20447907, 0.47587798, 0.51848566, 0.88683689, 0.33567338, 0.55024871, 0.90575771, 0.80171702, 0.09314208, 0.55236301, 0.84181111, 0.15364926, 0.98555741, 0.30371372, 0.05154821, 0.83176642, 0.32537832, 0.75952016, 0.85063717, 0.13447965, 0.2362897 , 0.51945735, 0.90693226, 0.85405705, 0.43393479, 0.91383604, 0.11018263, 0.01436286, 0.39829369, 0.66487798, 0.22727205, 0.13352898, 0.54781443, 0.60894777, 0.35963582, 0.12307987, 0.45876915, 0.02289212, 0.12621582, 0.42680046, 0.83070886, 0.40761464, 0.64063501, 0.20836704, 0.17291092, 0.75085509, 0.1570349 , 0.03859196, 0.6824537 , 0.84710239, 0.89886199, 0.2094902 , 0.58992632, 0.7078019 , 0.16779968, 0.2419259 , 0.73452264, 0.09091338, 0.10095228, 0.62192591, 0.20698809, 0.29000293, 0.20460181, 0.01493776, 0.52598607, 0.16651766, 0.89677289, 0.52880975, 0.67722748, 0.89929363, 0.30735003, 0.40878873, 0.66854908, 0.4131948 , 0.40704838, 0.59434805, 0.13346655, 0.47503708, 0.09459362, 0.48804776, 0.90442952, 0.81338104, 0.17684766, 0.19449489, 0.81657825, 0.76595993, 0.46624606, 0.27780779, 0.95146104, 0.37054388, 0.69655618, 0.39371977])df = pd.DataFrame({'time':[j for j in range(2) for i in range(60)], 'name_1':[j for j in ['A','B','C']*2 for i in range(20)], 'name_2':[j for j in ['B','C','A']*4 for i in range(10)], 'idx':[i for j in range(12) for i in range(10)], 'value':values})out_df = pd.DataFrame()for t in np.unique(df.time): a = df[df.time==t] for n1 in np.unique(df.name_1): b = a[a.name_1==n1] for idx in np.unique(df.idx): c = b[b.idx==idx] # find the minimum index in c of value min_idx = np.argmin(c.value) out_df=out_df.append(c.iloc[min_idx])out_df[:10] time name_1 name_2 idx value 10 0.0 A C 0.0 0.902397 1 0.0 A B 1.0 0.417376 2 0.0 A B 2.0 0.167633 13 0.0 A C 3.0 0.394785 4 0.0 A B 4.0 0.312337 15 0.0 A C 5.0 0.544009 6 0.0 A B 6.0 0.096035 17 0.0 A C 7.0 0.209683 8 0.0 A B 8.0 0.261049 19 0.0 A C 9.0 0.049598But this is really slow on the 4Million rows - of cause. How to speed this up?I tried groupby, but unfortunately this behaves not as expected:If I take this DataFrame c:print(c) time name_1 name_2 idx value0 0 A B 0 0.92732310 0 A C 0 0.902397groupby should select the second row since value is the minimum here. 
However groupby behaves different:c.groupby(by=['time','name_1','idx']).apply(np.min) time name_1 name_2 idx valuetime name_1 idx 0 A 0 0 A B 0 0.902397The minimum value is correct, but name_2 should be C not B.Any suggestions? | you could try to use idxmin() and use the following line of code:out_df = df.loc[df.loc[:,['time','name_1','idx','value']].groupby(by=['time','name_1','idx']).idxmin()['value'], :] |
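The same selection can be written a little more directly by letting the grouped value column report the row labels of its minima and then indexing the full frame with them (same df as above):

out_df = df.loc[df.groupby(['time', 'name_1', 'idx'])['value'].idxmin()]

This keeps name_2 from the winning row, which avoids the column-wise behaviour of apply(np.min) shown in the question.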
Convert a data frame in which one column contains array of numbers as string to a json file I'd like to convert a data frame into a json file. One of the columns of the data frame contains time series as a string. Thus, the final json looks like this:[{"...":"...","Dauer":"24h","Wertereihe":"8619.0,9130.0,8302.0,8140.0"}, {...}, {...}]Is it possible to save the df to a json file in such a way that "Wertereihe" is an array of numbers? This would give: [{"...":"...","Dauer":"24h","Wertereihe":[8619.0,9130.0,8302.0,8140.0]}, {...}, {...}]I used the following snippet to save the df to a json file:df.to_json(jsonFile, orient = "records") | IIUC, you need:df['Wertereihe'] = df['Wertereihe'].apply(lambda x: list(map(float, x.split(','))))df.to_json(jsonFile, orient = "records") |
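A quick round-trip check (the file name is just a placeholder) that the column really ends up as a JSON array after the conversion above:

df.to_json('dauer.json', orient='records')
print(pd.read_json('dauer.json')['Wertereihe'].iloc[0])   # e.g. [8619.0, 9130.0, 8302.0, 8140.0]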
Comparing scalars to Numpy arrays What I am trying to do is make a table based on a piece-wise function in Python. For example, say I wrote this code:import numpy as npfrom astropy.table import Table, Columnfrom astropy.io import asciix = np.array([1, 2, 3, 4, 5])y = x * 2data = Table([x, y], names = ['x', 'y'])ascii.write(data, "xytable.dat")xytable = ascii.read("xytable.dat")print xytableThis works as expected, it prints a table that has x values 1 through 5 and y values 2, 4, 6, 8, 10. But, what if I instead want y to be x * 2 only if x is 3 or less, and y to be x + 2 otherwise? If I add:if x > 3: y = x + 2it says: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()How do I code my table so that it works as a piece-wise function? How do I compare scalars to Numpy arrays? | You can possibly use numpy.where():In [196]: y = np.where(x > 3, x + 2, y)In [197]: yOut[197]: array([2, 4, 6, 6, 7])The code above gets the job done in a fully vectorized manner. This approach is generally more efficient (and arguably more elegant) than using list comprehensions and type conversions. |
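Dropped into the question's example, the piecewise rule becomes a one-liner; the rest of the table-building code stays unchanged:

x = np.array([1, 2, 3, 4, 5])
y = np.where(x > 3, x + 2, x * 2)   # x * 2 where x <= 3, x + 2 otherwise
# y -> array([2, 4, 6, 6, 7])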
make correlation plot on time series data in python I want to see a correlation on a rolling week basis in time series data. The reason because I want to see how rolling correlation moves each year. To do so, I tried to use pandas.corr(), pandas.rolling_corr() built-in function for getting rolling correlation and tried to make line plot, but I couldn't correct the correlation line chart. I don't know how should I aggregate time series for getting rolling correlation line chart. Does anyone knows any way of doing this in python? Is there any workaround to get rolling correlation line chart from time series data in pandas? any idea?my attempt:I tried of using pandas.corr() to get correlation but it was not helpful to generate rolling correlation line chart. So, here is my new attempt but it is not working. I assume I should think about the right way of data aggregation to make rolling correlation line chart.import pandas as pdimport matplotlib.pyplot as pltimport seaborn as snsurl = 'https://gist.githubusercontent.com/adamFlyn/eb784c86c44fd7ed3f2504157a33dc23/raw/79b6aa4f2e0ffd1eb626dffdcb609eb2cb8dae48/corr.csv'df = pd.read_csv(url)df['date'] = pd.to_datetime(df['date'])def get_corr(df, window=4): dfs = [] for key, value in df: value["ROLL_CORR"] = pd.rolling_corr(value["prod_A_price"],value["prod_B_price"], window) dfs.append(value) df_final = pd.concat(dfs) return df_finalcorr_df = get_corr(df, window=12)fig, ax = plt.subplots(figsize=(7, 4), dpi=144)sns.lineplot(x='week', y='ROLL_CORR', hue='year', data=corr_df,alpha=.8)plt.show()plt.close()doing this way is not working to me. By doing this, I want to see how the rolling correlations move each year. Can anyone point me out possible of doing rolling correlation line chart from time-series data in python? any thoughts?desired outputhere is the desired rolling correlation line chart that I want to get. Note that desired plot was generated from MS excel. I am wondering is there any possible way of doing this in python? Is there any workaround to get a rolling correlation line chart from time-series data in python? how should I correct my current attempt to get the desired output? any thoughts? | Using your code and description as a starting point.Panda's Rolling class has an apply function which can be leveraged (https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.window.rolling.Rolling.apply.html#pandas.core.window.rolling.Rolling.apply)Two tricks are involved to make the code work:Accessing the whole row in the applied function (Pandas rolling apply using multiple columns)We call the rolling function on a pandas.Series (here df['week']) to avoid going the applied function once per columnimport pandas as pdimport matplotlib.pyplot as pltimport seaborn as snsurl = 'https://gist.githubusercontent.com/adamFlyn/eb784c86c44fd7ed3f2504157a33dc23/raw/79b6aa4f2e0ffd1eb626dffdcb609eb2cb8dae48/corr.csv'df = pd.read_csv(url)def get_corr(ser): rolling_df = df.loc[ser.index] return rolling_df['prod_A_price'].corr(rolling_df['prod_B_price'])df['ROLL_CORR'] = df['week'].rolling(4).apply(get_corr)number_years = 3for week, df_week in df.groupby('week'): df = df.append({ 'week': week, 'year': f'{number_years} year avg', 'ROLL_CORR': df_week.sort_values(by='date').head(number_years)['ROLL_CORR'].mean() }, ignore_index=True)fig, ax = plt.subplots(figsize=(7, 4), dpi=144)sns.lineplot(x='week', y='ROLL_CORR', hue='year', data=df,alpha=.8)plt.show()plt.close()You'll find here the generated image by seabornWith the 3 year average |
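For the rolling correlation itself, current pandas also has a direct method that replaces the long-removed pd.rolling_corr used in the question; a sketch on the same df, with the same 4-week window as the answer above:

df = df.sort_values('date')
df['ROLL_CORR'] = df['prod_A_price'].rolling(4).corr(df['prod_B_price'])

The per-year lines can then be drawn exactly as in the seaborn call above; if the correlation should not bleed across year boundaries, apply the same two lines per group via groupby('year') first.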
Need to sort a nested tuple with numbers I trying to sort a tuple as belowinput: ROI: [[191 60 23 18] [143 60 23 19] [ 95 52 24 21] [237 51 24 21] [ 47 38 27 22] [281 35 25 22] [ 4 17 26 24] [324 13 22 21]]Expected Output = S_ROI: [[4 17 26 24] [47 38 27 22] [ 95 52 24 21] [143 60 23 19] [ 191 60 23 18] [237 51 24 21] [281 35 25 22] [324 13 22 21]]I have got intermediate arraycolumn=[191 143 95 237 47 281 4 324]I have tried this - But ROI is getting updated inside loopsort_index = np.argsort(column) column.sort()sorted_led_ROI=ROI;index=0 for y in sort_index: sorted_led_ROI[index]=ROI[y] index =index+1 print('sorted_led_ROI:', sorted_led_ROI)Result:sorted_led_ROI: [[ 4 17 26 24] [ 47 38 27 22] [ 95 52 24 21] [ 47 38 27 22] [ 4 17 26 24] [ 47 38 27 22] [ 47 38 27 22] [324 13 22 21]]help me out to sort this in python using np or cv | Do you mean just this:print(ROI[ROI[:,0].argsort()])Output:[[ 4 17 26 24] [ 47 38 27 22] [ 95 52 24 21] [143 60 23 19] [191 60 23 18] [237 51 24 21] [281 35 25 22] [324 13 22 21]] |
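If two boxes can share the same x value and you also want a deterministic tie-break on the second column, np.lexsort does the multi-key version of the same idea (keys are listed from least to most significant):

order = np.lexsort((ROI[:, 1], ROI[:, 0]))   # sort by column 0, then by column 1
print(ROI[order])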
How to get x_train and y_train from ImageDataGenerator? I am working on an image classification problem and I made a Y Network for it. A Y Network is a type of neural network which has two inputs and one output. If we want to fit our Tensorflow model, we have to feed x_train and y_train to model.fit(), like this:

model.fit([x_train, x_train], y_train, epochs=100, batch_size=64)

But how do I get x_train and y_train if I got my data from ImageDataGenerator? Like this:

train_generator = train_datagen.flow_from_dataframe(... , batch_size=64, ...)

I tried getting x_train and y_train by this method:

x_train, y_train = train_generator.next()

but the resulting x_train and y_train contain only 64 images, and I want all my 8644 images. I cannot increase batch_size to 8644 because it will need more memory and Google Colab will crash. What should I do? | You can get the list of all images and labels from:

class_dict = train_generator.class_indices
labels = train_generator.labels
file_names = train_generator.filenames

The class dictionary is useful to correlate the class index to the class name; it is of the form {class name: index}. I find it useful to reverse it to get a dictionary of the form {index: class name}, using the code below:

new_dict = {}
for key, value in class_dict.items():
    new_dict[value] = key

So when you do predictions and get the index of the prediction with index = np.argmax(p), you can get the corresponding class name from:

class_name = new_dict[index]
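That said, you do not actually need the whole x_train and y_train in memory to train the Y network. A thin wrapper (my own sketch, not part of the answer above, reusing train_generator and model from the question) can duplicate each generator batch for the two inputs, so Colab only ever holds one batch of 64 images at a time:

def two_input_generator(gen):
    # Keras iterators loop forever, so this wrapper does too
    for x_batch, y_batch in gen:
        yield [x_batch, x_batch], y_batch

model.fit(two_input_generator(train_generator),
          steps_per_epoch=len(train_generator),
          epochs=100)

With TF 2.x, model.fit accepts a Python generator directly; on older versions the equivalent call is model.fit_generator.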
problem with importing @tensorflow/tfjs-node while working with face-api.js package (node.js) i use @tensorflow/tfjs-node package for face-api.js package to speed up things (as they said )that is my code : // import nodejs bindings to native tensorflow,// not required, but will speed up things drastically (python required)require('@tensorflow/tfjs-node');// implements nodejs wrappers for HTMLCanvasElement, HTMLImageElement, ImageDataconst { loadImage,Canvas, Image, ImageData } = require('canvas')const faceapi = require('face-api.js');// patch nodejs environment, we need to provide an implementation of// HTMLCanvasElement and HTMLImageElementfaceapi.env.monkeyPatch({ Canvas, Image, ImageData })// patch nodejs environment, we need to provide an implementation of// HTMLCanvasElement and HTMLImageElementfaceapi.env.monkeyPatch({ Canvas, Image, ImageData })Promise.all([ faceapi.nets.ssdMobilenetv1.loadFromDisk('./models'), faceapi.nets.faceRecognitionNet.loadFromDisk('./models'), faceapi.nets.faceLandmark68Net.loadFromDisk('./models')]).then(async () => { const image1= await loadImage("https://enigmatic-waters-76106.herokuapp.com/1.jpeg") const image2= await loadImage("https://enigmatic-waters-76106.herokuapp.com/8.jpeg") const result = await faceapi.detectSingleFace(image1).withFaceLandmarks() .withFaceDescriptor() const singleResult = await faceapi .detectSingleFace(image2) .withFaceLandmarks() .withFaceDescriptor() const labeledDescriptors = [ new faceapi.LabeledFaceDescriptors( 'saied', [result.descriptor] ) ] const faceMatcher = new faceapi.FaceMatcher(labeledDescriptors) const bestMatch = faceMatcher.findBestMatch(singleResult.descriptor) console.log(labeledDescriptors[0].descriptors)})and when i run the code i get this errorTypeError: forwardFunc_1 is not a functionat G:\test\node_modules@tensorflow\tfjs-core\dist\tf-core.node.js:3166:55at G:\test\node_modules@tensorflow\tfjs-core\dist\tf-core.node.js:2989:22at Engine.scopedRun (G:\test\node_modules@tensorflow\tfjs-core\dist\tf-core.node.js:2999:23)at Engine.tidy (G:\test\node_modules@tensorflow\tfjs-core\dist\tf-core.node.js:2988:21)at kernelFunc (G:\test\node_modules@tensorflow\tfjs-core\dist\tf-core.node.js:3166:29)at G:\test\node_modules@tensorflow\tfjs-core\dist\tf-core.node.js:3187:27at Engine.scopedRun (G:\test\node_modules@tensorflow\tfjs-core\dist\tf-core.node.js:2999:23)at Engine.runKernelFunc (G:\test\node_modules@tensorflow\tfjs-core\dist\tf-core.node.js:3183:14)at mul_ (G:\test\node_modules\face-api.js\node_modules@tensorflow\tfjs-core\dist\ops\binary_ops.js:327:28)at Object.mul (G:\test\node_modules\face-api.js\node_modules@tensorflow\tfjs-core\dist\ops\operation.js:46:29)(Use node --trace-warnings ... to show where the warning was created)(node:3496) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag --unhandled-rejections=strict (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1)(node:3496) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. 
In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit codewhen i delete " require('@tensorflow/tfjs-node'); " the code run prefectly but i need to import @tensorflow/tfjs-node to make the proccess fasternode:v14.15.4npm:6.14.10@tensorflow/tfjs-node:v3.0.0Python 2.7.15 (required for @tensorflow/tfjs-node)face-api.js:v0.22.2thanks in advance for :) | As explained in this github issueThe version of face-api.js you are using is not compatible with tfjs 2.0+ or 3.0+, only obsolete 1.x.Why it worked before you added tfjs-node? because face-api.js actually includes bundled version of tfjs-core 1.x. Once you added tfjs-node, it overrode global tf namespace, but its a much newer version and not compatible.You must install obsolete tfjs-node 1.x OR follow the pointers they give to use a newer port of face-api.js that supports TF 2.0. |
Python + Pandas + dataframe : couldn't append one dataframe to another I have two big CSV files. I have converted them to Pandas dataframes. Both of them have columns of same names and in same order : event_name, category, category_id, description. I want to append one dataframe to another, and, finally want to write the resultant dataframe to a CSV. I wrote a code for that: #appendind a new dataframe to the older dataframe data = pd.read_csv("dataset.csv") data1 = pd.read_csv("dataset_new.csv") dfs = [data, data1] pd.concat([df.squeeze() for df in dfs], ignore_index=True) dfs = pd.DataFrame(columns=['event_name','category', 'category_id', 'description']) dfs.to_csv('dataset_append.csv', encoding='utf-8', index=False)I wanted to show you the output of print(dfs) but I couldn't because Stackoverflow is showing following error because the output is too long:Body is limited to 30000 characters; you entered 32132.Would you please tell me a code snippet which you use succesfully to append Pandas dataframe?Edit1:print(dfs)outout:---------------------------------------------------------[ Unnamed: 10 Unnamed: 100 Unnamed: 101 Unnamed: 102 Unnamed: 103 \0 NaN NaN NaN NaN NaN 1 NaN NaN NaN NaN NaN 2 NaN NaN NaN NaN NaN 3 NaN NaN NaN NaN NaN 4 NaN NaN NaN NaN NaN 5 NaN NaN NaN NaN NaN 6 NaN NaN NaN NaN NaN 7 NaN NaN NaN NaN NaN 8 NaN NaN NaN NaN NaN 9 NaN NaN NaN NaN NaN 10 NaN NaN NaN NaN NaN 11 NaN NaN NaN NaN NaN 12 NaN NaN NaN NaN NaN 13 NaN NaN NaN NaN NaN 14 NaN NaN NaN NaN NaN 15 NaN NaN NaN NaN NaN 16 NaN NaN NaN NaN NaN 17 NaN NaN NaN NaN NaN 18 NaN NaN NaN NaN NaN 19 NaN NaN NaN NaN NaN 20 NaN NaN NaN NaN NaN 21 NaN NaN NaN NaN NaN 22 NaN NaN NaN NaN NaN 23 NaN NaN NaN NaN NaN 24 NaN NaN NaN NaN NaN 25 NaN NaN NaN NaN NaN 26 NaN NaN NaN NaN NaN 27 NaN NaN NaN NaN NaN 28 NaN NaN NaN NaN NaN 29 NaN NaN NaN NaN NaN ... ... ... ... ... ... 1159 NaN NaN NaN NaN NaN 1160 NaN NaN NaN NaN NaN 1161 NaN NaN NaN NaN NaN 1162 NaN NaN NaN NaN NaN Unnamed: 104 Unnamed: 105 Unnamed: 106 Unnamed: 107 Unnamed: 108 \0 NaN NaN NaN NaN NaN 1 NaN NaN NaN NaN NaN 2 NaN NaN NaN NaN NaN 3 NaN NaN NaN NaN NaN 4 NaN NaN NaN NaN NaN 5 NaN NaN NaN NaN NaN 6 NaN NaN NaN NaN NaN 7 NaN NaN NaN NaN NaN ... ... ... ... ... ... 1161 NaN NaN NaN NaN NaN 1162 NaN NaN NaN NaN NaN ... Unnamed: 94 \0 ... NaN 1 ... NaN 2 ... NaN 3 ... NaN 4 ... NaN 5 ... NaN 6 ... NaN 7 ... NaN 8 ... NaN 9 ... NaN 10 ... NaN 11 ... NaN 12 ... NaN 13 ... NaN 14 ... NaN 15 ... NaN 16 ... NaN 17 ... NaN 18 ... NaN 19 ... NaN 20 ... NaN 21 ... NaN 22 ... NaN 23 ... NaN 24 ... NaN 25 ... NaN 26 ... NaN 27 ... NaN 28 ... NaN 29 ... NaN ... ... ... 1133 ... NaN 1134 ... NaN 1135 ... NaN 1136 ... NaN 1137 ... NaN 1138 ... NaN 1139 ... NaN 1140 ... NaN 1141 ... NaN 1142 ... NaN 1143 ... NaN 1144 ... NaN 1145 ... NaN 1146 ... NaN 1147 ... NaN 1148 ... NaN 1149 ... NaN 1150 ... NaN 1151 ... NaN 1152 ... NaN 1153 ... NaN 1154 ... NaN 1155 ... NaN 1156 ... NaN 1157 ... NaN 1158 ... NaN 1159 ... NaN 1160 ... NaN 1161 ... NaN 1162 ... NaN Unnamed: 95 Unnamed: 96 Unnamed: 97 Unnamed: 98 Unnamed: 99 \0 NaN NaN NaN NaN NaN 1 NaN NaN NaN NaN NaN 2 NaN NaN NaN NaN NaN 3 NaN NaN NaN NaN NaN 4 NaN NaN NaN NaN NaN ... ... ... ... ... ... 
1133 NaN NaN NaN NaN NaN 1134 NaN NaN NaN NaN NaN 1135 NaN NaN NaN NaN NaN 1136 NaN NaN NaN NaN NaN category category_id \0 Business 2 1 stage shows 33 2 Literature 15 3 Science & Technology 22 4 health 11 5 Science & Technology 22 6 Outdoor 19 7 stage shows 33 8 nightlife 30 9 fashion & lifestyle 6 10 Government & Activism 25 11 stage shows 33 12 Religion & Spirituality 21 13 Outdoor 19 14 management 17 15 Science & Technology 22 16 nightlife 30 17 Outdoor 19 18 FAMILy & kids 5 19 fashion & lifestyle 6 20 FAMILy & kids 5 21 games 10 22 hobbies 32 23 hobbies 32 24 Religion & Spirituality 21 25 health 11 26 fashion & lifestyle 6 27 career & education 31 28 health 11 29 arts 1 ... ... ... 1133 Sports & Fitness 23 1134 Sports & Fitness 23 1135 Sports & Fitness 23 1136 Sports & Fitness 23 1137 Sports & Fitness 23 1138 Sports & Fitness 23 1139 Sports & Fitness 23 1140 Sports & Fitness 23 1141 Sports & Fitness 23 1142 Sports & Fitness 23 1143 Sports & Fitness 23 1144 Sports & Fitness 23 1145 Sports & Fitness 23 1146 Sports & Fitness 23 1147 Sports & Fitness 23 1148 Sports & Fitness 23 1149 Sports & Fitness 23 1150 Sports & Fitness 23 1151 Sports & Fitness 23 1152 Sports & Fitness 23 1153 Sports & Fitness 23 1154 Sports & Fitness 23 1155 Sports & Fitness 23 1156 Sports & Fitness 23 1157 Sports & Fitness 23 1158 Sports & Fitness 23 1159 Sports & Fitness 23 1160 Sports & Fitness 23 1161 Sports & Fitness 23 1162 Sports & Fitness 23 description \0 Josh Talks in partnership with Facebook is all... 1 Unwind on the strums of Guitar & immerse your... 2 Book review for grade 3 and above learners. 3 ... 3 ..About Organizer:.This is the official page f... 4 Blood Donation is organized under the banner o... 5 A day "Etched with Innovation and Learning" to... 6 Our next destination for Fun with us is "Goa" ... 7 Enjoy the Soulful and Unplugged Performance of... 8 Get ready with your dance shoes on as our favo... 9 FESTIVE HUES -- a fashion and lifestyle exhibi... 10 On Aug. 8, Dr. Ambedkar presides over the Depr... 11 It's A Rapper Boys..And M Write A New Rap song... 12 The Spiritual Makeover..A weekend workshop tha... 13 Our next destination for Fun with us is "Goa" ... 14 Project Management is all about getting the th... 15 World Conference Next Generation Testing 2018 ... 16 ..About Organizer:.Whitefield is now #Sherlocked! 17 On occasion of 72th Independence Day , Udaan O... 18 *Smilofy Special Superstar*.A Talent hunt for ... 19 ITEEHA is coming back to Bengaluru, after a fa... 20 This is an exciting course for kids to teach t... 21 ..About Organizer:.PPG Lounge is a next genera... 22 Touch Feel Try & Buy the latest #car and #bike... 23 Sniper Media is organising an exclusive semina... 24 He has all sorts of powers and able solve any ... 25 registration fee 50/₹ we r providing free c... 26 World Biggest Pageant Miss & Mrs World Queen a... 27 ..About Organizer:.Canam Consultants - India's... 28 Innopharm is an effort to bring innovations in... 29 The first Central India Art and Design Expo - ... ... ... 1133 As the cricket fever grips the country again, ... 1134 An evening of fun, food, drinks and rooting fo... 1135 The time has come, who will take their place S... 1136 Do you want to prove that Age is not a barrier... 1137 We Invite All The Corporate Companies To Be A ... 1138 PlayTM happy to announce you that conducting o... 1139 A Mix of fun rules and cricketing skills. Afte... 1140 Shuttle Swap presents Singles, Doubles and Mix... 1141 Yonex Mavis 350 Shuttle will be used State/Nat... 
1142 Light up the FIFA World Cup with Bud90 Match S... 1143 We are charmed to launch the SVSEVENTZ.COM 5-A... 1144 We corephysio FC invite you for our first foot... 1145 After completing the 2nd season of Bangalore S... 1146 As the cricket fever grips the country again, ... 1147 Introducing BOX Cricket Super 6 Corporate Cric... 1148 After the sucess of '1st Matt & Mudd T20 Leagu... 1149 Hi All, It is my pleasure to officially announ... 1150 Sign up: Get early updates, free movie voucher... 1151 About VIVO Pro Kabaddi 2018: A new season of t... 1152 The Hero Indian Super League (ISL) is India's ... 1153 Limited time offer: Free Paytm Movie Voucher w... 1154 The 5th edition of the Indian Super League is ... 1155 Calling all Jamshedpur FC fans! Here's your ch... 1156 Empower yourself and progress towards a health... 1157 Making people happy when they feel that its en... 1158 LOVE YOGA ?- but too busy with work during the... 1159 The coolest way to tour the city ! Absorb the ... 1160 Ready to be a part of India's Biggest Walkatho... 1161 The event will comprise of the following Open ... 1162 RUN FOR CANCER CHILDREN On world Cancer Day 3r... event_name 0 Josh Talks Hyderabad 2018 1 Guitar Night With Ashmik Patil 2 Book Review - August 2018 - 2 3 Csaw'18 4 Blood donation camp 5 Rajasthan Youth Innovation and Technical Intel... 6 Goa – Fun All the Way!!! - Mom N Kids 7 The AnshUdhami Project LIVE at Tales & Spirits... 8 Friday Fiesta featuring Pearl 9 FESTIVE HUES 10 Nagpur 11 Yo Yo Deep SP The Rapper 12 The Spiritual Makeover 13 Goa Fun All the Way - Women Only group Tour 14 MS Project 2016 - A one day seminar 15 World Conference Next Generation Testing 16 Weekend Booster - Happy Hour 17 Ladies Only Camping : Freedom To Travel (Seaso... 18 Special superstar 19 Malaysian Batik Workshop 20 EQ Enhancement Course (5-10 years) 21 CS:GO Tournament 2018 - PPGL 22 Auto Mall at Mantri Square Bangalore 23 A Seminar by Ojas Rajani (Bollywood celebrity ... 24 rishikesh katti greatest Spirituality guru of ... 25 free BMD camp held on 26 jan 2018 26 Miss and Mrs Bhopal Madhya Pradesh India World... 27 USA, Canada & Singapore Application Days 2018 28 Innopharm 3 29 Kalasrishti Art and Design Expo ... ... 1133 Asia cup live screening at la casa Brewery+ ki... 1134 Asia Cup 2018 live screening at La Casa Brewer... 1135 FIFA FINAL AT KORAMANGALA TETTO - With #fifa#f... 1136 Womenasia Indoor Cricket Championship 1137 Switch Hit Corporate Cricket Tournament 1138 PlayTM Sports Arena Box Cricket league 1139 The Box Cricket League Edition II (16-17-18 No... 1140 Shuttle Swap Badminton Tournament - With Singl... 1141 SPARK BADMINTON LEAGUE - OCT 14th 2018 1142 Bud90 Match Screenings at Loft38 1143 5 A-Side Football Tournament 1144 5 vs 5 Football league - With Back 2 Track events 1145 Bangalore Sports Carnival Table Tennis Juniors... 1146 Asia cup live screening at la casa Brewery+ ki... 1147 Super 6 Corporate Cricket League 1148 Coolulu is organizing MATT & MUD T20 Cricket L... 1149 United Sportzs Pure Corporate Cricket season-10 1150 Sign up for updates on the VIVO Pro Kabaddi Se... 1151 VIVO Pro Kabaddi - UP Yoddha vs Patna Pirates ... 1152 HERO Indian Super League 2018-19: Kerala Blast... 1153 HERO ISL: FC Goa Memberships 1154 Hero Indian Super League 2018-19: Delhi Dynamo... 1155 HERO Indian Super League 2018-19: Jamshedpur F... 
1156 Yoga Therapy Classes in Bangalore 1157 Saree Walkathon 1158 Weekend Yoga Teachers Training Program 1159 Bangalore Walks 1160 Oxfam Trailwalker Bengaluru 1161 TAD Pune 2018 (Triathlon Aquathlon Duathlon) 1162 RUN FOR CANCER CHILDREN [1163 rows x 241 columns], event_name category \0 Musical Camping at Dahanu Chiku farm outdoor 1 Adventure Camping at Wada outdoor 2 Kaas Plateau Tour outdoor 3 Pawna Lake Camping, kevre, Lonavala outdoor 4 Night Trek and Camping at Korigad Fort outdoor 5 PARAMOTORING outdoor 6 WATERFALL TREK & BEACH CAMPING (NAGALAPURAM: N... outdoor 7 Happiest Land On Earth - Bhutan outdoor 8 4 Days serial hiking in Sahyadris - Sep 29 to ... outdoor 9 Ride To Valparai outdoor 10 Dzongri Trek - Gateway to Kanchenjunga Mountain outdoor 11 Skandagiri Night Trek With Camping outdoor 12 Kalsubai Trek | Plan The Unplanned outdoor 13 Bike N Hike Skandagiri outdoor 14 Unplanned Stories - Episode 6 - Travel Tales outdoor 15 Feast on authentic flavors from Goa! outdoor 16 The Boot Camp outdoor 17 The HandleBards: Romeo and Juliet at Ranga Sha... outdoor 18 Workshop on Metagenomic Sequencing on the Grid... Science & Technology 19 Aerovision Science & Technology 20 Electric Vehicle Technology Workshop Science & Technology 21 BPM Strategy Summit Science & Technology 22 Summit of Interior Designers & Architecture Science & Technology 23 SMART ASIA India Expo& Summit Science & Technology 24 A Smart City Life Exhibition Science & Technology 25 OPEN SOURCE INDIA Science & Technology 26 SolarRoofs India Bangalore Science & Technology 27 International Conference on Innovative Researc... Science & Technology 28 International Conference on Business Managemen... Science & Technology 29 DevOn Summit Bangalore - Digital Transformations Science & Technology .. ... ... 144 Asia cup live screening at la casa Brewery+ ki... Sports & Fitness 145 Asia Cup 2018 live screening at La Casa Brewer... Sports & Fitness 146 FIFA FINAL AT KORAMANGALA TETTO - With #fifa#f... Sports & Fitness 147 Womenasia Indoor Cricket Championship Sports & Fitness 148 Switch Hit Corporate Cricket Tournament Sports & Fitness 149 PlayTM Sports Arena Box Cricket league Sports & Fitness 150 The Box Cricket League Edition II (16-17-18 No... Sports & Fitness 151 Shuttle Swap Badminton Tournament - With Singl... Sports & Fitness 152 SPARK BADMINTON LEAGUE - OCT 14th 2018 Sports & Fitness 153 Bud90 Match Screenings at Loft38 Sports & Fitness s 170 Bangalore Walks Sports & Fitness 171 Oxfam Trailwalker Bengaluru Sports & Fitness 172 TAD Pune 2018 (Triathlon Aquathlon Duathlon) Sports & Fitness 173 RUN FOR CANCER CHILDREN Sports & Fitness category_id description \0 19 Dear All Camping Lovers, Come take camping exp... 1 19 Our Adventure campsite at Wada is developed wi... 2 19 Type: Eco Tour Height: 3937 FT above MSL (Appr... 3 19 Our Pawna Lake Camping site is located near Ke... 4 19 Type: Hill Fort Height: 3050 Feet above MSL (A... 23 22 Making 'Smart Cities Mission' a Reality The SM... 24 22 A Smart City Life A Smart City Life Exhibition... 25 22 Asia's No. 1 Convention on Open Source Started... 26 22 The conference will offer an excellent platfor... 27 22 Provides a leading forum for the presentation ... 28 22 Provide opportunity for the global participant... 29 22 The biggest event about Digital Transformation... .. ... ... 144 23 As the cricket fever grips the country again, ... 145 23 An evening of fun, food, drinks and rooting fo... 146 23 The time has come, who will take their place S... 
147 23 Do you want to prove that Age is not a barrier... 148 23 We Invite All The Corporate Companies To Be A ... 149 23 PlayTM happy to announce you that conducting o... 150 23 A Mix of fun rules and cricketing skills. Afte... 151 23 Shuttle Swap presents Singles, Doubles and Mix... 152 23 Yonex Mavis 350 Shuttle will be used State/Nat... 153 23 Light up the FIFA World Cup with Bud90 Match S... 154 23 We are charmed to launch the SVSEVENTZ.COM 5-A... 155 23 We corephysio FC invite you for our first foot... 156 23 After completing the 2nd season of Bangalore S... 157 23 As the cricket fever grips the country again, ... 158 23 Introducing BOX Cricket Super 6 Corporate Cric... 159 23 After the sucess of '1st Matt & Mudd T20 Leagu... 160 23 Hi All, It is my pleasure to officially announ... 161 23 Sign up: Get early updates, free movie voucher... 162 23 About VIVO Pro Kabaddi 2018: A new season of t... 163 23 The Hero Indian Super League (ISL) is India's ... 164 23 Limited time offer: Free Paytm Movie Voucher w... 165 23 The 5th edition of the Indian Super League is ... 166 23 Calling all Jamshedpur FC fans! Here's your ch... 167 23 Empower yourself and progress towards a health... 168 23 Making people happy when they feel that its en... 169 23 LOVE YOGA ?- but too busy with work during the... 170 23 The coolest way to tour the city ! Absorb the ... 171 23 Ready to be a part of India's Biggest Walkatho... 172 23 The event will comprise of the following Open ... 173 23 RUN FOR CANCER CHILDREN On world Cancer Day 3r... Unnamed: 4 Unnamed: 5 0 NaN NaN 1 NaN NaN 2 NaN NaN 3 NaN NaN 4 NaN NaN 24 NaN NaN 25 NaN NaN 26 NaN NaN 27 NaN NaN 28 NaN NaN 29 NaN NaN .. ... ... 144 NaN NaN 145 NaN NaN 146 NaN NaN 147 NaN NaN 148 NaN NaN 149 NaN NaN [174 rows x 6 columns]] | Whats wrong with a simple:pd.concat([df1, df2], ignore_index=True)).to_csv('File.csv', index=False)this will work if they have the same columns.A more verbose way to extract specific columns would be:(pd.concat([df1[['event_name','category', 'category_id', 'description']], df2[['event_name','category', 'category_id', 'description']]], ignore_index=True)).to_csv('File.csv', index=False))Separate Notes: you are initializing a DF with just columns and then outputting that to a CSV.Why are you using .squeeze to convert it to 1-D dataset? |
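Putting the suggestion above back into the question's own setting, a minimal end-to-end sketch (file names taken from the question, keeping only the four shared columns) would be:

import pandas as pd

cols = ['event_name', 'category', 'category_id', 'description']
data = pd.read_csv('dataset.csv', usecols=cols)
data1 = pd.read_csv('dataset_new.csv', usecols=cols)
pd.concat([data, data1], ignore_index=True).to_csv('dataset_append.csv', index=False, encoding='utf-8')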
How to write a program using Numpy to generate and print 5 random numbers between 0 and 1 How can I write a program using Numpy to generate and print 5 random numbers between 0 and 1? |

import numpy as np
numbers = np.random.rand(5)
print(numbers)

np.random.rand will produce a sample from the uniform distribution over [0, 1). If you want to generate numbers from a different interval, let's say [a, b], you can use:

import numpy as np
a = 2
b = 4
numbers = a + np.random.rand(5) * (b - a)
print(numbers)
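Current NumPy documentation also recommends the Generator API over the legacy np.random.* functions; the equivalent sketch is:

import numpy as np

rng = np.random.default_rng()   # optionally pass a seed for reproducible output
print(rng.random(5))            # 5 floats drawn uniformly from [0, 1)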