```
import matplotlib.pyplot as plt
import numpy as np
from mpl_toolkits.mplot3d import Axes3D
import scipy as sp
import sympy as sy

sy.init_printing()
np.set_printoptions(precision=3)
np.set_printoptions(suppress=True)

from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"  # display multiple results

def round_expr(expr, num_digits):
    return expr.xreplace({n: round(n, num_digits) for n in expr.atoms(sy.Number)})
```

# <font face="gotham" color="purple"> Matrix Operations

Matrix operations are straightforward; the addition properties are as follows:

1. $\pmb{A}+\pmb B=\pmb B+\pmb A$
2. $(\pmb{A}+\pmb{B})+\pmb C=\pmb{A}+(\pmb{B}+\pmb{C})$
3. $c(\pmb{A}+\pmb{B})=c\pmb{A}+c\pmb{B}$
4. $(c+d)\pmb{A}=c\pmb{A}+d\pmb{A}$
5. $c(d\pmb{A})=(cd)\pmb{A}$
6. $\pmb{A}+\pmb{0}=\pmb{A}$, where $\pmb{0}$ is the zero matrix
7. For any $\pmb{A}$, there exists a $-\pmb A$, such that $\pmb A+(-\pmb A)=\pmb0$.

These properties are straightforward enough that no proofs are provided here. The matrix multiplication properties are:

1. $\pmb A(\pmb{BC})=(\pmb{AB})\pmb C$
2. $c(\pmb{AB})=(c\pmb{A})\pmb{B}=\pmb{A}(c\pmb{B})$
3. $\pmb{A}(\pmb{B}+\pmb C)=\pmb{AB}+\pmb{AC}$
4. $(\pmb{B}+\pmb{C})\pmb{A}=\pmb{BA}+\pmb{CA}$

Note that we need to differentiate two kinds of multiplication, <font face="gotham" color="red">Hadamard multiplication</font> (element-wise multiplication) and <font face="gotham" color="red">matrix multiplication</font>:

```
A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
A*B  # this is the Hadamard (element-wise) product
A@B  # this is the matrix product
```

The matrix multiplication rule is

```
np.sum(A[0,:]*B[:,0])  # (1, 1)
np.sum(A[1,:]*B[:,0])  # (2, 1)
np.sum(A[0,:]*B[:,1])  # (1, 2)
np.sum(A[1,:]*B[:,1])  # (2, 2)
```

## <font face="gotham" color="purple"> SymPy Demonstration: Addition

Let's define all the letters as symbols in case we need them.

```
a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p, q, r, s, t, u, v, w, x, y, z = sy.symbols('a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p, q, r, s, t, u, v, w, x, y, z', real = True)

A = sy.Matrix([[a, b, c], [d, e, f]])
A + A
A - A

B = sy.Matrix([[g, h, i], [j, k, l]])
A + B
A - B
```

## <font face="gotham" color="purple"> SymPy Demonstration: Multiplication

The matrix multiplication rules can be clearly understood by using symbols.

```
A = sy.Matrix([[a, b, c], [d, e, f]])
B = sy.Matrix([[g, h, i], [j, k, l], [m, n, o]])
A
B
AB = A*B; AB
```

## <font face="gotham" color="purple"> Commutability

Matrix multiplication usually does not commute, that is $\pmb{AB} \neq \pmb{BA}$. For instance, consider $\pmb A$ and $\pmb B$:

```
A = sy.Matrix([[3, 4], [7, 8]])
B = sy.Matrix([[5, 3], [2, 1]])
A*B
B*A
```

How do we find commutable matrices?

```
A = sy.Matrix([[a, b], [c, d]])
B = sy.Matrix([[e, f], [g, h]])
A*B
B*A
```

To make $\pmb{AB} = \pmb{BA}$, we require $\pmb{AB} - \pmb{BA} = 0$

```
M = A*B - B*A
M
```

\begin{align} b g - c f&=0 \\ a f - b e + b h - d f&=0\\ - a g + c e - c h + d g&=0 \\ - b g + c f&=0 \end{align}

If we treat $a, b, c, d$ as coefficients of the system, we can extract an augmented matrix

```
A_aug = sy.Matrix([[0, -c, b, 0], [-b, a-d, 0, b], [c, 0, d -a, -c], [0, c, -b, 0]]); A_aug
```

Perform Gauss-Jordan elimination until the matrix is in reduced row echelon form.
```
A_aug.rref()
```

The general solution is

\begin{align} e - \frac{a-d}{c}g - h &=0\\ f - \frac{b}{c}g & =0\\ g &= free\\ h & =free \end{align}

If we set the coefficients $a = 10, b = 12, c = 20, d = 8$, i.e. $\pmb A = \left[\begin{matrix}10 & 12\\20 & 8\end{matrix}\right]$, then the general solution becomes

\begin{align} e - .1g - h &=0\\ f - .6g & =0\\ g &= free\\ h & =free \end{align}

Then try a special solution with $g = h = 1$:

\begin{align} e &=1.1\\ f & =.6\\ g &=1 \\ h & =1 \end{align}

This gives a matrix that <font face="gotham" color="red">commutes with $A$</font>, which we denote $\pmb C$.

```
C = sy.Matrix([[1.1, .6], [1, 1]]); C
```

Now we can see that $\pmb{AC}=\pmb{CA}$.

```
A = sy.Matrix([[10, 12], [20, 8]])
A*C
C*A
```

# <font face="gotham" color="purple"> Transpose of Matrices

A matrix $A_{n\times m}$ and its transpose:

```
A = np.array([[1, 2, 3], [4, 5, 6]]); A
A.T  # transpose

A = sy.Matrix([[1, 2, 3], [4, 5, 6]]); A
A.transpose()
```

The properties of the transpose are

1. $(A^T)^T=A$
2. $(A+B)^T=A^T+B^T$
3. $(cA)^T=cA^T$
4. $(AB)^T=B^TA^T$

We can show why the fourth property holds with SymPy:

```
A = sy.Matrix([[a, b], [c, d], [e, f]])
B = sy.Matrix([[g, h, i], [j, k, l]])

AB = A*B
AB_tr = AB.transpose(); AB_tr

A_tr_B_tr = B.transpose()*A.transpose()
A_tr_B_tr

AB_tr - A_tr_B_tr
```

# <font face="gotham" color="purple"> Identity and Inverse Matrices

## <font face="gotham" color="purple"> Identity Matrices

Identity matrix property:

$$ AI=IA = A $$

Let's generate $\pmb I$ and $\pmb A$:

```
I = np.eye(5); I
A = np.around(np.random.rand(5, 5)*100); A
A@I
I@A
```

## <font face="gotham" color="purple"> Elementary Matrix

An elementary matrix is a matrix that can be obtained from a single elementary row operation on an identity matrix. For example:

$$ \left[\begin{matrix}1 & 0 & 0\cr 0 & 1 & 0\cr 0 & 0 & 1\end{matrix}\right]\ \matrix{R_1\leftrightarrow R_2\cr ~\cr ~}\qquad\Longrightarrow\qquad \left[\begin{matrix}0 & 1 & 0\cr 1 & 0 & 0\cr 0 & 0 & 1\end{matrix}\right] $$

The elementary matrix above is created by switching row 1 and row 2, and we denote it $\pmb{E}$. Let's left-multiply $\pmb E$ onto a matrix $\pmb A$. Generate $\pmb A$:

```
A = sy.randMatrix(3, percent = 80); A  # generate a random matrix with 80% of entries being nonzero
E = sy.Matrix([[0, 1, 0], [1, 0, 0], [0, 0, 1]]); E
```

It turns out that left-multiplying $\pmb E$ onto $\pmb A$ switches rows 1 and 2 of $\pmb A$ as well.

```
E*A
```

Adding a multiple of a row onto another row in the identity matrix also gives us an elementary matrix.

$$ \left[\begin{matrix}1 & 0 & 0\cr 0 & 1 & 0\cr 0 & 0 & 1\end{matrix}\right]\ \matrix{~\cr ~\cr R_3-7R_1}\qquad\longrightarrow\left[\begin{matrix}1 & 0 & 0\cr 0 & 1 & 0\cr -7 & 0 & 1\end{matrix}\right] $$

Let's verify with SymPy.

```
A = sy.randMatrix(3, percent = 80); A
E = sy.Matrix([[1, 0, 0], [0, 1, 0], [-7, 0, 1]]); E
E*A
```

We can also show this by an explicit row operation on $\pmb A$.

```
EA = sy.matrices.MatrixBase.copy(A)
EA[2,:] = -7*EA[0,:] + EA[2,:]
EA
```

We will later see an important conclusion about elementary matrix multiplication: an invertible matrix is a product of a series of elementary matrices.

## <font face="gotham" color="purple"> Inverse Matrices

If $\pmb{AB}=\pmb{BA}=\mathbf{I}$, $\pmb B$ is called the inverse of matrix $\pmb A$, denoted $\pmb B= \pmb A^{-1}$. NumPy has the convenient function ```np.linalg.inv()``` for computing inverse matrices.
Generate $\pmb A$:

```
A = np.round(10*np.random.randn(5,5)); A
Ainv = np.linalg.inv(A)
Ainv
A@Ainv
```

The ```-0.``` entries mean there are more digits after the decimal point, which are omitted here.

### <font face="gotham" color="purple"> $[A\,|\,I]\sim [I\,|\,A^{-1}]$ Algorithm

A convenient way of calculating the inverse is to construct an augmented matrix $[\pmb A\,|\,\mathbf{I}]$, then multiply by a series of $\pmb E$'s (elementary row operations) until $\pmb A$ is row reduced, i.e. $\pmb A \rightarrow \mathbf{I}$. Then the $\mathbf{I}$ on the RHS of the augmented matrix will be converted into $\pmb A^{-1}$ automatically. We can show this with SymPy's ```.rref()``` function on the augmented matrix $[A\,|\,I]$.

```
AI = np.hstack((A, I))  # stack the matrix A and I horizontally
AI = sy.Matrix(AI); AI
AI_rref = AI.rref(); AI_rref
```

Extract the RHS block; this is $A^{-1}$.

```
Ainv = AI_rref[0][:,5:]; Ainv  # extract the RHS block
```

I wrote a function to round the float numbers to the $4$th digit, but this is not absolutely necessary.

```
round_expr(Ainv, 4)
```

We can verify that $AA^{-1}=\mathbf{I}$.

```
A = sy.Matrix(A)
M = A*Ainv
round_expr(M, 4)
```

We got $\mathbf{I}$, which means the RHS block is indeed $A^{-1}$.

### <font face="gotham" color="purple"> An Example of Existence of Inverse

Determine the values of $\lambda$ such that the matrix

$$A=\left[ \begin{matrix}3 &\lambda &1\cr 2 & -1 & 6\cr 1 & 9 & 4\end{matrix}\right]$$

is not invertible. Again, we use SymPy to solve the problem.

```
lamb = sy.symbols('lamda')  # SymPy will automatically render this as the LaTeX Greek letter
A = np.array([[3, lamb, 1], [2, -1, 6], [1, 9, 4]])
I = np.eye(3)
AI = np.hstack((A, I))
AI = sy.Matrix(AI)
AI_rref = AI.rref()
AI_rref
```

To make the matrix $A$ invertible, we notice that there is one condition to be satisfied (it appears in every denominator):

\begin{align} -6\lambda -465 &\neq0\\ \end{align}

Solve for $\lambda$.

```
sy.solvers.solve(-6*lamb-465, lamb)
```

Let's test with the determinant. If $|\pmb A|=0$, then the matrix is not invertible. Don't worry, we will come back to this.

```
A = np.array([[3, -155/2, 1], [2, -1, 6], [1, 9, 4]])
np.linalg.det(A)
```

$|\pmb A|$ is practically $0$. The condition is that as long as $\lambda \neq -\frac{155}{2}$, the matrix $A$ is invertible.

### <font face="gotham" color="purple"> Properties of Inverse Matrices

1. If $A$ and $B$ are both invertible, then $(AB)^{-1}=B^{-1}A^{-1}$.
2. If $A$ is invertible, then $(A^T)^{-1}=(A^{-1})^T$.
3. If $A$ and $B$ are both invertible and symmetric such that $AB=BA$, then $A^{-1}B$ is symmetric.

The <font face="gotham" color="red">first property</font> is straightforward:

\begin{align} ABB^{-1}A^{-1}=AIA^{-1}=I=AB(AB)^{-1} \end{align}

The <font face="gotham" color="red">second property</font> is to show

$$ A^T(A^{-1})^T = I $$

We can use the property of the transpose:

$$ A^T(A^{-1})^T=(A^{-1}A)^T = I^T = I $$

The <font face="gotham" color="red">third property</font> is to show

$$ A^{-1}B = (A^{-1}B)^T $$

Again use the property of the transpose:

$$ (A^{-1}B)^{T}=B^T(A^{-1})^T=B(A^T)^{-1}=BA^{-1} $$

We use the $AB = BA$ condition to continue:

\begin{align} AB&=BA\\ A^{-1}ABA^{-1}&=A^{-1}BAA^{-1}\\ BA^{-1}&=A^{-1}B \end{align}

Plugging this into the previous equation, we have

$$ (A^{-1}B)^{T}=BA^{-1}=A^{-1}B $$
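As a quick numeric check of the third property, in the same SymPy style used throughout this notebook, here is a small sketch; the two matrices below are made up for illustration (any pair of invertible, symmetric, commuting matrices works).

```
A = sy.Matrix([[2, 1], [1, 2]])
B = sy.Matrix([[3, 1], [1, 3]])

A*B == B*A           # True: the two symmetric matrices commute
M = A.inv()*B
M == M.transpose()   # True: A^{-1}B is symmetric, as the proof claims
```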
# Exploring Neural Audio Synthesis with NSynth

## Parag Mital

There is a lot to explore with NSynth. This notebook explores just a taste of what's possible including how to encode and decode, timestretch, and interpolate sounds. Also check out the [blog post](https://magenta.tensorflow.org/nsynth-fastgen) for more examples including two compositions created with Ableton Live. If you are interested in learning more, check out my [online course on Kadenze](https://www.kadenze.com/programs/creative-applications-of-deep-learning-with-tensorflow) where we talk about Magenta and NSynth in more depth.

## Part 1: Encoding and Decoding

We'll walk through using the source code to encode and decode some audio. This is the most basic thing we can do with NSynth, and it will take at least about 6 minutes per 1 second of audio to perform on a GPU, though this will get faster! I'll first show you how to encode some audio. This is basically saying, here is some audio, now put it into the trained model. It's like the encoding of an MP3 file. It takes some raw audio, and represents it using some really reduced down representation of the raw audio. NSynth works similarly, but we can actually mess with the encoding to do some awesome stuff. You can, for instance, mix it with other encodings, or slow it down, or speed it up. You can potentially even remove parts of it, mix many different encodings together, and hopefully just explore ideas yet to be thought of. After you've created your encoding, you have to just generate, or decode it, just like what an audio player does to an MP3 file.

First, to install Magenta, follow their setup guide here: https://github.com/tensorflow/magenta#installation - then import some packages:

```
import os
import numpy as np
import matplotlib.pyplot as plt
from magenta.models.nsynth import utils
from magenta.models.nsynth.wavenet import fastgen
from IPython.display import Audio
%matplotlib inline
%config InlineBackend.figure_format = 'jpg'
```

Now we'll load up a sound I downloaded from freesound.org. The `utils.load_audio` method will resample this to the required sample rate of 16000. I'll load in 40000 samples of this beat which should end up being a pretty good loop:

```
# from https://www.freesound.org/people/MustardPlug/sounds/395058/
fname = '395058__mustardplug__breakbeat-hiphop-a4-4bar-96bpm.wav'
sr = 16000
audio = utils.load_audio(fname, sample_length=40000, sr=sr)
sample_length = audio.shape[0]
print('{} samples, {} seconds'.format(sample_length, sample_length / float(sr)))
```

## Encoding

We'll now encode some audio using the pre-trained NSynth model (download from: http://download.magenta.tensorflow.org/models/nsynth/wavenet-ckpt.tar). This is pretty fast, and takes about 3 seconds per 1 second of audio on my NVidia 1080 GPU. This will give us a 125 x 16 dimension encoding for every 4 seconds of audio which we can then decode, or resynthesize. We'll try a few things, including just leaving it alone and reconstructing it as is. But then we'll also try some fun transformations of the encoding and see what's possible from there.

```
help(fastgen.encode)

Help on function encode in module magenta.models.nsynth.wavenet.fastgen:

encode(wav_data, checkpoint_path, sample_length=64000)
    Generate an array of embeddings from an array of audio.

    Args:
      wav_data: Numpy array [batch_size, sample_length]
      checkpoint_path: Location of the pretrained model.
      sample_length: The total length of the final wave file, padded with 0s.
    Returns:
      encoding: a [mb, 125, 16] encoding (for 64000 sample audio file).
```

```
%time encoding = fastgen.encode(audio, 'model.ckpt-200000', sample_length)
```

This returns a 3-dimensional tensor representing the encoding of the audio. The first dimension of the encoding represents the batch dimension. We could have passed in many audio files at once and the process would be much faster. For now we've just passed in one audio file.

```
print(encoding.shape)
```

We'll also save the encoding so that we can use it again later:

```
np.save(fname + '.npy', encoding)
```

Let's take a look at the encoding of this audio file. Think of these as 16 channels of sounds all mixed together (though with a lot of caveats):

```
fig, axs = plt.subplots(2, 1, figsize=(10, 5))
axs[0].plot(audio);
axs[0].set_title('Audio Signal')
axs[1].plot(encoding[0]);
axs[1].set_title('NSynth Encoding')
```

You should be able to pretty clearly see a sort of beat-like pattern in both the signal and the encoding.

## Decoding

Now we can decode the encodings as is. This is the process that takes a while, though it used to be so long that you wouldn't even dare trying it. There is still plenty of room for improvement and I'm sure it will get faster very soon.

```
help(fastgen.synthesize)

Help on function synthesize in module magenta.models.nsynth.wavenet.fastgen:

synthesize(encodings, save_paths, checkpoint_path='model.ckpt-200000', samples_per_save=1000)
    Synthesize audio from an array of embeddings.

    Args:
      encodings: Numpy array with shape [batch_size, time, dim].
      save_paths: Iterable of output file names.
      checkpoint_path: Location of the pretrained model. [model.ckpt-200000]
      samples_per_save: Save files after every amount of generated samples.
```

```
%time fastgen.synthesize(encoding, save_paths=['gen_' + fname], samples_per_save=sample_length)
```

After it's done synthesizing, we can see that it takes about 6 minutes per 1 second of audio on a non-optimized version of Tensorflow for GPU on an NVidia 1080 GPU. We can speed things up considerably if we want to do multiple encodings at a time. We'll see that in just a moment. Let's first listen to the synthesized audio:

```
sr = 16000
synthesis = utils.load_audio('gen_' + fname, sample_length=sample_length, sr=sr)
```

Listening to the audio, the sounds are definitely different. NSynth seems to apply a sort of gobbly low-pass that also really doesn't know what to do with the high frequencies. It is really quite hard to describe, but that is what is so interesting about it. It has a recognizable, characteristic sound.

Let's try another one. I'll put the whole workflow for synthesis in two cells, and we can listen to another synthesis of a vocalist singing, "Laaaa":

```
def load_encoding(fname, sample_length=None, sr=16000, ckpt='model.ckpt-200000'):
    audio = utils.load_audio(fname, sample_length=sample_length, sr=sr)
    encoding = fastgen.encode(audio, ckpt, sample_length)
    return audio, encoding

# from https://www.freesound.org/people/maurolupo/sounds/213259/
fname = '213259__maurolupo__girl-sings-laa.wav'
sample_length = 32000
audio, encoding = load_encoding(fname, sample_length)
fastgen.synthesize(
    encoding,
    save_paths=['gen_' + fname],
    samples_per_save=sample_length)
synthesis = utils.load_audio('gen_' + fname,
                             sample_length=sample_length,
                             sr=sr)
```

Aside from the quality of the reconstruction, what we're really after is what is possible with such a model. Let's look at two examples now.

# Part 2: Timestretching

Let's try something more fun.
We'll stretch the encodings a bit and see what it sounds like. If you were to try and stretch audio directly, you'd hear a pitch shift. There are some other ways of stretching audio without shifting pitch, like granular synthesis. But it turns out that NSynth can also timestretch. Let's see how. First we'll use image interpolation to help stretch the encodings.

```
# use image interpolation to stretch the encoding: (pip install scikit-image)
try:
    from skimage.transform import resize
except ImportError:
    !pip install scikit-image
    from skimage.transform import resize
```

Here's a utility function to help you stretch your own encoding. It uses skimage.transform and will retain the range of values. Images typically only have a range of 0-1, but the encodings aren't actually images, so we'll keep track of their min/max in order to stretch them like images.

```
def timestretch(encodings, factor):
    min_encoding, max_encoding = encoding.min(), encoding.max()
    encodings_norm = (encodings - min_encoding) / (max_encoding - min_encoding)
    timestretches = []
    for encoding_i in encodings_norm:
        stretched = resize(encoding_i, (int(encoding_i.shape[0] * factor), encoding_i.shape[1]), mode='reflect')
        stretched = (stretched * (max_encoding - min_encoding)) + min_encoding
        timestretches.append(stretched)
    return np.array(timestretches)

# from https://www.freesound.org/people/MustardPlug/sounds/395058/
fname = '395058__mustardplug__breakbeat-hiphop-a4-4bar-96bpm.wav'
sample_length = 40000
audio, encoding = load_encoding(fname, sample_length)
```

Now let's stretch the encodings with a few different factors:

```
audio = utils.load_audio('gen_slower_' + fname, sample_length=None, sr=sr)
Audio(audio, rate=sr)
encoding_slower = timestretch(encoding, 1.5)
encoding_faster = timestretch(encoding, 0.5)
```

Basically we've made a slower and faster version of the amen break's encodings. The original encoding is shown in black:

```
fig, axs = plt.subplots(3, 1, figsize=(10, 7), sharex=True, sharey=True)
axs[0].plot(encoding[0]);
axs[0].set_title('Encoding (Normal Speed)')
axs[1].plot(encoding_faster[0]);
axs[1].set_title('Encoding (Faster)')
axs[2].plot(encoding_slower[0]);
axs[2].set_title('Encoding (Slower)')
```

Now let's decode them:

```
fastgen.synthesize(encoding_faster, save_paths=['gen_faster_' + fname])
fastgen.synthesize(encoding_slower, save_paths=['gen_slower_' + fname])
```

It seems to work pretty well and retains the pitch and timbre of the original sound. We could even quickly layer the sounds just by adding them. You might want to do this in a program like Logic or Ableton Live instead and explore more possibilities of these sounds!

# Part 3: Interpolating Sounds

Now let's try something more experimental. NSynth released plenty of great examples of what happens when you mix the embeddings of different sounds: https://magenta.tensorflow.org/nsynth-instrument - we're going to do the same but now with our own sounds!

First let's load some encodings:

```
sample_length = 80000

# from https://www.freesound.org/people/MustardPlug/sounds/395058/
aud1, enc1 = load_encoding('395058__mustardplug__breakbeat-hiphop-a4-4bar-96bpm.wav', sample_length)

# from https://www.freesound.org/people/xserra/sounds/176098/
aud2, enc2 = load_encoding('176098__xserra__cello-cant-dels-ocells.wav', sample_length)
```

Now we'll mix the two audio signals together. But this is unlike adding the two signals together in Ableton or simply hearing both sounds at the same time.
Instead, we're averaging the representation of their timbres, tonality, change over time, and resulting audio signal. This is way more powerful than a simple averaging.

```
enc_mix = (enc1 + enc2) / 2.0

fig, axs = plt.subplots(3, 1, figsize=(10, 7))
axs[0].plot(enc1[0]);
axs[0].set_title('Encoding 1')
axs[1].plot(enc2[0]);
axs[1].set_title('Encoding 2')
axs[2].plot(enc_mix[0]);
axs[2].set_title('Average')

fastgen.synthesize(enc_mix, save_paths=['mix.wav'])
```

As another example of what's possible with interpolation of embeddings, we'll try crossfading between the two embeddings. To do this, we'll write a utility function which will use a hanning window to apply a fade in or out to the embeddings matrix:

```
def fade(encoding, mode='in'):
    length = encoding.shape[1]
    fadein = (0.5 * (1.0 - np.cos(3.1415 * np.arange(length) /
                                  float(length)))).reshape(1, -1, 1)
    if mode == 'in':
        return fadein * encoding
    else:
        return (1.0 - fadein) * encoding

fig, axs = plt.subplots(3, 1, figsize=(10, 7))
axs[0].plot(enc1[0]);
axs[0].set_title('Original Encoding')
axs[1].plot(fade(enc1, 'in')[0]);
axs[1].set_title('Fade In')
axs[2].plot(fade(enc1, 'out')[0]);
axs[2].set_title('Fade Out')
```

Now we can crossfade two different encodings by adding their respective fade-ins and fade-outs:

```
def crossfade(encoding1, encoding2):
    return fade(encoding1, 'out') + fade(encoding2, 'in')

fig, axs = plt.subplots(3, 1, figsize=(10, 7))
axs[0].plot(enc1[0]);
axs[0].set_title('Encoding 1')
axs[1].plot(enc2[0]);
axs[1].set_title('Encoding 2')
axs[2].plot(crossfade(enc1, enc2)[0]);
axs[2].set_title('Crossfade')
```

Now let's synthesize the resulting encodings:

```
fastgen.synthesize(crossfade(enc1, enc2), save_paths=['crossfade.wav'])
```

There is a lot to explore with NSynth. So far I've just shown you a taste of what's possible when you are able to generate your own sounds. I expect the generation process will soon get much faster, especially with help from the community, and for more unexpected and interesting applications to emerge. Please keep in touch with whatever you end up creating, either personally via [twitter](https://twitter.com/pkmital), in our [Creative Applications of Deep Learning](https://www.kadenze.com/programs/creative-applications-of-deep-learning-with-tensorflow) community on Kadenze, or the [Magenta Google Group](https://groups.google.com/a/tensorflow.org/forum/#!forum/magenta-discuss).
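Beyond the 50/50 average and the crossfade above, one can also blend the two encodings with an arbitrary weight. The sketch below is a hypothetical helper (the name `interpolate` and the chosen weights are my own, not from the original notebook) that reuses `enc1` and `enc2` from earlier and produces encodings ready for `fastgen.synthesize`:

```
def interpolate(encoding1, encoding2, alpha=0.5):
    # alpha=0 returns encoding1 unchanged, alpha=1 returns encoding2
    return (1.0 - alpha) * encoding1 + alpha * encoding2

# a few points along the interpolation path between the breakbeat and the cello
blends = [interpolate(enc1, enc2, alpha) for alpha in (0.25, 0.5, 0.75)]
```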
# Comprehensive Example

```
# Enabling the `widget` backend.
# This requires jupyter-matplotlib a.k.a. ipympl.
# ipympl can be installed via pip or conda.
%matplotlib widget

import matplotlib.pyplot as plt
import numpy as np

# Testing matplotlib interactions with a simple plot
fig = plt.figure()
plt.plot(np.sin(np.linspace(0, 20, 100)));

# Always hide the toolbar
fig.canvas.toolbar_visible = False

# Put it back to its default
fig.canvas.toolbar_visible = 'fade-in-fade-out'

# Change the toolbar position
fig.canvas.toolbar_position = 'top'

# Hide the Figure name at the top of the figure
fig.canvas.header_visible = False

# Hide the footer
fig.canvas.footer_visible = False

# Disable the resizing feature
fig.canvas.resizable = False

# If true then scrolling while the mouse is over the canvas will not move the entire notebook
fig.canvas.capture_scroll = True
```

You can also call `display` on `fig.canvas` to display the interactive plot anywhere in the notebook:

```
fig.canvas.toolbar_visible = True
display(fig.canvas)
```

Or you can `display(fig)` to embed the current plot as a png:

```
display(fig)
```

# 3D plotting

```
from mpl_toolkits.mplot3d import axes3d

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')

# Grab some test data.
X, Y, Z = axes3d.get_test_data(0.05)

# Plot a basic wireframe.
ax.plot_wireframe(X, Y, Z, rstride=10, cstride=10)

plt.show()
```

# Subplots

```
# A more complex example from the matplotlib gallery
np.random.seed(0)

n_bins = 10
x = np.random.randn(1000, 3)

fig, axes = plt.subplots(nrows=2, ncols=2)
ax0, ax1, ax2, ax3 = axes.flatten()

colors = ['red', 'tan', 'lime']
ax0.hist(x, n_bins, density=1, histtype='bar', color=colors, label=colors)
ax0.legend(prop={'size': 10})
ax0.set_title('bars with legend')

ax1.hist(x, n_bins, density=1, histtype='bar', stacked=True)
ax1.set_title('stacked bar')

ax2.hist(x, n_bins, histtype='step', stacked=True, fill=False)
ax2.set_title('stack step (unfilled)')

# Make a multiple-histogram of data-sets with different length.
x_multi = [np.random.randn(n) for n in [10000, 5000, 2000]]
ax3.hist(x_multi, n_bins, histtype='bar')
ax3.set_title('different sample sizes')

fig.tight_layout()
plt.show()

fig.canvas.toolbar_position = 'right'
fig.canvas.toolbar_visible = False
```

# Interactions with other widgets and layouting

When you want to embed the figure into a layout of other widgets you should call `plt.ioff()` before creating the figure, otherwise `plt.figure()` will trigger a display of the canvas automatically and outside of your layout.

### Without using `ioff`

Here we will end up with the figure being displayed twice. The button won't do anything; it is just placed as an example of layouting.

```
import ipywidgets as widgets

# ensure we are in interactive mode
# this is the default, but if this notebook is executed out of order it may have been turned off
plt.ion()

fig = plt.figure()
ax = fig.gca()
ax.imshow(Z)

widgets.AppLayout(
    center=fig.canvas,
    footer=widgets.Button(icon='check'),
    pane_heights=[0, 6, 1]
)
```

### Fixing the double display with `ioff`

If we make sure interactive mode is off when we create the figure, then the figure will only display where we want it to.
There is ongoing work to allow usage of `ioff` as a context manager, see the [ipympl issue](https://github.com/matplotlib/ipympl/issues/220) and the [matplotlib issue](https://github.com/matplotlib/matplotlib/issues/17013)

```
plt.ioff()
fig = plt.figure()
plt.ion()

ax = fig.gca()
ax.imshow(Z)

widgets.AppLayout(
    center=fig.canvas,
    footer=widgets.Button(icon='check'),
    pane_heights=[0, 6, 1]
)
```

# Interacting with other widgets

## Changing a line plot with a slider

```
# When using the `widget` backend from ipympl,
# fig.canvas is a proper Jupyter interactive widget, which can be embedded in
# an ipywidgets layout. See https://ipywidgets.readthedocs.io/en/stable/examples/Layout%20Templates.html

# One can bind figure attributes to other widget values.

from ipywidgets import AppLayout, FloatSlider

plt.ioff()

slider = FloatSlider(
    orientation='horizontal',
    description='Factor:',
    value=1.0,
    min=0.02,
    max=2.0
)

slider.layout.margin = '0px 30% 0px 30%'
slider.layout.width = '40%'

fig = plt.figure()
fig.canvas.header_visible = False
fig.canvas.layout.min_height = '400px'
plt.title('Plotting: y=sin({} * x)'.format(slider.value))

x = np.linspace(0, 20, 500)

lines = plt.plot(x, np.sin(slider.value * x))

def update_lines(change):
    plt.title('Plotting: y=sin({} * x)'.format(change.new))
    lines[0].set_data(x, np.sin(change.new * x))
    fig.canvas.draw()
    fig.canvas.flush_events()

slider.observe(update_lines, names='value')

AppLayout(
    center=fig.canvas,
    footer=slider,
    pane_heights=[0, 6, 1]
)
```

## Update image data in a performant manner

Two useful tricks to improve performance when updating an image displayed with matplotlib are to:

1. Use the `set_data` method instead of calling imshow
2. Precompute and then index the array

```
# precomputing all images
x = np.linspace(0, np.pi, 200)
y = np.linspace(0, 10, 200)
X, Y = np.meshgrid(x, y)
parameter = np.linspace(-5, 5)
example_image_stack = np.sin(X)[None,:,:] + np.exp(np.cos(Y[None,:,:]*parameter[:,None,None]))

plt.ioff()
fig = plt.figure()
plt.ion()
im = plt.imshow(example_image_stack[0])

def update(change):
    im.set_data(example_image_stack[change['new']])
    fig.canvas.draw_idle()

slider = widgets.IntSlider(value=0, min=0, max=len(parameter)-1)
slider.observe(update, names='value')
widgets.VBox([slider, fig.canvas])
```

### Debugging widget updates and matplotlib callbacks

If an error is raised in the `update` function it will not always display in the notebook, which can make debugging difficult. The same issue also applies to matplotlib callbacks on user events such as mouse movement, for example see [issue](https://github.com/matplotlib/ipympl/issues/116). There are two ways to see the output:

1. In jupyterlab the output will show up in the Log Console (View > Show Log Console)
2. using `ipywidgets.Output`

Here is an example of using an `Output` to capture errors in the update function from the previous example.
To induce errors we changed the slider limits so that out of bounds errors will occur:

From: `slider = widgets.IntSlider(value=0, min=0, max=len(parameter)-1)`

To: `slider = widgets.IntSlider(value=0, min=0, max=len(parameter)+10)`

If you move the slider all the way to the right you should see errors from the Output widget

```
plt.ioff()
fig = plt.figure()
plt.ion()
im = plt.imshow(example_image_stack[0])

out = widgets.Output()

@out.capture()
def update(change):
    with out:
        if change['name'] == 'value':
            im.set_data(example_image_stack[change['new']])
            fig.canvas.draw_idle()

slider = widgets.IntSlider(value=0, min=0, max=len(parameter)+10)
slider.observe(update)
display(widgets.VBox([slider, fig.canvas]))
display(out)
```
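The same `Output`-capture pattern can be applied to matplotlib's own event callbacks (the mouse-movement case mentioned above). A minimal sketch, assuming a figure `fig` already exists; the handler name `on_move` is my own:

```
out = widgets.Output()

@out.capture()
def on_move(event):
    # any exception raised here is captured and shown in the Output widget
    if event.inaxes is not None:
        print('x={:.2f}, y={:.2f}'.format(event.xdata, event.ydata))

cid = fig.canvas.mpl_connect('motion_notify_event', on_move)
display(out)
```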
# Interactive single compartment HH example To run this interactive Jupyter Notebook, please click on the rocket icon 🚀 in the top panel. For more information, please see {ref}`how to use this documentation <userdocs:usage:jupyterbooks>`. Please uncomment the line below if you use the Google Colab. (It does not include these packages by default). ``` #%pip install pyneuroml neuromllite NEURON import math from neuroml import NeuroMLDocument from neuroml import Cell from neuroml import IonChannelHH from neuroml import GateHHRates from neuroml import BiophysicalProperties from neuroml import MembraneProperties from neuroml import ChannelDensity from neuroml import HHRate from neuroml import SpikeThresh from neuroml import SpecificCapacitance from neuroml import InitMembPotential from neuroml import IntracellularProperties from neuroml import IncludeType from neuroml import Resistivity from neuroml import Morphology, Segment, Point3DWithDiam from neuroml import Network, Population from neuroml import PulseGenerator, ExplicitInput import numpy as np from pyneuroml import pynml from pyneuroml.lems import LEMSSimulation ``` ## Declare the model ### Create ion channels ``` def create_na_channel(): """Create the Na channel. This will create the Na channel and save it to a file. It will also validate this file. returns: name of the created file """ na_channel = IonChannelHH(id="na_channel", notes="Sodium channel for HH cell", conductance="10pS", species="na") gate_m = GateHHRates(id="na_m", instances="3", notes="m gate for na channel") m_forward_rate = HHRate(type="HHExpLinearRate", rate="1per_ms", midpoint="-40mV", scale="10mV") m_reverse_rate = HHRate(type="HHExpRate", rate="4per_ms", midpoint="-65mV", scale="-18mV") gate_m.forward_rate = m_forward_rate gate_m.reverse_rate = m_reverse_rate na_channel.gate_hh_rates.append(gate_m) gate_h = GateHHRates(id="na_h", instances="1", notes="h gate for na channel") h_forward_rate = HHRate(type="HHExpRate", rate="0.07per_ms", midpoint="-65mV", scale="-20mV") h_reverse_rate = HHRate(type="HHSigmoidRate", rate="1per_ms", midpoint="-35mV", scale="10mV") gate_h.forward_rate = h_forward_rate gate_h.reverse_rate = h_reverse_rate na_channel.gate_hh_rates.append(gate_h) na_channel_doc = NeuroMLDocument(id="na_channel", notes="Na channel for HH neuron") na_channel_fn = "HH_example_na_channel.nml" na_channel_doc.ion_channel_hhs.append(na_channel) pynml.write_neuroml2_file(nml2_doc=na_channel_doc, nml2_file_name=na_channel_fn, validate=True) return na_channel_fn def create_k_channel(): """Create the K channel This will create the K channel and save it to a file. It will also validate this file. 
:returns: name of the K channel file """ k_channel = IonChannelHH(id="k_channel", notes="Potassium channel for HH cell", conductance="10pS", species="k") gate_n = GateHHRates(id="k_n", instances="4", notes="n gate for k channel") n_forward_rate = HHRate(type="HHExpLinearRate", rate="0.1per_ms", midpoint="-55mV", scale="10mV") n_reverse_rate = HHRate(type="HHExpRate", rate="0.125per_ms", midpoint="-65mV", scale="-80mV") gate_n.forward_rate = n_forward_rate gate_n.reverse_rate = n_reverse_rate k_channel.gate_hh_rates.append(gate_n) k_channel_doc = NeuroMLDocument(id="k_channel", notes="k channel for HH neuron") k_channel_fn = "HH_example_k_channel.nml" k_channel_doc.ion_channel_hhs.append(k_channel) pynml.write_neuroml2_file(nml2_doc=k_channel_doc, nml2_file_name=k_channel_fn, validate=True) return k_channel_fn def create_leak_channel(): """Create a leak channel This will create the leak channel and save it to a file. It will also validate this file. :returns: name of leak channel nml file """ leak_channel = IonChannelHH(id="leak_channel", conductance="10pS", notes="Leak conductance") leak_channel_doc = NeuroMLDocument(id="leak_channel", notes="leak channel for HH neuron") leak_channel_fn = "HH_example_leak_channel.nml" leak_channel_doc.ion_channel_hhs.append(leak_channel) pynml.write_neuroml2_file(nml2_doc=leak_channel_doc, nml2_file_name=leak_channel_fn, validate=True) return leak_channel_fn ``` ### Create cell ``` def create_cell(): """Create the cell. :returns: name of the cell nml file """ # Create the nml file and add the ion channels hh_cell_doc = NeuroMLDocument(id="cell", notes="HH cell") hh_cell_fn = "HH_example_cell.nml" hh_cell_doc.includes.append(IncludeType(href=create_na_channel())) hh_cell_doc.includes.append(IncludeType(href=create_k_channel())) hh_cell_doc.includes.append(IncludeType(href=create_leak_channel())) # Define a cell hh_cell = Cell(id="hh_cell", notes="A single compartment HH cell") # Define its biophysical properties bio_prop = BiophysicalProperties(id="hh_b_prop") # notes="Biophysical properties for HH cell") # Membrane properties are a type of biophysical properties mem_prop = MembraneProperties() # Add membrane properties to the biophysical properties bio_prop.membrane_properties = mem_prop # Append to cell hh_cell.biophysical_properties = bio_prop # Channel density for Na channel na_channel_density = ChannelDensity(id="na_channels", cond_density="120.0 mS_per_cm2", erev="50.0 mV", ion="na", ion_channel="na_channel") mem_prop.channel_densities.append(na_channel_density) # Channel density for k channel k_channel_density = ChannelDensity(id="k_channels", cond_density="360 S_per_m2", erev="-77mV", ion="k", ion_channel="k_channel") mem_prop.channel_densities.append(k_channel_density) # Leak channel leak_channel_density = ChannelDensity(id="leak_channels", cond_density="3.0 S_per_m2", erev="-54.3mV", ion="non_specific", ion_channel="leak_channel") mem_prop.channel_densities.append(leak_channel_density) # Other membrane properties mem_prop.spike_threshes.append(SpikeThresh(value="-20mV")) mem_prop.specific_capacitances.append(SpecificCapacitance(value="1.0 uF_per_cm2")) mem_prop.init_memb_potentials.append(InitMembPotential(value="-65mV")) intra_prop = IntracellularProperties() intra_prop.resistivities.append(Resistivity(value="0.03 kohm_cm")) # Add to biological properties bio_prop.intracellular_properties = intra_prop # Morphology morph = Morphology(id="hh_cell_morph") # notes="Simple morphology for the HH cell") seg = Segment(id="0", name="soma", notes="Soma 
segment") # We want a diameter such that area is 1000 micro meter^2 # surface area of a sphere is 4pi r^2 = 4pi diam^2 diam = math.sqrt(1000 / math.pi) proximal = distal = Point3DWithDiam(x="0", y="0", z="0", diameter=str(diam)) seg.proximal = proximal seg.distal = distal morph.segments.append(seg) hh_cell.morphology = morph hh_cell_doc.cells.append(hh_cell) pynml.write_neuroml2_file(nml2_doc=hh_cell_doc, nml2_file_name=hh_cell_fn, validate=True) return hh_cell_fn ``` ### Create a network ``` def create_network(): """Create the network :returns: name of network nml file """ net_doc = NeuroMLDocument(id="network", notes="HH cell network") net_doc_fn = "HH_example_net.nml" net_doc.includes.append(IncludeType(href=create_cell())) # Create a population: convenient to create many cells of the same type pop = Population(id="pop0", notes="A population for our cell", component="hh_cell", size=1) # Input pulsegen = PulseGenerator(id="pg", notes="Simple pulse generator", delay="100ms", duration="100ms", amplitude="0.08nA") exp_input = ExplicitInput(target="pop0[0]", input="pg") net = Network(id="single_hh_cell_network", note="A network with a single population") net_doc.pulse_generators.append(pulsegen) net.explicit_inputs.append(exp_input) net.populations.append(pop) net_doc.networks.append(net) pynml.write_neuroml2_file(nml2_doc=net_doc, nml2_file_name=net_doc_fn, validate=True) return net_doc_fn ``` ## Plot the data we record ``` def plot_data(sim_id): """Plot the sim data. Load the data from the file and plot the graph for the membrane potential using the pynml generate_plot utility function. :sim_id: ID of simulaton """ data_array = np.loadtxt(sim_id + ".dat") pynml.generate_plot([data_array[:, 0]], [data_array[:, 1]], "Membrane potential", show_plot_already=False, save_figure_to=sim_id + "-v.png", xaxis="time (s)", yaxis="membrane potential (V)") pynml.generate_plot([data_array[:, 0]], [data_array[:, 2]], "channel current", show_plot_already=False, save_figure_to=sim_id + "-i.png", xaxis="time (s)", yaxis="channel current (A)") pynml.generate_plot([data_array[:, 0], data_array[:, 0]], [data_array[:, 3], data_array[:, 4]], "current density", labels=["Na", "K"], show_plot_already=False, save_figure_to=sim_id + "-iden.png", xaxis="time (s)", yaxis="current density (A_per_m2)") ``` ## Create and run the simulation Create the simulation, run it, record data, and plot the recorded information. ``` def main(): """Main function Include the NeuroML model into a LEMS simulation file, run it, plot some data. 
""" # Simulation bits sim_id = "HH_single_compartment_example_sim" simulation = LEMSSimulation(sim_id=sim_id, duration=300, dt=0.01, simulation_seed=123) # Include the NeuroML model file simulation.include_neuroml2_file(create_network()) # Assign target for the simulation simulation.assign_simulation_target("single_hh_cell_network") # Recording information from the simulation simulation.create_output_file(id="output0", file_name=sim_id + ".dat") simulation.add_column_to_output_file("output0", column_id="pop0[0]/v", quantity="pop0[0]/v") simulation.add_column_to_output_file("output0", column_id="pop0[0]/iChannels", quantity="pop0[0]/iChannels") simulation.add_column_to_output_file("output0", column_id="pop0[0]/na/iDensity", quantity="pop0[0]/hh_b_prop/membraneProperties/na_channels/iDensity/") simulation.add_column_to_output_file("output0", column_id="pop0[0]/k/iDensity", quantity="pop0[0]/hh_b_prop/membraneProperties/k_channels/iDensity/") # Save LEMS simulation to file sim_file = simulation.save_to_file() # Run the simulation using the default jNeuroML simulator pynml.run_lems_with_jneuroml(sim_file, max_memory="2G", nogui=True, plot=False) # Plot the data plot_data(sim_id) if __name__ == "__main__": main() ```
# Hyperparameter tuning

In the previous section, we did not discuss the parameters of random forest and gradient-boosting. However, there are a couple of things to keep in mind when setting these. This notebook gives crucial information regarding how to set the hyperparameters of both random forest and gradient boosting decision tree models.

<div class="admonition caution alert alert-warning"> <p class="first admonition-title" style="font-weight: bold;">Caution!</p> <p class="last">For the sake of clarity, no cross-validation will be used to estimate the testing error. We are only showing the effect of the parameters on the validation set of what should be the inner cross-validation.</p> </div>

## Random forest

The main parameter to tune for random forest is the `n_estimators` parameter. In general, the more trees in the forest, the better the generalization performance will be. However, more trees slow down the fitting and prediction time. The goal is to balance computing time and generalization performance when setting the number of estimators for putting such a learner in production.

The `max_depth` parameter could also be tuned. Sometimes, there is no need to have fully grown trees. However, be aware that with random forest, trees are generally deep since we are seeking to overfit the learners on the bootstrap samples; this will be mitigated by combining them. Assembling underfitted trees (i.e. shallow trees) might also lead to an underfitted forest.

```
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split

data, target = fetch_california_housing(return_X_y=True, as_frame=True)
target *= 100  # rescale the target in k$
data_train, data_test, target_train, target_test = train_test_split(
    data, target, random_state=0)

import pandas as pd
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestRegressor

param_grid = {
    "n_estimators": [10, 20, 30],
    "max_depth": [3, 5, None],
}
grid_search = GridSearchCV(
    RandomForestRegressor(n_jobs=2), param_grid=param_grid,
    scoring="neg_mean_absolute_error", n_jobs=2,
)
grid_search.fit(data_train, target_train)

columns = [f"param_{name}" for name in param_grid.keys()]
columns += ["mean_test_score", "rank_test_score"]
cv_results = pd.DataFrame(grid_search.cv_results_)
cv_results["mean_test_score"] = -cv_results["mean_test_score"]
cv_results[columns].sort_values(by="rank_test_score")
```

We can observe that in our grid-search, the largest `max_depth` together with the largest `n_estimators` led to the best generalization performance.

## Gradient-boosting decision trees

For gradient-boosting, parameters are coupled, so we cannot set the parameters one after the other anymore. The important parameters are `n_estimators`, `max_depth`, and `learning_rate`.

Let's first discuss the `max_depth` parameter. We saw in the section on gradient-boosting that the algorithm fits the error of the previous tree in the ensemble. Thus, fitting fully grown trees will be detrimental. Indeed, the first tree of the ensemble would perfectly fit (overfit) the data and thus no subsequent tree would be required, since there would be no residuals. Therefore, the tree used in gradient-boosting should have a low depth, typically between 3 and 8 levels. Having very weak learners at each step helps reduce overfitting.

With this consideration in mind, the deeper the trees, the faster the residuals are corrected and the fewer learners are required.
Therefore, `n_estimators` should be increased if `max_depth` is lower.

Finally, we have overlooked the impact of the `learning_rate` parameter until now. When fitting the residuals, we would like the tree to try to correct all possible errors or only a fraction of them. The learning rate allows you to control this behaviour. A small learning-rate value would only correct the residuals of very few samples. If a large learning rate is set (e.g., 1), we would fit the residuals of all samples. So, with a very low learning rate, we will need more estimators to correct the overall error. However, too large a learning rate tends to produce an overfitted ensemble, similar to having too large a tree depth.

```
from sklearn.ensemble import GradientBoostingRegressor

param_grid = {
    "n_estimators": [10, 30, 50],
    "max_depth": [3, 5, None],
    "learning_rate": [0.1, 1],
}
grid_search = GridSearchCV(
    GradientBoostingRegressor(), param_grid=param_grid,
    scoring="neg_mean_absolute_error", n_jobs=2
)
grid_search.fit(data_train, target_train)

columns = [f"param_{name}" for name in param_grid.keys()]
columns += ["mean_test_score", "rank_test_score"]
cv_results = pd.DataFrame(grid_search.cv_results_)
cv_results["mean_test_score"] = -cv_results["mean_test_score"]
cv_results[columns].sort_values(by="rank_test_score")
```

<div class="admonition caution alert alert-warning"> <p class="first admonition-title" style="font-weight: bold;">Caution!</p> <p class="last">Here, we tune the <tt class="docutils literal">n_estimators</tt> but be aware that using early-stopping as in the previous exercise will be better.</p> </div>
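As a hedged sketch of the early-stopping alternative mentioned in the caution above (the parameter values here are arbitrary, not tuned): scikit-learn's `GradientBoostingRegressor` can stop adding trees once the score on an internal validation split stops improving, instead of grid-searching `n_estimators`.

```
from sklearn.ensemble import GradientBoostingRegressor

gbrt = GradientBoostingRegressor(
    n_estimators=1000,        # generous upper bound; early stopping picks the actual number
    learning_rate=0.1,
    max_depth=3,
    validation_fraction=0.2,  # held-out fraction used to monitor the validation score
    n_iter_no_change=5,       # stop after 5 iterations without improvement
    random_state=0,
)
gbrt.fit(data_train, target_train)
print(f"Trees actually fitted: {gbrt.n_estimators_}")
```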
``` # !pip install ray[tune] import pandas as pd import numpy as np from matplotlib import pyplot as plt from sklearn.metrics import mean_squared_error from hyperopt import hp from ray import tune from hyperopt import fmin, tpe, hp,Trials, space_eval import scipy.stats df = pd.read_csv("../../Data/Raw/flightLogData.csv") plt.figure(figsize=(20, 10)) plt.plot(df.Time, df['Altitude'], linewidth=2, color="r", label="Altitude") plt.plot(df.Time, df['Vertical_velocity'], linewidth=2, color="y", label="Vertical_velocity") plt.plot(df.Time, df['Vertical_acceleration'], linewidth=2, color="b", label="Vertical_acceleration") plt.legend() plt.show() temp_df = df[['Altitude', "Vertical_velocity", "Vertical_acceleration"]] noise = np.random.normal(2, 5, temp_df.shape) noisy_df = temp_df + noise noisy_df['Time'] = df['Time'] plt.figure(figsize=(20, 10)) plt.plot(noisy_df.Time, noisy_df['Altitude'], linewidth=2, color="r", label="Altitude") plt.plot(noisy_df.Time, noisy_df['Vertical_velocity'], linewidth=2, color="y", label="Vertical_velocity") plt.plot(noisy_df.Time, noisy_df['Vertical_acceleration'], linewidth=2, color="b", label="Vertical_acceleration") plt.legend() plt.show() ``` ## Altitude ``` q = 0.001 A = np.array([[1.0, 0.1, 0.005], [0, 1.0, 0.1], [0, 0, 1]]) H = np.array([[1.0, 0.0, 0.0],[ 0.0, 0.0, 1.0]]) P = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]) # R = np.array([[0.5, 0.0], [0.0, 0.0012]]) # Q = np.array([[q, 0.0, 0.0], [0.0, q, 0.0], [0.0, 0.0, q]]) I = np.identity(3) x_hat = np.array([[0.0], [0.0], [0.0]]) Y = np.array([[0.0], [0.0]]) def kalman_update(param): r1, r2, q1 = param['r1'], param['r2'], param['q1'] R = np.array([[r1, 0.0], [0.0, r2]]) Q = np.array([[q1, 0.0, 0.0], [0.0, q1, 0.0], [0.0, 0.0, q1]]) A = np.array([[1.0, 0.05, 0.00125], [0, 1.0, 0.05], [0, 0, 1]]) H = np.array([[1.0, 0.0, 0.0],[ 0.0, 0.0, 1.0]]) P = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]) I = np.identity(3) x_hat = np.array([[0.0], [0.0], [0.0]]) Y = np.array([[0.0], [0.0]]) new_altitude = [] new_acceleration = [] new_velocity = [] for altitude, az in zip(noisy_df['Altitude'], noisy_df['Vertical_acceleration']): Z = np.array([[altitude], [az]]) x_hat_minus = np.dot(A, x_hat) P_minus = np.dot(np.dot(A, P), np.transpose(A)) + Q K = np.dot(np.dot(P_minus, np.transpose(H)), np.linalg.inv((np.dot(np.dot(H, P_minus), np.transpose(H)) + R))) Y = Z - np.dot(H, x_hat_minus) x_hat = x_hat_minus + np.dot(K, Y) P = np.dot((I - np.dot(K, H)), P_minus) Y = Z - np.dot(H, x_hat_minus) new_altitude.append(float(x_hat[0])) new_velocity.append(float(x_hat[1])) new_acceleration.append(float(x_hat[2])) return new_altitude def objective_function(param): r1, r2, q1 = param['r1'], param['r2'], param['q1'] R = np.array([[r1, 0.0], [0.0, r2]]) Q = np.array([[q1, 0.0, 0.0], [0.0, q1, 0.0], [0.0, 0.0, q1]]) A = np.array([[1.0, 0.05, 0.00125], [0, 1.0, 0.05], [0, 0, 1]]) H = np.array([[1.0, 0.0, 0.0],[ 0.0, 0.0, 1.0]]) P = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]) I = np.identity(3) x_hat = np.array([[0.0], [0.0], [0.0]]) Y = np.array([[0.0], [0.0]]) new_altitude = [] new_acceleration = [] new_velocity = [] for altitude, az in zip(noisy_df['Altitude'], noisy_df['Vertical_acceleration']): Z = np.array([[altitude], [az]]) x_hat_minus = np.dot(A, x_hat) P_minus = np.dot(np.dot(A, P), np.transpose(A)) + Q K = np.dot(np.dot(P_minus, np.transpose(H)), np.linalg.inv((np.dot(np.dot(H, P_minus), np.transpose(H)) + R))) Y = Z - np.dot(H, x_hat_minus) x_hat = x_hat_minus + np.dot(K, Y) P = 
np.dot((I - np.dot(K, H)), P_minus) Y = Z - np.dot(H, x_hat_minus) new_altitude.append(float(x_hat[0])) new_velocity.append(float(x_hat[1])) new_acceleration.append(float(x_hat[2])) return mean_squared_error(df['Altitude'], new_altitude) # space = { # "r1": hp.choice("r1", np.arange(0.01, 90, 0.005)), # "r2": hp.choice("r2", np.arange(0.01, 90, 0.005)), # "q1": hp.choice("q1", np.arange(0.0001, 0.0009, 0.0001)) # } len(np.arange(0.00001, 0.09, 0.00001)) space = { "r1": hp.choice("r1", np.arange(0.001, 90, 0.001)), "r2": hp.choice("r2", np.arange(0.001, 90, 0.001)), "q1": hp.choice("q1", np.arange(0.00001, 0.09, 0.00001)) } # Initialize trials object trials = Trials() best = fmin(fn=objective_function, space = space, algo=tpe.suggest, max_evals=100, trials=trials ) print(best) # -> {'a': 1, 'c2': 0.01420615366247227} print(space_eval(space, best)) # -> ('case 2', 0.01420615366247227} d1 = space_eval(space, best) objective_function(d1) %%timeit objective_function({'q1': 0.06626, 'r1': 0.25, 'r2': 0.75}) objective_function({'q1': 0.06626, 'r1': 0.25, 'r2': 0.75}) y = kalman_update(d1) current = kalman_update({'q1': 0.06626, 'r1': 0.25, 'r2': 0.75}) plt.figure(figsize=(20, 10)) plt.plot(noisy_df.Time, df['Altitude'], linewidth=2, color="r", label="Actual") plt.plot(noisy_df.Time, current, linewidth=2, color="g", label="ESP32") plt.plot(noisy_df.Time, noisy_df['Altitude'], linewidth=2, color="y", label="Noisy") plt.plot(noisy_df.Time, y, linewidth=2, color="b", label="Predicted") plt.legend() plt.show() def kalman_update_return_velocity(param): r1, r2, q1 = param['r1'], param['r2'], param['q1'] R = np.array([[r1, 0.0], [0.0, r2]]) Q = np.array([[q1, 0.0, 0.0], [0.0, q1, 0.0], [0.0, 0.0, q1]]) A = np.array([[1.0, 0.05, 0.00125], [0, 1.0, 0.05], [0, 0, 1]]) H = np.array([[1.0, 0.0, 0.0],[ 0.0, 0.0, 1.0]]) P = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]) I = np.identity(3) x_hat = np.array([[0.0], [0.0], [0.0]]) Y = np.array([[0.0], [0.0]]) new_altitude = [] new_acceleration = [] new_velocity = [] for altitude, az in zip(noisy_df['Altitude'], noisy_df['Vertical_acceleration']): Z = np.array([[altitude], [az]]) x_hat_minus = np.dot(A, x_hat) P_minus = np.dot(np.dot(A, P), np.transpose(A)) + Q K = np.dot(np.dot(P_minus, np.transpose(H)), np.linalg.inv((np.dot(np.dot(H, P_minus), np.transpose(H)) + R))) Y = Z - np.dot(H, x_hat_minus) x_hat = x_hat_minus + np.dot(K, Y) P = np.dot((I - np.dot(K, H)), P_minus) Y = Z - np.dot(H, x_hat_minus) new_altitude.append(float(x_hat[0])) new_velocity.append(float(x_hat[1])) new_acceleration.append(float(x_hat[2])) return new_velocity def objective_function(param): r1, r2, q1 = param['r1'], param['r2'], param['q1'] R = np.array([[r1, 0.0], [0.0, r2]]) Q = np.array([[q1, 0.0, 0.0], [0.0, q1, 0.0], [0.0, 0.0, q1]]) A = np.array([[1.0, 0.05, 0.00125], [0, 1.0, 0.05], [0, 0, 1]]) H = np.array([[1.0, 0.0, 0.0],[ 0.0, 0.0, 1.0]]) P = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]) I = np.identity(3) x_hat = np.array([[0.0], [0.0], [0.0]]) Y = np.array([[0.0], [0.0]]) new_altitude = [] new_acceleration = [] new_velocity = [] for altitude, az in zip(noisy_df['Altitude'], noisy_df['Vertical_acceleration']): Z = np.array([[altitude], [az]]) x_hat_minus = np.dot(A, x_hat) P_minus = np.dot(np.dot(A, P), np.transpose(A)) + Q K = np.dot(np.dot(P_minus, np.transpose(H)), np.linalg.inv((np.dot(np.dot(H, P_minus), np.transpose(H)) + R))) Y = Z - np.dot(H, x_hat_minus) x_hat = x_hat_minus + np.dot(K, Y) P = np.dot((I - np.dot(K, H)), P_minus) Y = Z - 
np.dot(H, x_hat_minus) new_altitude.append(float(x_hat[0])) new_velocity.append(float(x_hat[1])) new_acceleration.append(float(x_hat[2])) return mean_squared_error(df['Vertical_velocity'], new_velocity) space = { "r1": hp.choice("r1", np.arange(0.001, 90, 0.001)), "r2": hp.choice("r2", np.arange(0.001, 90, 0.001)), "q1": hp.choice("q1", np.arange(0.00001, 0.09, 0.00001)) } # Initialize trials object trials = Trials() best = fmin(fn=objective_function, space = space, algo=tpe.suggest, max_evals=100, trials=trials ) print(best) print(space_eval(space, best)) d2 = space_eval(space, best) objective_function(d2) y = kalman_update_return_velocity(d2) current = kalman_update_return_velocity({'q1': 0.0013, 'r1': 0.25, 'r2': 0.65}) previous = kalman_update_return_velocity({'q1': 0.08519, 'r1': 4.719, 'r2': 56.443}) plt.figure(figsize=(20, 10)) plt.plot(noisy_df.Time, df['Vertical_velocity'], linewidth=2, color="r", label="Actual") plt.plot(noisy_df.Time, current, linewidth=2, color="g", label="ESP32") plt.plot(noisy_df.Time, previous, linewidth=2, color="c", label="With previous data") plt.plot(noisy_df.Time, noisy_df['Vertical_velocity'], linewidth=2, color="y", label="Noisy") plt.plot(noisy_df.Time, y, linewidth=2, color="b", label="Predicted") plt.legend() plt.show() ```
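The altitude and velocity objectives above repeat the entire filter loop; the sketch below is one possible refactor (the helper names `run_kalman` and `make_objective` are my own) that runs the filter once and builds either objective from the same result:

```
def run_kalman(param):
    # Run the filter once over the noisy data; rows of the returned array are
    # the filtered state [altitude, velocity, acceleration] at each time step.
    r1, r2, q1 = param['r1'], param['r2'], param['q1']
    R = np.array([[r1, 0.0], [0.0, r2]])
    Q = q1 * np.identity(3)
    A = np.array([[1.0, 0.05, 0.00125], [0.0, 1.0, 0.05], [0.0, 0.0, 1.0]])
    H = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    P = np.identity(3)
    I = np.identity(3)
    x_hat = np.zeros((3, 1))
    states = []
    for altitude, az in zip(noisy_df['Altitude'], noisy_df['Vertical_acceleration']):
        Z = np.array([[altitude], [az]])
        x_hat_minus = A @ x_hat
        P_minus = A @ P @ A.T + Q
        K = P_minus @ H.T @ np.linalg.inv(H @ P_minus @ H.T + R)
        x_hat = x_hat_minus + K @ (Z - H @ x_hat_minus)
        P = (I - K @ H) @ P_minus
        states.append(x_hat.ravel().copy())
    return np.array(states)

def make_objective(column, state_index):
    # Compare one filtered state component against the clean reference data in df
    def objective(param):
        states = run_kalman(param)
        return mean_squared_error(df[column], states[:, state_index])
    return objective

altitude_objective = make_objective('Altitude', 0)
velocity_objective = make_objective('Vertical_velocity', 1)
```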
# Selected Economic Characteristics: Employment Status from the American Community Survey **[Work in progress]** This notebook downloads [selected economic characteristics (DP03)](https://data.census.gov/cedsci/table?tid=ACSDP5Y2018.DP03) from the American Community Survey 2018 5-Year Data. Data source: [American Community Survey 5-Year Data 2018](https://www.census.gov/data/developers/data-sets/acs-5year.html) Authors: Peter Rose ([email protected]), Ilya Zaslavsky ([email protected]) ``` import os import pandas as pd from pathlib import Path import time pd.options.display.max_rows = None # display all rows pd.options.display.max_columns = None # display all columsns NEO4J_IMPORT = Path(os.getenv('NEO4J_IMPORT')) print(NEO4J_IMPORT) ``` ## Download selected variables * [Selected economic characteristics for US](https://data.census.gov/cedsci/table?tid=ACSDP5Y2018.DP03) * [List of variables as HTML](https://api.census.gov/data/2018/acs/acs5/profile/groups/DP03.html) or [JSON](https://api.census.gov/data/2018/acs/acs5/profile/groups/DP03/) * [Description of variables](https://www2.census.gov/programs-surveys/acs/tech_docs/subject_definitions/2018_ACSSubjectDefinitions.pdf) * [Example URLs for API](https://api.census.gov/data/2018/acs/acs5/profile/examples.html) ### Specify variables from DP03 group and assign property names Names must follow the [Neo4j property naming conventions](https://neo4j.com/docs/getting-started/current/graphdb-concepts/#graphdb-naming-rules-and-recommendations). ``` variables = {# EMPLOYMENT STATUS 'DP03_0001E': 'population16YearsAndOver', 'DP03_0002E': 'population16YearsAndOverInLaborForce', 'DP03_0002PE': 'population16YearsAndOverInLaborForcePct', 'DP03_0003E': 'population16YearsAndOverInCivilianLaborForce', 'DP03_0003PE': 'population16YearsAndOverInCivilianLaborForcePct', 'DP03_0006E': 'population16YearsAndOverInArmedForces', 'DP03_0006PE': 'population16YearsAndOverInArmedForcesPct', 'DP03_0007E': 'population16YearsAndOverNotInLaborForce', 'DP03_0007PE': 'population16YearsAndOverNotInLaborForcePct' #'DP03_0014E': 'ownChildrenOfTheHouseholderUnder6Years', #'DP03_0015E': 'ownChildrenOfTheHouseholderUnder6YearsAllParentsInLaborForce', #'DP03_0016E': 'ownChildrenOfTheHouseholder6To17Years', #'DP03_0017E': 'ownChildrenOfTheHouseholder6To17YearsAllParentsInLaborForce', } fields = ",".join(variables.keys()) for v in variables.values(): print('e.' + v + ' = toInteger(row.' 
+ v + '),') print(len(variables.keys())) ``` ## Download county-level data using US Census API ``` url_county = f'https://api.census.gov/data/2018/acs/acs5/profile?get={fields}&for=county:*' df = pd.read_json(url_county, dtype='str') df.fillna('', inplace=True) df.head() ``` ##### Add column names ``` df = df[1:].copy() # skip first row of labels columns = list(variables.values()) columns.append('stateFips') columns.append('countyFips') df.columns = columns ``` Remove Puerto Rico (stateFips = 72) to limit data to US States TODO handle data for Puerto Rico (GeoNames represents Puerto Rico as a country) ``` df.query("stateFips != '72'", inplace=True) ``` Save list of state fips (required later to get tract data by state) ``` stateFips = list(df['stateFips'].unique()) stateFips.sort() print(stateFips) df.head() # Example data df[(df['stateFips'] == '06') & (df['countyFips'] == '073')] df['source'] = 'American Community Survey 5 year' df['aggregationLevel'] = 'Admin2' ``` ### Save data ``` df.to_csv(NEO4J_IMPORT / "03a-USCensusDP03EmploymentAdmin2.csv", index=False) ``` ## Download zip-level data using US Census API ``` url_zip = f'https://api.census.gov/data/2018/acs/acs5/profile?get={fields}&for=zip%20code%20tabulation%20area:*' df = pd.read_json(url_zip, dtype='str') df.fillna('', inplace=True) df.head() ``` ##### Add column names ``` df = df[1:].copy() # skip first row columns = list(variables.values()) columns.append('stateFips') columns.append('postalCode') df.columns = columns df.head() # Example data df.query("postalCode == '90210'") df['source'] = 'American Community Survey 5 year' df['aggregationLevel'] = 'PostalCode' ``` ### Save data ``` df.to_csv(NEO4J_IMPORT / "03a-USCensusDP03EmploymentZip.csv", index=False) ``` ## Download tract-level data using US Census API Tract-level data are only available by state, so we need to loop over all states. ``` def get_tract_data(state): url_tract = f'https://api.census.gov/data/2018/acs/acs5/profile?get={fields}&for=tract:*&in=state:{state}' df = pd.read_json(url_tract, dtype='str') time.sleep(1) # skip first row of labels df = df[1:].copy() # Add column names columns = list(variables.values()) columns.append('stateFips') columns.append('countyFips') columns.append('tract') df.columns = columns return df df = pd.concat((get_tract_data(state) for state in stateFips)) df.fillna('', inplace=True) df['tract'] = df['stateFips'] + df['countyFips'] + df['tract'] df['source'] = 'American Community Survey 5 year' df['aggregationLevel'] = 'Tract' # Example data for San Diego County df[(df['stateFips'] == '06') & (df['countyFips'] == '073')].head() ``` ### Save data ``` df.to_csv(NEO4J_IMPORT / "03a-USCensusDP03EmploymentTract.csv", index=False) df.shape ```
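As a quick, optional sanity check (the column selection below is an assumption based on the property names defined above), the saved county-level file can be reloaded and its count columns cast back to numbers, mirroring the `toInteger(row....)` lines printed earlier for the Neo4j load:

```
check = pd.read_csv(NEO4J_IMPORT / "03a-USCensusDP03EmploymentAdmin2.csv", dtype=str)
# count columns only (the percentage columns keep decimals, so they are left as strings here)
count_cols = [name for name in variables.values() if not name.endswith('Pct')]
check[count_cols] = check[count_cols].apply(pd.to_numeric)
check[['stateFips', 'countyFips'] + count_cols].head()
```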
<a href="https://colab.research.google.com/github/Tessellate-Imaging/monk_v1/blob/master/study_roadmaps/3_image_processing_deep_learning_roadmap/3_deep_learning_advanced/1_Blocks%20in%20Deep%20Learning%20Networks/8)%20Resnet%20V2%20Bottleneck%20Block%20(Type%20-%202).ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # Goals ### 1. Learn to implement Resnet V2 Bottleneck Block (Type - 1) using monk - Monk's Keras - Monk's Pytorch - Monk's Mxnet ### 2. Use network Monk's debugger to create complex blocks ### 3. Understand how syntactically different it is to implement the same using - Traditional Keras - Traditional Pytorch - Traditional Mxnet # Resnet V2 Bottleneck Block - Type 1 - Note: The block structure can have variations too, this is just an example ``` from IPython.display import Image Image(filename='imgs/resnet_v2_bottleneck_without_downsample.png') ``` # Table of contents [1. Install Monk](#1) [2. Block basic Information](#2) - [2.1) Visual structure](#2-1) - [2.2) Layers in Branches](#2-2) [3) Creating Block using monk visual debugger](#3) - [3.1) Create the first branch](#3-1) - [3.2) Create the second branch](#3-2) - [3.3) Merge the branches](#3-3) - [3.4) Debug the merged network](#3-4) - [3.5) Compile the network](#3-5) - [3.6) Visualize the network](#3-6) - [3.7) Run data through the network](#3-7) [4) Creating Block Using MONK one line API call](#4) - [Mxnet Backend](#4-1) - [Pytorch Backend](#4-2) - [Keras Backend](#4-3) [5) Appendix](#5) - [Study Material](#5-1) - [Creating block using traditional Mxnet](#5-2) - [Creating block using traditional Pytorch](#5-3) - [Creating block using traditional Keras](#5-4) <a id='0'></a> # Install Monk ## Using pip (Recommended) - colab (gpu) - All bakcends: `pip install -U monk-colab` - kaggle (gpu) - All backends: `pip install -U monk-kaggle` - cuda 10.2 - All backends: `pip install -U monk-cuda102` - Gluon bakcned: `pip install -U monk-gluon-cuda102` - Pytorch backend: `pip install -U monk-pytorch-cuda102` - Keras backend: `pip install -U monk-keras-cuda102` - cuda 10.1 - All backend: `pip install -U monk-cuda101` - Gluon bakcned: `pip install -U monk-gluon-cuda101` - Pytorch backend: `pip install -U monk-pytorch-cuda101` - Keras backend: `pip install -U monk-keras-cuda101` - cuda 10.0 - All backend: `pip install -U monk-cuda100` - Gluon bakcned: `pip install -U monk-gluon-cuda100` - Pytorch backend: `pip install -U monk-pytorch-cuda100` - Keras backend: `pip install -U monk-keras-cuda100` - cuda 9.2 - All backend: `pip install -U monk-cuda92` - Gluon bakcned: `pip install -U monk-gluon-cuda92` - Pytorch backend: `pip install -U monk-pytorch-cuda92` - Keras backend: `pip install -U monk-keras-cuda92` - cuda 9.0 - All backend: `pip install -U monk-cuda90` - Gluon bakcned: `pip install -U monk-gluon-cuda90` - Pytorch backend: `pip install -U monk-pytorch-cuda90` - Keras backend: `pip install -U monk-keras-cuda90` - cpu - All backend: `pip install -U monk-cpu` - Gluon bakcned: `pip install -U monk-gluon-cpu` - Pytorch backend: `pip install -U monk-pytorch-cpu` - Keras backend: `pip install -U monk-keras-cpu` ## Install Monk Manually (Not recommended) ### Step 1: Clone the library - git clone https://github.com/Tessellate-Imaging/monk_v1.git ### Step 2: Install requirements - Linux - Cuda 9.0 - `cd monk_v1/installation/Linux && pip install -r requirements_cu90.txt` - Cuda 9.2 - `cd monk_v1/installation/Linux && pip install -r requirements_cu92.txt` - Cuda 10.0 - `cd 
monk_v1/installation/Linux && pip install -r requirements_cu100.txt` - Cuda 10.1 - `cd monk_v1/installation/Linux && pip install -r requirements_cu101.txt` - Cuda 10.2 - `cd monk_v1/installation/Linux && pip install -r requirements_cu102.txt` - CPU (Non gpu system) - `cd monk_v1/installation/Linux && pip install -r requirements_cpu.txt` - Windows - Cuda 9.0 (Experimental support) - `cd monk_v1/installation/Windows && pip install -r requirements_cu90.txt` - Cuda 9.2 (Experimental support) - `cd monk_v1/installation/Windows && pip install -r requirements_cu92.txt` - Cuda 10.0 (Experimental support) - `cd monk_v1/installation/Windows && pip install -r requirements_cu100.txt` - Cuda 10.1 (Experimental support) - `cd monk_v1/installation/Windows && pip install -r requirements_cu101.txt` - Cuda 10.2 (Experimental support) - `cd monk_v1/installation/Windows && pip install -r requirements_cu102.txt` - CPU (Non gpu system) - `cd monk_v1/installation/Windows && pip install -r requirements_cpu.txt` - Mac - CPU (Non gpu system) - `cd monk_v1/installation/Mac && pip install -r requirements_cpu.txt` - Misc - Colab (GPU) - `cd monk_v1/installation/Misc && pip install -r requirements_colab.txt` - Kaggle (GPU) - `cd monk_v1/installation/Misc && pip install -r requirements_kaggle.txt` ### Step 3: Add to system path (Required for every terminal or kernel run) - `import sys` - `sys.path.append("monk_v1/");` # Imports ``` # Common import numpy as np import math import netron from collections import OrderedDict from functools import partial #Using mxnet-gluon backend # When installed using pip from monk.gluon_prototype import prototype # When installed manually (Uncomment the following) #import os #import sys #sys.path.append("monk_v1/"); #sys.path.append("monk_v1/monk/"); #from monk.gluon_prototype import prototype ``` <a id='2'></a> # Block Information <a id='2_1'></a> ## Visual structure ``` from IPython.display import Image Image(filename='imgs/resnet_v2_bottleneck_without_downsample.png') ``` <a id='2_2'></a> ## Layers in Branches - Number of branches: 2 - Common Elements - batchnorm -> relu - Branch 1 - identity - Branch 2 - conv_1x1 -> batchnorm -> relu -> conv_3x3 -> batchnorm -> relu -> conv1x1 - Branches merged using - Elementwise addition (See Appendix to read blogs on resnets) <a id='3'></a> # Creating Block using monk debugger ``` # Imports and setup a project # To use pytorch backend - replace gluon_prototype with pytorch_prototype # To use keras backend - replace gluon_prototype with keras_prototype from monk.gluon_prototype import prototype # Create a sample project gtf = prototype(verbose=1); gtf.Prototype("sample-project-1", "sample-experiment-1"); ``` <a id='3-1'></a> ## Create the first branch ``` def first_branch(): network = []; network.append(gtf.identity()); return network; # Debug the branch branch_1 = first_branch() network = []; network.append(branch_1); gtf.debug_custom_model_design(network); ``` <a id='3-2'></a> ## Create the second branch ``` def second_branch(output_channels=128, stride=1): network = []; # Bottleneck convolution network.append(gtf.convolution(output_channels=output_channels//4, kernel_size=1, stride=stride)); network.append(gtf.batch_normalization()); network.append(gtf.relu()); #Bottleneck convolution network.append(gtf.convolution(output_channels=output_channels//4, kernel_size=1, stride=stride)); network.append(gtf.batch_normalization()); network.append(gtf.relu()); #Normal convolution network.append(gtf.convolution(output_channels=output_channels, 
kernel_size=1, stride=1)); return network; # Debug the branch branch_2 = second_branch(output_channels=128, stride=1) network = []; network.append(branch_2); gtf.debug_custom_model_design(network); ``` <a id='3-3'></a> ## Merge the branches ``` def final_block(output_channels=128, stride=1): network = []; #Common Elements network.append(gtf.batch_normalization()); network.append(gtf.relu()); #Create subnetwork and add branches subnetwork = []; branch_1 = first_branch() branch_2 = second_branch(output_channels=output_channels, stride=stride) subnetwork.append(branch_1); subnetwork.append(branch_2); # Add merging element subnetwork.append(gtf.add()); # Add the subnetwork network.append(subnetwork) return network; ``` <a id='3-4'></a> ## Debug the merged network ``` final = final_block(output_channels=64, stride=1) network = []; network.append(final); gtf.debug_custom_model_design(network); ``` <a id='3-5'></a> ## Compile the network ``` gtf.Compile_Network(network, data_shape=(64, 224, 224), use_gpu=False); ``` <a id='3-6'></a> ## Run data through the network ``` import mxnet as mx x = np.zeros((1, 64, 224, 224)); x = mx.nd.array(x); y = gtf.system_dict["local"]["model"].forward(x); print(x.shape, y.shape) ``` <a id='3-7'></a> ## Visualize network using netron ``` gtf.Visualize_With_Netron(data_shape=(64, 224, 224)) ``` <a id='4'></a> # Creating Using MONK LOW code API <a id='4-1'></a> ## Mxnet backend ``` from monk.gluon_prototype import prototype gtf = prototype(verbose=1); gtf.Prototype("sample-project-1", "sample-experiment-1"); network = []; # Single line addition of blocks network.append(gtf.resnet_v2_bottleneck_block(output_channels=64, downsample=False)); gtf.Compile_Network(network, data_shape=(64, 224, 224), use_gpu=False); ``` <a id='4-2'></a> ## Pytorch backend - Only the import changes ``` #Change gluon_prototype to pytorch_prototype from monk.pytorch_prototype import prototype gtf = prototype(verbose=1); gtf.Prototype("sample-project-1", "sample-experiment-1"); network = []; # Single line addition of blocks network.append(gtf.resnet_v2_bottleneck_block(output_channels=64, downsample=False)); gtf.Compile_Network(network, data_shape=(64, 224, 224), use_gpu=False); ``` <a id='4-3'></a> ## Keras backend - Only the import changes ``` #Change gluon_prototype to keras_prototype from monk.keras_prototype import prototype gtf = prototype(verbose=1); gtf.Prototype("sample-project-1", "sample-experiment-1"); network = []; # Single line addition of blocks network.append(gtf.resnet_v2_bottleneck_block(output_channels=64, downsample=False)); gtf.Compile_Network(network, data_shape=(64, 224, 224), use_gpu=False); ``` <a id='5'></a> # Appendix <a id='5-1'></a> ## Study links - https://towardsdatascience.com/residual-blocks-building-blocks-of-resnet-fd90ca15d6ec - https://medium.com/@MaheshNKhatri/resnet-block-explanation-with-a-terminology-deep-dive-989e15e3d691 - https://medium.com/analytics-vidhya/understanding-and-implementation-of-residual-networks-resnets-b80f9a507b9c - https://hackernoon.com/resnet-block-level-design-with-deep-learning-studio-part-1-727c6f4927ac <a id='5-2'></a> ## Creating block using traditional Mxnet - Code credits - https://mxnet.incubator.apache.org/ ``` # Traditional-Mxnet-gluon import mxnet as mx from mxnet.gluon import nn from mxnet.gluon.nn import HybridBlock, BatchNorm from mxnet.gluon.contrib.nn import HybridConcurrent, Identity from mxnet import gluon, init, nd def _conv3x3(channels, stride, in_channels): return nn.Conv2D(channels, kernel_size=3, 
strides=stride, padding=1, use_bias=False, in_channels=in_channels) class ResnetBlockV1(HybridBlock): def __init__(self, channels, stride, in_channels=0, **kwargs): super(ResnetBlockV1, self).__init__(**kwargs) #Common Elements self.bn0 = nn.BatchNorm(); self.relu0 = nn.Activation('relu'); #Branch - 1 #Identity # Branch - 2 self.body = nn.HybridSequential(prefix='') self.body.add(nn.Conv2D(channels//4, kernel_size=1, strides=stride, use_bias=False, in_channels=in_channels)) self.body.add(nn.BatchNorm()) self.body.add(nn.Activation('relu')) self.body.add(_conv3x3(channels//4, stride, in_channels)) self.body.add(nn.BatchNorm()) self.body.add(nn.Activation('relu')) self.body.add(nn.Conv2D(channels, kernel_size=1, strides=stride, use_bias=False, in_channels=in_channels)) def hybrid_forward(self, F, x): x = self.bn0(x); x = self.relu0(x); residual = x x = self.body(x) x = residual+x return x # Invoke the block block = ResnetBlockV1(64, 1) # Initialize network and load block on machine ctx = [mx.cpu()]; block.initialize(init.Xavier(), ctx = ctx); block.collect_params().reset_ctx(ctx) block.hybridize() # Run data through network x = np.zeros((1, 64, 224, 224)); x = mx.nd.array(x); y = block.forward(x); print(x.shape, y.shape) # Export Model to Load on Netron block.export("final", epoch=0); netron.start("final-symbol.json", port=8082) ``` <a id='5-3'></a> ## Creating block using traditional Pytorch - Code credits - https://pytorch.org/ ``` # Traiditional-Pytorch import torch from torch import nn from torch.jit.annotations import List import torch.nn.functional as F def conv3x3(in_planes, out_planes, stride=1, groups=1, dilation=1): """3x3 convolution with padding""" return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, padding=dilation, groups=groups, bias=False, dilation=dilation) def conv1x1(in_planes, out_planes, stride=1): """1x1 convolution""" return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=False) class ResnetBottleNeckBlock(nn.Module): expansion = 1 __constants__ = ['downsample'] def __init__(self, inplanes, planes, stride=1, groups=1, base_width=64, dilation=1, norm_layer=None): super(ResnetBottleNeckBlock, self).__init__() norm_layer = nn.BatchNorm2d #Common elements self.bn0 = norm_layer(inplanes); self.relu0 = nn.ReLU(inplace=True); # Branch - 1 #Identity # Branch - 2 self.conv1 = conv1x1(inplanes, planes//4, stride) self.bn1 = norm_layer(planes//4) self.relu1 = nn.ReLU(inplace=True) self.conv2 = conv3x3(planes//4, planes//4, stride) self.bn2 = norm_layer(planes//4) self.relu2 = nn.ReLU(inplace=True) self.conv3 = conv1x1(planes//4, planes) self.stride = stride self.relu = nn.ReLU(inplace=True) def forward(self, x): x = self.bn0(x); x = self.relu0(x); identity = x out = self.conv1(x) out = self.bn1(out) out = self.relu1(out) out = self.conv2(out) out = self.bn2(out) out = self.relu2(out) out = self.conv3(out) out += identity return out # Invoke the block block = ResnetBottleNeckBlock(64, 64, stride=1); # Initialize network and load block on machine layers = [] layers.append(block); net = nn.Sequential(*layers); # Run data through network x = torch.randn(1, 64, 224, 224) y = net(x) print(x.shape, y.shape); # Export Model to Load on Netron torch.onnx.export(net, # model being run x, # model input (or a tuple for multiple inputs) "model.onnx", # where to save the model (can be a file or file-like object) export_params=True, # store the trained parameter weights inside the model file opset_version=10, # the ONNX version to export the model to 
do_constant_folding=True, # whether to execute constant folding for optimization input_names = ['input'], # the model's input names output_names = ['output'], # the model's output names dynamic_axes={'input' : {0 : 'batch_size'}, # variable lenght axes 'output' : {0 : 'batch_size'}}) netron.start('model.onnx', port=9998); ``` <a id='5-4'></a> ## Creating block using traditional Keras - Code credits: https://keras.io/ ``` # Traditional-Keras import keras import keras.layers as kla import keras.models as kmo import tensorflow as tf from keras.models import Model backend = 'channels_last' from keras import layers def resnet_conv_block(input_tensor, kernel_size, filters, stage, block, strides=(1, 1)): filters1, filters2, filters3 = filters bn_axis = 3 conv_name_base = 'res' + str(stage) + block + '_branch' bn_name_base = 'bn' + str(stage) + block + '_branch' #Common Elements start = layers.BatchNormalization(axis=bn_axis, name=bn_name_base + '0a')(input_tensor) start = layers.Activation('relu')(start) # Branch - 1 # Identity shortcut = start # Branch - 2 x = layers.Conv2D(filters1, (1, 1), strides=strides, kernel_initializer='he_normal', name=conv_name_base + '2a')(start) x = layers.BatchNormalization(axis=bn_axis, name=bn_name_base + '2a')(x) x = layers.Activation('relu')(x) x = layers.Conv2D(filters2, (3, 3), strides=strides, kernel_initializer='he_normal', name=conv_name_base + '2b', padding="same")(x) x = layers.BatchNormalization(axis=bn_axis, name=bn_name_base + '2b')(x) x = layers.Activation('relu')(x) x = layers.Conv2D(filters3, (1, 1), kernel_initializer='he_normal', name=conv_name_base + '2c')(x); x = layers.add([x, shortcut]) x = layers.Activation('relu')(x) return x def create_model(input_shape, kernel_size, filters, stage, block): img_input = layers.Input(shape=input_shape); x = resnet_conv_block(img_input, kernel_size, filters, stage, block) return Model(img_input, x); # Invoke the block kernel_size=3; filters=[16, 16, 64]; input_shape=(224, 224, 64); model = create_model(input_shape, kernel_size, filters, 0, "0"); # Run data through network x = tf.placeholder(tf.float32, shape=(1, 224, 224, 64)) y = model(x) print(x.shape, y.shape) # Export Model to Load on Netron model.save("final.h5"); netron.start("final.h5", port=8082) ``` # Goals Completed ### 1. Learn to implement Resnet V2 Bottleneck Block (Type - 1) using monk - Monk's Keras - Monk's Pytorch - Monk's Mxnet ### 2. Use network Monk's debugger to create complex blocks ### 3. Understand how syntactically different it is to implement the same using - Traditional Keras - Traditional Pytorch - Traditional Mxnet
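As a closing illustration (not taken from the Monk examples above) of why the 1x1 -> 3x3 -> 1x1 bottleneck layout is used at all, the snippet below counts parameters for a bottleneck stack versus two plain 3x3 convolutions at the same width; the channel count of 64 matches the block compiled earlier and is otherwise arbitrary.

```
# Parameter-count comparison: bottleneck (1x1 -> 3x3 -> 1x1) vs. two plain 3x3 convs.
import torch.nn as nn

channels = 64
bottleneck = nn.Sequential(
    nn.Conv2d(channels, channels // 4, kernel_size=1, bias=False),
    nn.Conv2d(channels // 4, channels // 4, kernel_size=3, padding=1, bias=False),
    nn.Conv2d(channels // 4, channels, kernel_size=1, bias=False),
)
plain = nn.Sequential(
    nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
    nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
)

def n_params(module):
    return sum(p.numel() for p in module.parameters())

print('bottleneck params:', n_params(bottleneck))   # 4352
print('plain 3x3-3x3 params:', n_params(plain))     # 73728, roughly 17x more
```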
# Experiments comparing the performance of traditional pooling operations and entropy pooling within a shallow neural network and Lenet. The experiments use cifar10 and cifar100. ``` %matplotlib inline import torch import torchvision import torchvision.transforms as transforms transform = transforms.Compose( [transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]) trainset = torchvision.datasets.CIFAR100(root='./data', train=True, download=True, transform=transform) trainloader = torch.utils.data.DataLoader(trainset, batch_size=4, shuffle=True, num_workers=8) testset = torchvision.datasets.CIFAR100(root='./data', train=False, download=True, transform=transform) testloader = torch.utils.data.DataLoader(testset, batch_size=4, shuffle=False, num_workers=8) classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck') import math import torch import torch.nn as nn import torch.nn.functional as F from torch.nn.modules.utils import _pair, _quadruple import time from skimage.measure import shannon_entropy from scipy import stats from torch.nn.modules.utils import _pair, _quadruple import time from skimage.measure import shannon_entropy from scipy import stats import numpy as np class EntropyPool2d(nn.Module): def __init__(self, kernel_size=3, stride=1, padding=0, same=False, entr='high'): super(EntropyPool2d, self).__init__() self.k = _pair(kernel_size) self.stride = _pair(stride) self.padding = _quadruple(padding) # convert to l, r, t, b self.same = same self.entr = entr def _padding(self, x): if self.same: ih, iw = x.size()[2:] if ih % self.stride[0] == 0: ph = max(self.k[0] - self.stride[0], 0) else: ph = max(self.k[0] - (ih % self.stride[0]), 0) if iw % self.stride[1] == 0: pw = max(self.k[1] - self.stride[1], 0) else: pw = max(self.k[1] - (iw % self.stride[1]), 0) pl = pw // 2 pr = pw - pl pt = ph // 2 pb = ph - pt padding = (pl, pr, pt, pb) else: padding = self.padding return padding def forward(self, x): # using existing pytorch functions and tensor ops so that we get autograd, # would likely be more efficient to implement from scratch at C/Cuda level start = time.time() x = F.pad(x, self._padding(x), mode='reflect') x_detached = x.cpu().detach() x_unique, x_indices, x_inverse, x_counts = np.unique(x_detached, return_index=True, return_inverse=True, return_counts=True) freq = torch.FloatTensor([x_counts[i] / len(x_inverse) for i in x_inverse]).cuda() x_probs = freq.view(x.shape) x_probs = x_probs.unfold(2, self.k[0], self.stride[0]).unfold(3, self.k[1], self.stride[1]) x_probs = x_probs.contiguous().view(x_probs.size()[:4] + (-1,)) if self.entr is 'high': x_probs, indices = torch.min(x_probs.cuda(), dim=-1) elif self.entr is 'low': x_probs, indices = torch.max(x_probs.cuda(), dim=-1) else: raise Exception('Unknown entropy mode: {}'.format(self.entr)) x = x.unfold(2, self.k[0], self.stride[0]).unfold(3, self.k[1], self.stride[1]) x = x.contiguous().view(x.size()[:4] + (-1,)) indices = indices.view(indices.size() + (-1,)) pool = torch.gather(input=x, dim=-1, index=indices) return pool.squeeze(-1) import torch.nn as nn import torch.nn.functional as F import torch.optim as optim import time from sklearn.metrics import f1_score MAX = 'max' AVG = 'avg' HIGH_ENTROPY = 'high_entr' LOW_ENTROPY = 'low_entr' class Net1Pool(nn.Module): def __init__(self, num_classes=10, pooling=MAX): super(Net1Pool, self).__init__() self.conv1 = nn.Conv2d(3, 30, 5) if pooling is MAX: self.pool = nn.MaxPool2d(2, 2) elif pooling is AVG: self.pool = nn.AvgPool2d(2, 2) 
elif pooling is HIGH_ENTROPY: self.pool = EntropyPool2d(2, 2, entr='high') elif pooling is LOW_ENTROPY: self.pool = EntropyPool2d(2, 2, entr='low') self.fc0 = nn.Linear(30 * 14 * 14, num_classes) def forward(self, x): x = self.pool(F.relu(self.conv1(x))) x = x.view(-1, 30 * 14 * 14) x = F.relu(self.fc0(x)) return x class Net2Pool(nn.Module): def __init__(self, num_classes=10, pooling=MAX): super(Net2Pool, self).__init__() self.conv1 = nn.Conv2d(3, 50, 5, 1) self.conv2 = nn.Conv2d(50, 50, 5, 1) if pooling is MAX: self.pool = nn.MaxPool2d(2, 2) elif pooling is AVG: self.pool = nn.AvgPool2d(2, 2) elif pooling is HIGH_ENTROPY: self.pool = EntropyPool2d(2, 2, entr='high') elif pooling is LOW_ENTROPY: self.pool = EntropyPool2d(2, 2, entr='low') self.fc1 = nn.Linear(5*5*50, 500) self.fc2 = nn.Linear(500, num_classes) def forward(self, x): x = F.relu(self.conv1(x)) x = self.pool(x) x = F.relu(self.conv2(x)) x = self.pool(x) x = x.view(-1, 5*5*50) x = F.relu(self.fc1(x)) x = self.fc2(x) return x def configure_net(net, device): net.to(device) criterion = nn.CrossEntropyLoss() optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9) return net, optimizer, criterion def train(net, optimizer, criterion, trainloader, device, epochs=10, logging=2000): for epoch in range(epochs): running_loss = 0.0 for i, data in enumerate(trainloader, 0): start = time.time() inputs, labels = data inputs, labels = inputs.to(device), labels.to(device) optimizer.zero_grad() outputs = net(inputs) loss = criterion(outputs, labels) loss.backward() optimizer.step() running_loss += loss.item() if i % logging == logging - 1: print('[%d, %5d] loss: %.3f duration: %.5f' % (epoch + 1, i + 1, running_loss / logging, time.time() - start)) running_loss = 0.0 print('Finished Training') def test(net, testloader, device): correct = 0 total = 0 predictions = [] l = [] with torch.no_grad(): for data in testloader: images, labels = data images, labels = images.to(device), labels.to(device) outputs = net(images) _, predicted = torch.max(outputs.data, 1) total += labels.size(0) correct += (predicted == labels).sum().item() predictions.extend(predicted.cpu().numpy()) l.extend(labels.cpu().numpy()) print('Accuracy: {}'.format(100 * correct / total)) epochs = 10 logging = 15000 num_classes = 100 print('- - - - - - - - -- - - - 2 pool - - - - - - - - - - - - - - - -') print('- - - - - - - - -- - - - MAX - - - - - - - - - - - - - - - -') device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") net, optimizer, criterion = configure_net(Net2Pool(num_classes=num_classes, pooling=MAX), device) train(net, optimizer, criterion, trainloader, device, epochs=epochs, logging=logging) test(net, testloader, device) print('- - - - - - - - -- - - - AVG - - - - - - - - - - - - - - - -') net, optimizer, criterion = configure_net(Net2Pool(num_classes=num_classes, pooling=AVG), device) train(net, optimizer, criterion, trainloader, device, epochs=epochs, logging=logging) test(net, testloader, device) print('- - - - - - - - -- - - - HIGH - - - - - - - - - - - - - - - -') net, optimizer, criterion = configure_net(Net2Pool(num_classes=num_classes, pooling=HIGH_ENTROPY), device) train(net, optimizer, criterion, trainloader, device, epochs=epochs, logging=logging) test(net, testloader, device) print('- - - - - - - - -- - - - LOW - - - - - - - - - - - - - - - -') net, optimizer, criterion = configure_net(Net2Pool(num_classes=num_classes, pooling=LOW_ENTROPY), device) train(net, optimizer, criterion, trainloader, device, epochs=epochs, logging=logging) 
test(net, testloader, device) print('- - - - - - - - -- - - - 1 pool - - - - - - - - - - - - - - - -') print('- - - - - - - - -- - - - MAX - - - - - - - - - - - - - - - -') device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") net, optimizer, criterion = configure_net(Net1Pool(num_classes=num_classes, pooling=MAX), device) train(net, optimizer, criterion, trainloader, device, epochs=epochs, logging=logging) test(net, testloader, device) print('- - - - - - - - -- - - - AVG - - - - - - - - - - - - - - - -') net, optimizer, criterion = configure_net(Net1Pool(num_classes=num_classes, pooling=AVG), device) train(net, optimizer, criterion, trainloader, device, epochs=epochs, logging=logging) test(net, testloader, device) print('- - - - - - - - -- - - - HIGH - - - - - - - - - - - - - - - -') net, optimizer, criterion = configure_net(Net1Pool(num_classes=num_classes, pooling=HIGH_ENTROPY), device) train(net, optimizer, criterion, trainloader, device, epochs=epochs, logging=logging) test(net, testloader, device) print('- - - - - - - - -- - - - LOW - - - - - - - - - - - - - - - -') net, optimizer, criterion = configure_net(Net1Pool(num_classes=num_classes, pooling=LOW_ENTROPY), device) train(net, optimizer, criterion, trainloader, device, epochs=epochs, logging=logging) test(net, testloader, device) ```
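The `test` helper above already collects `predictions` and the true labels, but only reports accuracy even though `f1_score` is imported. Below is a small variant (a sketch, not part of the original experiments) that also reports the macro F1 score, which summarises per-class performance across the 100 classes:

```
def test_with_f1(net, testloader, device):
    correct, total = 0, 0
    predictions, targets = [], []
    with torch.no_grad():
        for images, labels in testloader:
            images, labels = images.to(device), labels.to(device)
            outputs = net(images)
            _, predicted = torch.max(outputs.data, 1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()
            predictions.extend(predicted.cpu().numpy())
            targets.extend(labels.cpu().numpy())
    print('Accuracy: {:.2f}'.format(100 * correct / total))
    print('Macro F1: {:.4f}'.format(f1_score(targets, predictions, average='macro')))

# Usage, e.g. for the last trained model:
# test_with_f1(net, testloader, device)
```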
# Real Estate Price Prediction

```
import pandas as pd

df = pd.read_csv("data.csv")
df.head()
df['CHAS'].value_counts()
df.info()
df.describe()

%matplotlib inline
import matplotlib.pyplot as plt
df.hist(bins=50, figsize=(20,15))
```

## train_test_split

```
import numpy as np

def split_train_test(data, test_ratio):
    np.random.seed(42)
    shuffled = np.random.permutation(len(data))
    test_set_size = int(len(data) * test_ratio)
    test_indices = shuffled[:test_set_size]
    train_indices = shuffled[test_set_size:]
    return data.iloc[train_indices], data.iloc[test_indices]

train_set, test_set = split_train_test(df, 0.2)
print(f"The length of the train dataset is: {len(train_set)}")
print(f"The length of the test dataset is: {len(test_set)}")

def data_percent_allocation(train_set, test_set):
    total = len(df)
    train_percent = round((len(train_set)/total) * 100)
    test_percent = round((len(test_set)/total) * 100)
    return train_percent, test_percent

data_percent_allocation(train_set, test_set)
```

## train_test_split from sklearn

```
from sklearn.model_selection import train_test_split

train_set, test_set = train_test_split(df, test_size = 0.2, random_state = 42)
print(f"The length of the train dataset is: {len(train_set)}")
print(f"The length of the test dataset is: {len(test_set)}")

from sklearn.model_selection import StratifiedShuffleSplit

split = StratifiedShuffleSplit(n_splits = 1, test_size = 0.2, random_state = 42)
for train_index, test_index in split.split(df, df['CHAS']):
    strat_train_set = df.loc[train_index]
    strat_test_set = df.loc[test_index]

strat_test_set['CHAS'].value_counts()
test_set['CHAS'].value_counts()
strat_train_set['CHAS'].value_counts()
train_set['CHAS'].value_counts()
```

### The stratified split keeps the ratio of zeros and ones in CHAS balanced across train and test

```
95/7
376/28
df = strat_train_set.copy()
```

## Correlations

```
from pandas.plotting import scatter_matrix

attributes = ["MEDV", "RM", "ZN", "LSTAT"]
scatter_matrix(df[attributes], figsize = (12,8))
df.plot(kind="scatter", x="RM", y="MEDV", alpha=1)
```

### Trying out attribute combinations

```
df["TAXRM"] = df["TAX"]/df["RM"]
df.head()
corr_matrix = df.corr()
corr_matrix['MEDV'].sort_values(ascending=False)
# 1 means a strong positive correlation and -1 a strong negative one.
# For example, as RM increases, the predicted MEDV tends to increase as well.
df.plot(kind="scatter", x="TAXRM", y="MEDV", alpha=1)
df = strat_train_set.drop("MEDV", axis=1)
df_labels = strat_train_set["MEDV"].copy()
```

## Pipeline

```
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.impute import SimpleImputer

my_pipeline = Pipeline([
    ('imputer', SimpleImputer(strategy="median")),
    ('std_scaler', StandardScaler()),
])

df_numpy = my_pipeline.fit_transform(df)
df_numpy  # NumPy array of df, since the models take a NumPy array as input.
df_numpy.shape
```

## Model Selection

```
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor

# model = LinearRegression()
# model = DecisionTreeRegressor()
model = RandomForestRegressor()
model.fit(df_numpy, df_labels)

some_data = df.iloc[:5]
some_labels = df_labels.iloc[:5]
prepared_data = my_pipeline.transform(some_data)
model.predict(prepared_data)
list(some_labels)
```

## Evaluating the model

```
from sklearn.metrics import mean_squared_error

df_predictions = model.predict(df_numpy)
mse = mean_squared_error(df_labels, df_predictions)
rmse = np.sqrt(mse)
rmse
# from sklearn.metrics import accuracy_score
# accuracy_score(some_data, some_labels, normalize=False)
```

## Cross Validation

```
from sklearn.model_selection import cross_val_score

scores = cross_val_score(model, df_numpy, df_labels, scoring="neg_mean_squared_error", cv=10)
rmse_scores = np.sqrt(-scores)
rmse_scores

def print_scores(scores):
    print("Scores:", scores)
    print("\nMean:", scores.mean())
    print("\nStandard deviation:", scores.std())

print_scores(rmse_scores)
```

### Saving Model

```
from joblib import dump, load
dump(model, 'final_model.joblib')
dump(model, 'final_model.sav')
```

## Testing model on test data

```
X_test = strat_test_set.drop("MEDV", axis=1)
Y_test = strat_test_set["MEDV"].copy()
X_test_prepared = my_pipeline.transform(X_test)
final_predictions = model.predict(X_test_prepared)
final_mse = mean_squared_error(Y_test, final_predictions)
final_rmse = np.sqrt(final_mse)
final_rmse
```
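The model is persisted above but never reloaded. As a short usage sketch (not in the original notebook), the saved estimator can be loaded back and used together with the already-fitted pipeline on raw rows; the two rows below are just placeholders taken from the test split.

```
from joblib import load

loaded_model = load('final_model.joblib')

# Run a couple of raw rows through the same fitted pipeline before predicting.
sample = strat_test_set.drop("MEDV", axis=1).iloc[:2]
sample_prepared = my_pipeline.transform(sample)
print(loaded_model.predict(sample_prepared))
print(list(strat_test_set["MEDV"].iloc[:2]))  # actual values for comparison
```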
# In-Place Waveform Library Updates This example notebook shows how one can update pulses data in-place without recompiling. © Raytheon BBN Technologies 2020 Set the `SAVE_WF_OFFSETS` flag in order that QGL will output a map of the waveform data within the compiled binary waveform library. ``` from QGL import * import QGL import os.path import pickle QGL.drivers.APS2Pattern.SAVE_WF_OFFSETS = True ``` Create the usual channel library with a couple of AWGs. ``` cl = ChannelLibrary(":memory:") q1 = cl.new_qubit("q1") aps2_1 = cl.new_APS2("BBNAPS1", address="192.168.5.101") aps2_2 = cl.new_APS2("BBNAPS2", address="192.168.5.102") dig_1 = cl.new_X6("X6_1", address=0) h1 = cl.new_source("Holz1", "HolzworthHS9000", "HS9004A-009-1", power=-30) h2 = cl.new_source("Holz2", "HolzworthHS9000", "HS9004A-009-2", power=-30) cl.set_control(q1, aps2_1, generator=h1) cl.set_measure(q1, aps2_2, dig_1.ch(1), generator=h2) cl.set_master(aps2_1, aps2_1.ch("m2")) cl["q1"].measure_chan.frequency = 0e6 cl.commit() ``` Compile a simple sequence. ``` mf = RabiAmp(cl["q1"], np.linspace(-1, 1, 11)) plot_pulse_files(mf, time=True) ``` Open the offsets file (in the same directory as the `.aps2` files, one per AWG slice.) ``` offset_f = os.path.join(os.path.dirname(mf), "Rabi-BBNAPS1.offsets") with open(offset_f, "rb") as FID: offsets = pickle.load(FID) offsets ``` Let's replace every single pulse with a fixed amplitude `Utheta` ``` pulses = {l: Utheta(q1, amp=0.1, phase=0) for l in offsets} wfm_f = os.path.join(os.path.dirname(mf), "Rabi-BBNAPS1.aps2") QGL.drivers.APS2Pattern.update_wf_library(wfm_f, pulses, offsets) ``` We see that the data in the file has been updated. ``` plot_pulse_files(mf, time=True) ``` ## Profiling How long does this take? ``` %timeit mf = RabiAmp(cl["q1"], np.linspace(-1, 1, 100)) ``` Getting the offsets is fast, and only needs to be done once ``` def get_offsets(): offset_f = os.path.join(os.path.dirname(mf), "Rabi-BBNAPS1.offsets") with open(offset_f, "rb") as FID: offsets = pickle.load(FID) return offsets %timeit offsets = get_offsets() %timeit pulses = {l: Utheta(q1, amp=0.1, phase=0) for l in offsets} wfm_f = os.path.join(os.path.dirname(mf), "Rabi-BBNAPS1.aps2") %timeit QGL.drivers.APS2Pattern.update_wf_library(wfm_f, pulses, offsets) # %timeit QGL.drivers.APS2Pattern.update_wf_library("/Users/growland/workspace/AWG/Rabi/Rabi-BBNAPS1.aps2", pulses, offsets) ``` Moral of the story: 300 ms for initial compilation, and roughly 1.3 ms for update_in_place.
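One way this could be used in practice (a sketch, not part of the original example): because an in-place update is orders of magnitude cheaper than a recompile, a parameter sweep can rewrite the waveform library between acquisitions. The amplitude values below are arbitrary, and the acquisition step is left as a comment.

```
# Hypothetical amplitude sweep: compile once, then only update the waveform data in place.
amplitudes = np.linspace(0.0, 0.5, 6)
for amp in amplitudes:
    pulses = {l: Utheta(q1, amp=amp, phase=0) for l in offsets}
    QGL.drivers.APS2Pattern.update_wf_library(wfm_f, pulses, offsets)
    # ... trigger the AWGs and acquire data here before the next update ...
```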
# End-to-end learning for music audio

- http://qiita.com/himono/items/a94969e35fa8d71f876c

```
# Download the data
wget http://mi.soi.city.ac.uk/datasets/magnatagatune/mp3.zip.001
wget http://mi.soi.city.ac.uk/datasets/magnatagatune/mp3.zip.002
wget http://mi.soi.city.ac.uk/datasets/magnatagatune/mp3.zip.003

# Concatenate the parts
cat data/mp3.zip.* > data/music.zip

# Unzip
unzip data/music.zip -d music
```

```
%matplotlib inline
import os
import matplotlib.pyplot as plt
```

## Loading the MP3 files

```
import numpy as np
from pydub import AudioSegment

def mp3_to_array(file):
    # MP3 => raw samples
    song = AudioSegment.from_mp3(file)
    song_arr = np.fromstring(song._data, np.int16)
    return song_arr

%ls data/music/1/ambient_teknology-phoenix-01-ambient_teknology-0-29.mp3
file = 'data/music/1/ambient_teknology-phoenix-01-ambient_teknology-0-29.mp3'
song = mp3_to_array(file)
plt.plot(song)
```

## Loading the song tag data

- Randomly sample 3000 songs
- Keep the 50 most frequently used tags
- Each song can carry several tags

```
import pandas as pd

tags_df = pd.read_csv('data/annotations_final.csv', delim_whitespace=True)
# Shuffle the whole table randomly
tags_df = tags_df.sample(frac=1)
# Use the first 3000 songs
tags_df = tags_df[:3000]
tags_df

top50_tags = tags_df.iloc[:, 1:189].sum().sort_values(ascending=False).index[:50].tolist()
y = tags_df[top50_tags].values
y
```

## Loading the audio data

- Get the file paths from mp3_path in tags_df
- Load each file as a numpy array with mp3_to_array()
- Reshape to (samples, features, channels)
- The waveform is one-dimensional, so channels is 1
- All training clips have the same length, so features should be identical (no padding needed)

```
files = tags_df.mp3_path.values
files = [os.path.join('data', 'music', x) for x in files]
X = np.array([mp3_to_array(file) for file in files])
X = X.reshape(X.shape[0], X.shape[1], 1)
X.shape
```

## Splitting into training and test data

```
from sklearn.model_selection import train_test_split

random_state = 42
train_x, test_x, train_y, test_y = train_test_split(X, y, test_size=0.2, random_state=random_state)
print(train_x.shape)
print(test_x.shape)
print(train_y.shape)
print(test_y.shape)

plt.plot(train_x[0])

np.save('train_x.npy', train_x)
np.save('test_x.npy', test_x)
np.save('train_y.npy', train_y)
np.save('test_y.npy', test_y)
```

## Training

```
import numpy as np
from keras.models import Model
from keras.layers import Dense, Flatten, Input, Conv1D, MaxPooling1D
from keras.callbacks import CSVLogger, ModelCheckpoint

train_x = np.load('train_x.npy')
train_y = np.load('train_y.npy')
test_x = np.load('test_x.npy')
test_y = np.load('test_y.npy')
print(train_x.shape)
print(train_y.shape)
print(test_x.shape)
print(test_y.shape)

features = train_x.shape[1]
x_inputs = Input(shape=(features, 1), name='x_inputs')
x = Conv1D(128, 256, strides=256, padding='valid', activation='relu')(x_inputs)  # strided conv
x = Conv1D(32, 8, activation='relu')(x)
x = MaxPooling1D(4)(x)
x = Conv1D(32, 8, activation='relu')(x)
x = MaxPooling1D(4)(x)
x = Conv1D(32, 8, activation='relu')(x)
x = MaxPooling1D(4)(x)
x = Conv1D(32, 8, activation='relu')(x)
x = MaxPooling1D(4)(x)
x = Flatten()(x)
x = Dense(100, activation='relu')(x)
x_outputs = Dense(50, activation='sigmoid', name='x_outputs')(x)
model = Model(inputs=x_inputs, outputs=x_outputs)
model.compile(optimizer='adam', loss='categorical_crossentropy')

logger = CSVLogger('history.log')
checkpoint = ModelCheckpoint(
    'model.{epoch:02d}-{val_loss:.3f}.h5',
    monitor='val_loss',
    verbose=1,
    save_best_only=True,
    mode='auto')

model.fit(train_x, train_y, batch_size=600, epochs=50,
          validation_data=[test_x, test_y],
          callbacks=[logger, checkpoint])
```

## Prediction

- The tagger outputs multiple tags per song, so evaluate() alone may not be a suitable metric?
``` import numpy as np from keras.models import load_model from sklearn.metrics import roc_auc_score test_x = np.load('test_x.npy') test_y = np.load('test_y.npy') model = load_model('model.22-9.187-0.202.h5') pred_y = model.predict(test_x, batch_size=50) print(roc_auc_score(test_y, pred_y)) print(model.evaluate(test_x, test_y)) ```
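Since each clip can carry several of the 50 tags, a single averaged number can hide large differences between tags. The short sketch below (not part of the original post) breaks the ROC AUC down per tag; it assumes `top50_tags` from the tag-loading step is still in scope and skips tags that do not occur in the test split.

```
# Per-tag ROC AUC (sketch): see which tags the model separates well.
per_tag_auc = []
for i, tag in enumerate(top50_tags):
    if len(np.unique(test_y[:, i])) < 2:  # tag absent (or always present) in the test split
        continue
    per_tag_auc.append((tag, roc_auc_score(test_y[:, i], pred_y[:, i])))

for tag, auc in sorted(per_tag_auc, key=lambda x: -x[1])[:10]:
    print('{:<20s} {:.3f}'.format(tag, auc))
```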
Mount my google drive, where I stored the dataset. ``` from google.colab import drive drive.mount('/content/drive') ``` **Download dependencies** ``` !pip3 install sklearn matplotlib GPUtil !pip3 install torch torchvision ``` **Download Data** In order to acquire the dataset please navigate to: https://ieee-dataport.org/documents/cervigram-image-dataset Unzip the dataset into the folder "dataset". For your environment, please adjust the paths accordingly. ``` !rm -vrf "dataset" !mkdir "dataset" # !cp -r "/content/drive/My Drive/Studiu doctorat leziuni cervicale/cervigram-image-dataset-v2.zip" "dataset/cervigram-image-dataset-v2.zip" !cp -r "cervigram-image-dataset-v2.zip" "dataset/cervigram-image-dataset-v2.zip" !unzip "dataset/cervigram-image-dataset-v2.zip" -d "dataset" ``` **Constants** For your environment, please modify the paths accordingly. ``` # TRAIN_PATH = '/content/dataset/data/train/' # TEST_PATH = '/content/dataset/data/test/' TRAIN_PATH = 'dataset/data/train/' TEST_PATH = 'dataset/data/test/' CROP_SIZE = 260 IMAGE_SIZE = 224 BATCH_SIZE = 100 ``` **Imports** ``` import torch as t import torchvision as tv import numpy as np import PIL as pil import matplotlib.pyplot as plt from torchvision.datasets import ImageFolder from torch.utils.data import DataLoader from torch.nn import Linear, BCEWithLogitsLoss import sklearn as sk import sklearn.metrics from os import listdir import time import random import GPUtil ``` **Memory Stats** ``` import GPUtil def memory_stats(): for gpu in GPUtil.getGPUs(): print("GPU RAM Free: {0:.0f}MB | Used: {1:.0f}MB | Util {2:3.0f}% | Total {3:.0f}MB".format(gpu.memoryFree, gpu.memoryUsed, gpu.memoryUtil*100, gpu.memoryTotal)) memory_stats() ``` **Deterministic Measurements** This statements help making the experiments reproducible by fixing the random seeds. Despite fixing the random seeds, experiments are usually not reproducible using different PyTorch releases, commits, platforms or between CPU and GPU executions. Please find more details in the PyTorch documentation: https://pytorch.org/docs/stable/notes/randomness.html ``` SEED = 0 t.manual_seed(SEED) t.cuda.manual_seed(SEED) t.backends.cudnn.deterministic = True t.backends.cudnn.benchmark = False np.random.seed(SEED) random.seed(SEED) ``` **Loading Data** The dataset is structured in multiple small folders of 7 images each. This generator iterates through the folders and returns the category and 7 paths: one for each image in the folder. The paths are ordered; the order is important since each folder contains 3 types of images, first 5 are with acetic acid solution and the last two are through a green lens and having iodine solution(a solution of a dark red color). ``` def sortByLastDigits(elem): chars = [c for c in elem if c.isdigit()] return 0 if len(chars) == 0 else int(''.join(chars)) def getImagesPaths(root_path): for class_folder in [root_path + f for f in listdir(root_path)]: category = int(class_folder[-1]) for case_folder in listdir(class_folder): case_folder_path = class_folder + '/' + case_folder + '/' img_files = [case_folder_path + file_name for file_name in listdir(case_folder_path)] yield category, sorted(img_files, key = sortByLastDigits) ``` We define 3 datasets, which load 3 kinds of images: natural images, images taken through a green lens and images where the doctor applied iodine solution (which gives a dark red color). Each dataset has dynamic and static transformations which could be applied to the data. 
The static transformations are applied on the initialization of the dataset, while the dynamic ones are applied when loading each batch of data. ``` class SimpleImagesDataset(t.utils.data.Dataset): def __init__(self, root_path, transforms_x_static = None, transforms_x_dynamic = None, transforms_y_static = None, transforms_y_dynamic = None): self.dataset = [] self.transforms_x = transforms_x_dynamic self.transforms_y = transforms_y_dynamic for category, img_files in getImagesPaths(root_path): for i in range(5): img = pil.Image.open(img_files[i]) if transforms_x_static != None: img = transforms_x_static(img) if transforms_y_static != None: category = transforms_y_static(category) self.dataset.append((img, category)) def __getitem__(self, i): x, y = self.dataset[i] if self.transforms_x != None: x = self.transforms_x(x) if self.transforms_y != None: y = self.transforms_y(y) return x, y def __len__(self): return len(self.dataset) class GreenLensImagesDataset(SimpleImagesDataset): def __init__(self, root_path, transforms_x_static = None, transforms_x_dynamic = None, transforms_y_static = None, transforms_y_dynamic = None): self.dataset = [] self.transforms_x = transforms_x_dynamic self.transforms_y = transforms_y_dynamic for category, img_files in getImagesPaths(root_path): # Only the green lens image img = pil.Image.open(img_files[-2]) if transforms_x_static != None: img = transforms_x_static(img) if transforms_y_static != None: category = transforms_y_static(category) self.dataset.append((img, category)) class RedImagesDataset(SimpleImagesDataset): def __init__(self, root_path, transforms_x_static = None, transforms_x_dynamic = None, transforms_y_static = None, transforms_y_dynamic = None): self.dataset = [] self.transforms_x = transforms_x_dynamic self.transforms_y = transforms_y_dynamic for category, img_files in getImagesPaths(root_path): # Only the green lens image img = pil.Image.open(img_files[-1]) if transforms_x_static != None: img = transforms_x_static(img) if transforms_y_static != None: category = transforms_y_static(category) self.dataset.append((img, category)) ``` **Preprocess Data** Convert pytorch tensor to numpy array. ``` def to_numpy(x): return x.cpu().detach().numpy() ``` Data transformations for the test and training sets. ``` norm_mean = [0.485, 0.456, 0.406] norm_std = [0.229, 0.224, 0.225] transforms_train = tv.transforms.Compose([ tv.transforms.RandomAffine(degrees = 45, translate = None, scale = (1., 2.), shear = 30), # tv.transforms.CenterCrop(CROP_SIZE), tv.transforms.Resize(IMAGE_SIZE), tv.transforms.RandomHorizontalFlip(), tv.transforms.ToTensor(), tv.transforms.Lambda(lambda t: t.cuda()), tv.transforms.Normalize(mean=norm_mean, std=norm_std) ]) transforms_test = tv.transforms.Compose([ # tv.transforms.CenterCrop(CROP_SIZE), tv.transforms.Resize(IMAGE_SIZE), tv.transforms.ToTensor(), tv.transforms.Normalize(mean=norm_mean, std=norm_std) ]) y_transform = tv.transforms.Lambda(lambda y: t.tensor(y, dtype=t.long, device = 'cuda:0')) ``` Initialize pytorch datasets and loaders for training and test. 
``` def create_loaders(dataset_class): dataset_train = dataset_class(TRAIN_PATH, transforms_x_dynamic = transforms_train, transforms_y_dynamic = y_transform) dataset_test = dataset_class(TEST_PATH, transforms_x_static = transforms_test, transforms_x_dynamic = tv.transforms.Lambda(lambda t: t.cuda()), transforms_y_dynamic = y_transform) loader_train = DataLoader(dataset_train, BATCH_SIZE, shuffle = True, num_workers = 0) loader_test = DataLoader(dataset_test, BATCH_SIZE, shuffle = False, num_workers = 0) return loader_train, loader_test, len(dataset_train), len(dataset_test) loader_train_simple_img, loader_test_simple_img, len_train, len_test = create_loaders(SimpleImagesDataset) ``` **Visualize Data** Load a few images so that we can see the effects of the data augmentation on the training set. ``` def plot_one_prediction(x, label, pred): x, label, pred = to_numpy(x), to_numpy(label), to_numpy(pred) x = np.transpose(x, [1, 2, 0]) if x.shape[-1] == 1: x = x.squeeze() x = x * np.array(norm_std) + np.array(norm_mean) plt.title(label, color = 'green' if label == pred else 'red') plt.imshow(x) def plot_predictions(imgs, labels, preds): fig = plt.figure(figsize = (20, 5)) for i in range(20): fig.add_subplot(2, 10, i + 1, xticks = [], yticks = []) plot_one_prediction(imgs[i], labels[i], preds[i]) # x, y = next(iter(loader_train_simple_img)) # plot_predictions(x, y, y) ``` **Model** Define a few models to experiment with. ``` def get_mobilenet_v2(): model = t.hub.load('pytorch/vision', 'mobilenet_v2', pretrained=True) model.classifier[1] = Linear(in_features=1280, out_features=4, bias=True) model = model.cuda() return model def get_vgg_19(): model = tv.models.vgg19(pretrained = True) model = model.cuda() model.classifier[6].out_features = 4 return model def get_res_next_101(): model = t.hub.load('facebookresearch/WSL-Images', 'resnext101_32x8d_wsl') model.fc.out_features = 4 model = model.cuda() return model def get_resnet_18(): model = tv.models.resnet18(pretrained = True) model.fc.out_features = 4 model = model.cuda() return model def get_dense_net(): model = tv.models.densenet121(pretrained = True) model.classifier.out_features = 4 model = model.cuda() return model class MobileNetV2_FullConv(t.nn.Module): def __init__(self): super().__init__() self.cnn = get_mobilenet_v2().features self.cnn[18] = t.nn.Sequential( tv.models.mobilenet.ConvBNReLU(320, 32, kernel_size=1), t.nn.Dropout2d(p = .7) ) self.fc = t.nn.Linear(32, 4) def forward(self, x): x = self.cnn(x) x = x.mean([2, 3]) x = self.fc(x); return x model_simple = t.nn.DataParallel(get_mobilenet_v2()) ``` **Train & Evaluate** Timer utility function. This is used to measure the execution speed. ``` time_start = 0 def timer_start(): global time_start time_start = time.time() def timer_end(): return time.time() - time_start ``` This function trains the network and evaluates it at the same time. It outputs the metrics recorded during the training for both train and test. We are measuring accuracy and the loss. The function also saves a checkpoint of the model every time the accuracy is improved. In the end we will have a checkpoint of the model which gave the best accuracy. 
``` def train_eval(optimizer, model, loader_train, loader_test, chekpoint_name, epochs): metrics = { 'losses_train': [], 'losses_test': [], 'acc_train': [], 'acc_test': [], 'prec_train': [], 'prec_test': [], 'rec_train': [], 'rec_test': [], 'f_score_train': [], 'f_score_test': [] } best_acc = 0 loss_fn = t.nn.CrossEntropyLoss() try: for epoch in range(epochs): timer_start() train_epoch_loss, train_epoch_acc, train_epoch_precision, train_epoch_recall, train_epoch_f_score = 0, 0, 0, 0, 0 test_epoch_loss, test_epoch_acc, test_epoch_precision, test_epoch_recall, test_epoch_f_score = 0, 0, 0, 0, 0 # Train model.train() for x, y in loader_train: y_pred = model.forward(x) loss = loss_fn(y_pred, y) loss.backward() optimizer.step() # memory_stats() optimizer.zero_grad() y_pred, y = to_numpy(y_pred), to_numpy(y) pred = y_pred.argmax(axis = 1) ratio = len(y) / len_train train_epoch_loss += (loss.item() * ratio) train_epoch_acc += (sk.metrics.accuracy_score(y, pred) * ratio) precision, recall, f_score, _ = sk.metrics.precision_recall_fscore_support(y, pred, average = 'macro') train_epoch_precision += (precision * ratio) train_epoch_recall += (recall * ratio) train_epoch_f_score += (f_score * ratio) metrics['losses_train'].append(train_epoch_loss) metrics['acc_train'].append(train_epoch_acc) metrics['prec_train'].append(train_epoch_precision) metrics['rec_train'].append(train_epoch_recall) metrics['f_score_train'].append(train_epoch_f_score) # Evaluate model.eval() with t.no_grad(): for x, y in loader_test: y_pred = model.forward(x) loss = loss_fn(y_pred, y) y_pred, y = to_numpy(y_pred), to_numpy(y) pred = y_pred.argmax(axis = 1) ratio = len(y) / len_test test_epoch_loss += (loss * ratio) test_epoch_acc += (sk.metrics.accuracy_score(y, pred) * ratio ) precision, recall, f_score, _ = sk.metrics.precision_recall_fscore_support(y, pred, average = 'macro') test_epoch_precision += (precision * ratio) test_epoch_recall += (recall * ratio) test_epoch_f_score += (f_score * ratio) metrics['losses_test'].append(test_epoch_loss) metrics['acc_test'].append(test_epoch_acc) metrics['prec_test'].append(test_epoch_precision) metrics['rec_test'].append(test_epoch_recall) metrics['f_score_test'].append(test_epoch_f_score) if metrics['acc_test'][-1] > best_acc: best_acc = metrics['acc_test'][-1] t.save({'model': model.state_dict()}, 'checkpint {}.tar'.format(chekpoint_name)) print('Epoch {} acc {} prec {} rec {} f {} minutes {}'.format( epoch + 1, metrics['acc_test'][-1], metrics['prec_test'][-1], metrics['rec_test'][-1], metrics['f_score_test'][-1], timer_end() / 60)) except KeyboardInterrupt as e: print(e) print('Ended training') return metrics ``` Plot a metric for both train and test. ``` def plot_train_test(train, test, title, y_title): plt.plot(range(len(train)), train, label = 'train') plt.plot(range(len(test)), test, label = 'test') plt.xlabel('Epochs') plt.ylabel(y_title) plt.title(title) plt.legend() plt.show() ``` Plot precision - recall curve ``` def plot_precision_recall(metrics): plt.scatter(metrics['prec_train'], metrics['rec_train'], label = 'train') plt.scatter(metrics['prec_test'], metrics['rec_test'], label = 'test') plt.legend() plt.title('Precision-Recall') plt.xlabel('Precision') plt.ylabel('Recall') ``` Train a model for several epochs. The steps_learning parameter is a list of tuples. Each tuple specifies the steps and the learning rate. 
``` def do_train(model, loader_train, loader_test, checkpoint_name, steps_learning): for steps, learn_rate in steps_learning: metrics = train_eval(t.optim.Adam(model.parameters(), lr = learn_rate, weight_decay = 0), model, loader_train, loader_test, checkpoint_name, steps) print('Best test accuracy :', max(metrics['acc_test'])) plot_train_test(metrics['losses_train'], metrics['losses_test'], 'Loss (lr = {})'.format(learn_rate)) plot_train_test(metrics['acc_train'], metrics['acc_test'], 'Accuracy (lr = {})'.format(learn_rate)) ``` Perform actual training. ``` def do_train(model, loader_train, loader_test, checkpoint_name, steps_learning): t.cuda.empty_cache() for steps, learn_rate in steps_learning: metrics = train_eval(t.optim.Adam(model.parameters(), lr = learn_rate, weight_decay = 0), model, loader_train, loader_test, checkpoint_name, steps) index_max = np.array(metrics['acc_test']).argmax() print('Best test accuracy :', metrics['acc_test'][index_max]) print('Corresponding precision :', metrics['prec_test'][index_max]) print('Corresponding recall :', metrics['rec_test'][index_max]) print('Corresponding f1 score :', metrics['f_score_test'][index_max]) plot_train_test(metrics['losses_train'], metrics['losses_test'], 'Loss (lr = {})'.format(learn_rate), 'Loss') plot_train_test(metrics['acc_train'], metrics['acc_test'], 'Accuracy (lr = {})'.format(learn_rate), 'Accuracy') plot_train_test(metrics['prec_train'], metrics['prec_test'], 'Precision (lr = {})'.format(learn_rate), 'Precision') plot_train_test(metrics['rec_train'], metrics['rec_test'], 'Recall (lr = {})'.format(learn_rate), 'Recall') plot_train_test(metrics['f_score_train'], metrics['f_score_test'], 'F1 Score (lr = {})'.format(learn_rate), 'F1 Score') plot_precision_recall(metrics) do_train(model_simple, loader_train_simple_img, loader_test_simple_img, 'simple_1', [(50, 1e-4)]) # checkpoint = t.load('/content/checkpint simple_1.tar') # model_simple.load_state_dict(checkpoint['model']) ```
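Beyond the scalar metrics tracked during training, a confusion matrix over the four classes shows which categories get mixed up. This is a sketch added here (not part of the original experiments); it reuses the helpers defined above and the checkpoint file name written by `train_eval`.

```
def confusion_on_test(model, loader_test):
    model.eval()
    preds, targets = [], []
    with t.no_grad():
        for x, y in loader_test:
            preds.extend(to_numpy(model.forward(x)).argmax(axis=1))
            targets.extend(to_numpy(y))
    return sk.metrics.confusion_matrix(targets, preds)

# Optionally restore the best checkpoint saved by train_eval first:
# checkpoint = t.load('checkpint simple_1.tar')
# model_simple.load_state_dict(checkpoint['model'])
print(confusion_on_test(model_simple, loader_test_simple_img))
```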
``` %matplotlib inline ``` # Simple Oscillator Example This example shows the most simple way of using a solver. We solve free vibration of a simple oscillator: $$m \ddot{u} + k u = 0,\quad u(0) = u_0,\quad \dot{u}(0) = \dot{u}_0$$ using the CVODE solver. An analytical solution exists, given by $$u(t) = u_0 \cos\left(\sqrt{\frac{k}{m}} t\right)+\frac{\dot{u}_0}{\sqrt{\frac{k}{m}}} \sin\left(\sqrt{\frac{k}{m}} t\right)$$ ``` from __future__ import print_function import matplotlib.pyplot as plt import numpy as np from scikits.odes import ode #data of the oscillator k = 4.0 m = 1.0 #initial position and speed data on t=0, x[0] = u, x[1] = \dot{u}, xp = \dot{x} initx = [1, 0.1] ``` We need a first order system, so convert the second order system $$m \ddot{u} + k u = 0,\quad u(0) = u_0,\quad \dot{u}(0) = \dot{u}_0$$ into $$\left\{ \begin{array}{l} \dot u = v\\ \dot v = \ddot u = -\frac{ku}{m} \end{array} \right.$$ You need to define a function that computes the right hand side of above equation: ``` def rhseqn(t, x, xdot): """ we create rhs equations for the problem""" xdot[0] = x[1] xdot[1] = - k/m * x[0] ``` To solve the ODE you define an ode object, specify the solver to use, here cvode, and pass the right hand side function. You request the solution at specific timepoints by passing an array of times to the solve member. ``` solver = ode('cvode', rhseqn, old_api=False) solution = solver.solve([0., 1., 2.], initx) print('\n t Solution Exact') print('------------------------------------') for t, u in zip(solution.values.t, solution.values.y): print('{0:>4.0f} {1:15.6g} {2:15.6g}'.format(t, u[0], initx[0]*np.cos(np.sqrt(k/m)*t)+initx[1]*np.sin(np.sqrt(k/m)*t)/np.sqrt(k/m))) ``` You can continue the solver by passing further times. Calling the solve routine reinits the solver, so you can restart at whatever time. To continue from the last computed solution, pass the last obtained time and solution. **Note:** The solver performes better if it can take into account history information, so avoid calling solve to continue computation! In general, you must check for errors using the errors output of solve. ``` #Solve over the next hour by continuation times = np.linspace(0, 3600, 61) times[0] = solution.values.t[-1] solution = solver.solve(times, solution.values.y[-1]) if solution.errors.t: print ('Error: ', solution.message, 'Error at time', solution.errors.t) print ('Computed Solutions:') print('\n t Solution Exact') print('------------------------------------') for t, u in zip(solution.values.t, solution.values.y): print('{0:>4.0f} {1:15.6g} {2:15.6g}'.format(t, u[0], initx[0]*np.cos(np.sqrt(k/m)*t)+initx[1]*np.sin(np.sqrt(k/m)*t)/np.sqrt(k/m))) ``` The solution fails at a time around 24 seconds. Erros can be due to many things. Here however the reason is simple: we try to make too large jumps in time output. Increasing the allowed steps the solver can take will fix this. This is the **max_steps** option of cvode: ``` solver = ode('cvode', rhseqn, old_api=False, max_steps=5000) solution = solver.solve(times, solution.values.y[-1]) if solution.errors.t: print ('Error: ', solution.message, 'Error at time', solution.errors.t) print ('Computed Solutions:') print('\n t Solution Exact') print('------------------------------------') for t, u in zip(solution.values.t, solution.values.y): print('{0:>4.0f} {1:15.6g} {2:15.6g}'.format(t, u[0], initx[0]*np.cos(np.sqrt(k/m)*t)+initx[1]*np.sin(np.sqrt(k/m)*t)/np.sqrt(k/m))) ``` To plot the simple oscillator, we show a (t,x) plot of the solution. 
Doing this over 60 seconds can be done as follows: ``` #plot of the oscilator solver = ode('cvode', rhseqn, old_api=False) times = np.linspace(0,60,600) solution = solver.solve(times, initx) plt.plot(solution.values.t,[x[0] for x in solution.values.y]) plt.xlabel('Time [s]') plt.ylabel('Position [m]') plt.show() ``` You can refine the tolerances from their defaults to obtain more accurate solutions ``` options1= {'rtol': 1e-6, 'atol': 1e-12, 'max_steps': 50000} # default rtol and atol options2= {'rtol': 1e-15, 'atol': 1e-25, 'max_steps': 50000} solver1 = ode('cvode', rhseqn, old_api=False, **options1) solver2 = ode('cvode', rhseqn, old_api=False, **options2) solution1 = solver1.solve([0., 1., 60], initx) solution2 = solver2.solve([0., 1., 60], initx) print('\n t Solution1 Solution2 Exact') print('-----------------------------------------------------') for t, u1, u2 in zip(solution1.values.t, solution1.values.y, solution2.values.y): print('{0:>4.0f} {1:15.8g} {2:15.8g} {3:15.8g}'.format(t, u1[0], u2[0], initx[0]*np.cos(np.sqrt(k/m)*t)+initx[1]*np.sin(np.sqrt(k/m)*t)/np.sqrt(k/m))) ``` # Simple Oscillator Example: Stepwise running When using the *solve* method, you solve over a period of time you decided before. In some problems you might want to solve and decide on the output when to stop. Then you use the *step* method. The same example as above using the step method can be solved as follows. You define the ode object selecting the cvode solver. You initialize the solver with the begin time and initial conditions using *init_step*. You compute solutions going forward with the *step* method. ``` solver = ode('cvode', rhseqn, old_api=False) time = 0. solver.init_step(time, initx) plott = [] plotx = [] while True: time += 0.1 # fix roundoff error at end if time > 60: time = 60 solution = solver.step(time) if solution.errors.t: print ('Error: ', solution.message, 'Error at time', solution.errors.t) break #we store output for plotting plott.append(solution.values.t) plotx.append(solution.values.y[0]) if time >= 60: break plt.plot(plott,plotx) plt.xlabel('Time [s]') plt.ylabel('Position [m]') plt.show() ``` The solver interpolates solutions to return the solution at the required output times: ``` print ('plott length:', len(plott), ', last computation times:', plott[-15:]); ``` # Simple Oscillator Example: Internal Solver Stepwise running When using the *solve* method, you solve over a period of time you decided before. With the *step* method you solve by default towards a desired output time after which you can continue solving the problem. For full control, you can also compute problems using the solver internal steps. This is not advised, as the number of return steps can be very large, **slowing down** the computation enormously. If you want this nevertheless, you can achieve it with the *one_step_compute* option. Like this: ``` solver = ode('cvode', rhseqn, old_api=False, one_step_compute=True) time = 0. 
solver.init_step(time, initx)
plott = []
plotx = []
while True:
    solution = solver.step(60)
    if solution.errors.t:
        print ('Error: ', solution.message, 'Error at time', solution.errors.t)
        break
    #we store output for plotting
    plott.append(solution.values.t)
    plotx.append(solution.values.y[0])
    if solution.values.t >= 60:
        #back up to 60
        solver.set_options(one_step_compute=False)
        solution = solver.step(60)
        plott[-1] = solution.values.t
        plotx[-1] = solution.values.y[0]
        break
plt.plot(plott,plotx)
plt.xlabel('Time [s]')
plt.ylabel('Position [m]')
plt.show()
```

By inspection of the returned times you can see how efficiently the solver can solve this problem:

```
print ('plott length:', len(plott), ', last computation times:', plott[-15:]);
```
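Since the analytical solution is known, we can also quantify how accurate the computed positions are. The following is a minimal sketch (not part of the original example), reusing the `plott` and `plotx` lists and the constants `k`, `m` and `initx` defined above:

```
# Compare the stored positions against the exact solution u(t)
t_arr = np.array(plott)
omega = np.sqrt(k / m)
u_exact = initx[0]*np.cos(omega*t_arr) + initx[1]*np.sin(omega*t_arr)/omega

max_abs_err = np.max(np.abs(np.array(plotx) - u_exact))
print('maximum absolute error over the run:', max_abs_err)
```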
# Siamese networks with TensorFlow 2.0/Keras

In this example, we'll implement a simple siamese network system, which verifies whether a pair of MNIST images is of the same class (true) or not (false).

_This example is partially based on_ [https://github.com/keras-team/keras/blob/master/examples/mnist_siamese.py](https://github.com/keras-team/keras/blob/master/examples/mnist_siamese.py)

Let's start with the imports

```
import random

import numpy as np
import tensorflow as tf
```

We'll continue with the `create_pairs` function, which creates a training dataset with an equal number of true/false pairs for each MNIST class.

```
def create_pairs(inputs: np.ndarray, labels: np.ndarray):
    """Create equal number of true/false pairs of samples"""

    num_classes = 10

    digit_indices = [np.where(labels == i)[0] for i in range(num_classes)]
    pairs = list()
    labels = list()
    n = min([len(digit_indices[d]) for d in range(num_classes)]) - 1
    for d in range(num_classes):
        for i in range(n):
            z1, z2 = digit_indices[d][i], digit_indices[d][i + 1]
            pairs += [[inputs[z1], inputs[z2]]]
            inc = random.randrange(1, num_classes)
            dn = (d + inc) % num_classes
            z1, z2 = digit_indices[d][i], digit_indices[dn][i]
            pairs += [[inputs[z1], inputs[z2]]]
            labels += [1, 0]

    return np.array(pairs), np.array(labels, dtype=np.float32)
```

Next, we'll define the base network of the siamese system:

```
def create_base_network():
    """The shared encoding part of the siamese network"""

    return tf.keras.models.Sequential([
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation='relu'),
        tf.keras.layers.Dropout(0.1),
        tf.keras.layers.Dense(128, activation='relu'),
        tf.keras.layers.Dropout(0.1),
        tf.keras.layers.Dense(64, activation='relu'),
    ])
```

Next, let's load the regular MNIST training and validation sets and create true/false pairs out of them:

```
# Load the train and test MNIST datasets
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.astype(np.float32)
x_test = x_test.astype(np.float32)
x_train /= 255
x_test /= 255
input_shape = x_train.shape[1:]

# Create true/false training and testing pairs
train_pairs, tr_labels = create_pairs(x_train, y_train)
test_pairs, test_labels = create_pairs(x_test, y_test)
```

Then, we'll build the siamese system, which includes the `base_network`, the 2 siamese paths `encoder_a` and `encoder_b`, the `l1_dist` measure, and the combined `model`:

```
# Create the siamese network
# Start from the shared layers
base_network = create_base_network()

# Create first half of the siamese system
input_a = tf.keras.layers.Input(shape=input_shape)

# Note how we reuse the base_network in both halves
encoder_a = base_network(input_a)

# Create the second half of the siamese system
input_b = tf.keras.layers.Input(shape=input_shape)
encoder_b = base_network(input_b)

# Create the distance measure
l1_dist = tf.keras.layers.Lambda(
    lambda embeddings: tf.keras.backend.abs(embeddings[0] - embeddings[1])) \
    ([encoder_a, encoder_b])

# Final fc layer with a single logistic output for the binary classification
flattened_weighted_distance = tf.keras.layers.Dense(1, activation='sigmoid') \
    (l1_dist)

# Build the model
model = tf.keras.models.Model([input_a, input_b], flattened_weighted_distance)
```

Finally, we can train the model and check the validation accuracy, which reaches 99.37%:

```
# Train
model.compile(loss='binary_crossentropy', optimizer=tf.keras.optimizers.Adam(), metrics=['accuracy'])

model.fit([train_pairs[:, 0], train_pairs[:, 1]], tr_labels, batch_size=128, epochs=20,
validation_data=([test_pairs[:, 0], test_pairs[:, 1]], test_labels)) ```
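Once trained, the model can also be used to verify individual pairs directly. Below is a minimal sketch (not part of the original example) that scores a few test pairs with `model.predict`; the 0.5 decision threshold is an assumption for illustration:

```
# Score the first five test pairs: outputs close to 1 mean "same class"
scores = model.predict([test_pairs[:5, 0], test_pairs[:5, 1]])

for score, label in zip(scores.ravel(), test_labels[:5]):
    same = score > 0.5  # assumed threshold, adjust as needed
    print('score={:.3f} predicted_same={} true_label={}'.format(score, same, label))
```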
# Hierarchical Clustering **Hierarchical clustering** refers to a class of clustering methods that seek to build a **hierarchy** of clusters, in which some clusters contain others. In this assignment, we will explore a top-down approach, recursively bipartitioning the data using k-means. **Note to Amazon EC2 users**: To conserve memory, make sure to stop all the other notebooks before running this notebook. ## Import packages ``` from __future__ import print_function # to conform python 2.x print to python 3.x import turicreate import matplotlib.pyplot as plt import numpy as np import sys import os import time from scipy.sparse import csr_matrix from sklearn.cluster import KMeans from sklearn.metrics import pairwise_distances %matplotlib inline ``` ## Load the Wikipedia dataset ``` wiki = turicreate.SFrame('people_wiki.sframe/') ``` As we did in previous assignments, let's extract the TF-IDF features: ``` wiki['tf_idf'] = turicreate.text_analytics.tf_idf(wiki['text']) ``` To run k-means on this dataset, we should convert the data matrix into a sparse matrix. ``` from em_utilities import sframe_to_scipy # converter # This will take about a minute or two. wiki = wiki.add_row_number() tf_idf, map_word_to_index = sframe_to_scipy(wiki, 'tf_idf') ``` To be consistent with the k-means assignment, let's normalize all vectors to have unit norm. ``` from sklearn.preprocessing import normalize tf_idf = normalize(tf_idf) ``` ## Bipartition the Wikipedia dataset using k-means Recall our workflow for clustering text data with k-means: 1. Load the dataframe containing a dataset, such as the Wikipedia text dataset. 2. Extract the data matrix from the dataframe. 3. Run k-means on the data matrix with some value of k. 4. Visualize the clustering results using the centroids, cluster assignments, and the original dataframe. We keep the original dataframe around because the data matrix does not keep auxiliary information (in the case of the text dataset, the title of each article). Let us modify the workflow to perform bipartitioning: 1. Load the dataframe containing a dataset, such as the Wikipedia text dataset. 2. Extract the data matrix from the dataframe. 3. Run k-means on the data matrix with k=2. 4. Divide the data matrix into two parts using the cluster assignments. 5. Divide the dataframe into two parts, again using the cluster assignments. This step is necessary to allow for visualization. 6. Visualize the bipartition of data. We'd like to be able to repeat Steps 3-6 multiple times to produce a **hierarchy** of clusters such as the following: ``` (root) | +------------+-------------+ | | Cluster Cluster +------+-----+ +------+-----+ | | | | Cluster Cluster Cluster Cluster ``` Each **parent cluster** is bipartitioned to produce two **child clusters**. At the very top is the **root cluster**, which consists of the entire dataset. Now we write a wrapper function to bipartition a given cluster using k-means. There are three variables that together comprise the cluster: * `dataframe`: a subset of the original dataframe that correspond to member rows of the cluster * `matrix`: same set of rows, stored in sparse matrix format * `centroid`: the centroid of the cluster (not applicable for the root cluster) Rather than passing around the three variables separately, we package them into a Python dictionary. The wrapper function takes a single dictionary (representing a parent cluster) and returns two dictionaries (representing the child clusters). 
``` def bipartition(cluster, maxiter=400, num_runs=4, seed=None): '''cluster: should be a dictionary containing the following keys * dataframe: original dataframe * matrix: same data, in matrix format * centroid: centroid for this particular cluster''' data_matrix = cluster['matrix'] dataframe = cluster['dataframe'] # Run k-means on the data matrix with k=2. We use scikit-learn here to simplify workflow. kmeans_model = KMeans(n_clusters=2, max_iter=maxiter, n_init=num_runs, random_state=seed, n_jobs=1) kmeans_model.fit(data_matrix) centroids, cluster_assignment = kmeans_model.cluster_centers_, kmeans_model.labels_ # Divide the data matrix into two parts using the cluster assignments. data_matrix_left_child, data_matrix_right_child = data_matrix[cluster_assignment==0], \ data_matrix[cluster_assignment==1] # Divide the dataframe into two parts, again using the cluster assignments. cluster_assignment_sa = turicreate.SArray(cluster_assignment) # minor format conversion dataframe_left_child, dataframe_right_child = dataframe[cluster_assignment_sa==0], \ dataframe[cluster_assignment_sa==1] # Package relevant variables for the child clusters cluster_left_child = {'matrix': data_matrix_left_child, 'dataframe': dataframe_left_child, 'centroid': centroids[0]} cluster_right_child = {'matrix': data_matrix_right_child, 'dataframe': dataframe_right_child, 'centroid': centroids[1]} return (cluster_left_child, cluster_right_child) ``` The following cell performs bipartitioning of the Wikipedia dataset. Allow 2+ minutes to finish. Note. For the purpose of the assignment, we set an explicit seed (`seed=1`) to produce identical outputs for every run. In pratical applications, you might want to use different random seeds for all runs. ``` %%time wiki_data = {'matrix': tf_idf, 'dataframe': wiki} # no 'centroid' for the root cluster left_child, right_child = bipartition(wiki_data, maxiter=100, num_runs=1, seed=0) ``` Let's examine the contents of one of the two clusters, which we call the `left_child`, referring to the tree visualization above. ``` left_child ``` And here is the content of the other cluster we named `right_child`. ``` right_child ``` ## Visualize the bipartition We provide you with a modified version of the visualization function from the k-means assignment. For each cluster, we print the top 5 words with highest TF-IDF weights in the centroid and display excerpts for the 8 nearest neighbors of the centroid. ``` def display_single_tf_idf_cluster(cluster, map_index_to_word): '''map_index_to_word: SFrame specifying the mapping betweeen words and column indices''' wiki_subset = cluster['dataframe'] tf_idf_subset = cluster['matrix'] centroid = cluster['centroid'] # Print top 5 words with largest TF-IDF weights in the cluster idx = centroid.argsort()[::-1] for i in range(5): print('{0}:{1:.3f}'.format(map_index_to_word['category'], centroid[idx[i]])), print('') # Compute distances from the centroid to all data points in the cluster. distances = pairwise_distances(tf_idf_subset, [centroid], metric='euclidean').flatten() # compute nearest neighbors of the centroid within the cluster. nearest_neighbors = distances.argsort() # For 8 nearest neighbors, print the title as well as first 180 characters of text. # Wrap the text at 80-character mark. 
for i in range(8): text = ' '.join(wiki_subset[nearest_neighbors[i]]['text'].split(None, 25)[0:25]) print('* {0:50s} {1:.5f}\n {2:s}\n {3:s}'.format(wiki_subset[nearest_neighbors[i]]['name'], distances[nearest_neighbors[i]], text[:90], text[90:180] if len(text) > 90 else '')) print('') ``` Let's visualize the two child clusters: ``` display_single_tf_idf_cluster(left_child, map_word_to_index) display_single_tf_idf_cluster(right_child, map_word_to_index) ``` The right cluster consists of athletes and artists (singers and actors/actresses), whereas the left cluster consists of non-athletes and non-artists. So far, we have a single-level hierarchy consisting of two clusters, as follows: ``` Wikipedia + | +--------------------------+--------------------+ | | + + Non-athletes/artists Athletes/artists ``` Is this hierarchy good enough? **When building a hierarchy of clusters, we must keep our particular application in mind.** For instance, we might want to build a **directory** for Wikipedia articles. A good directory would let you quickly narrow down your search to a small set of related articles. The categories of athletes and non-athletes are too general to facilitate efficient search. For this reason, we decide to build another level into our hierarchy of clusters with the goal of getting more specific cluster structure at the lower level. To that end, we subdivide both the `athletes/artists` and `non-athletes/artists` clusters. ## Perform recursive bipartitioning ### Cluster of athletes and artists To help identify the clusters we've built so far, let's give them easy-to-read aliases: ``` non_athletes_artists = left_child athletes_artists = right_child ``` Using the bipartition function, we produce two child clusters of the athlete cluster: ``` # Bipartition the cluster of athletes and artists left_child_athletes_artists, right_child_athletes_artists = bipartition(athletes_artists, maxiter=100, num_runs=6, seed=1) ``` The left child cluster mainly consists of athletes: ``` display_single_tf_idf_cluster(left_child_athletes_artists, map_word_to_index) ``` On the other hand, the right child cluster consists mainly of artists (singers and actors/actresses): ``` display_single_tf_idf_cluster(right_child_athletes_artists, map_word_to_index) ``` Our hierarchy of clusters now looks like this: ``` Wikipedia + | +--------------------------+--------------------+ | | + + Non-athletes/artists Athletes/artists + | +----------+----------+ | | | | + | athletes artists ``` Should we keep subdividing the clusters? If so, which cluster should we subdivide? To answer this question, we again think about our application. Since we organize our directory by topics, it would be nice to have topics that are about as coarse as each other. For instance, if one cluster is about baseball, we expect some other clusters about football, basketball, volleyball, and so forth. That is, **we would like to achieve similar level of granularity for all clusters.** Both the athletes and artists node can be subdivided more, as each one can be divided into more descriptive professions (singer/actress/painter/director, or baseball/football/basketball, etc.). Let's explore subdividing the athletes cluster further to produce finer child clusters. 
Let's give the clusters aliases as well: ``` athletes = left_child_athletes_artists artists = right_child_athletes_artists ``` ### Cluster of athletes In answering the following quiz question, take a look at the topics represented in the top documents (those closest to the centroid), as well as the list of words with highest TF-IDF weights. Let us bipartition the cluster of athletes. ``` left_child_athletes, right_child_athletes = bipartition(athletes, maxiter=100, num_runs=6, seed=1) display_single_tf_idf_cluster(left_child_athletes, map_word_to_index) display_single_tf_idf_cluster(right_child_athletes, map_word_to_index) ``` **Quiz Question**. Which diagram best describes the hierarchy right after splitting the `athletes` cluster? Refer to the quiz form for the diagrams. **Caution**. The granularity criteria is an imperfect heuristic and must be taken with a grain of salt. It takes a lot of manual intervention to obtain a good hierarchy of clusters. * **If a cluster is highly mixed, the top articles and words may not convey the full picture of the cluster.** Thus, we may be misled if we judge the purity of clusters solely by their top documents and words. * **Many interesting topics are hidden somewhere inside the clusters but do not appear in the visualization.** We may need to subdivide further to discover new topics. For instance, subdividing the `ice_hockey_football` cluster led to the appearance of runners and golfers. ### Cluster of non-athletes Now let us subdivide the cluster of non-athletes. ``` %%time # Bipartition the cluster of non-athletes left_child_non_athletes_artists, right_child_non_athletes_artists = bipartition(non_athletes_artists, maxiter=100, num_runs=3, seed=1) display_single_tf_idf_cluster(left_child_non_athletes_artists, map_word_to_index) display_single_tf_idf_cluster(right_child_non_athletes_artists, map_word_to_index) ``` The clusters are not as clear, but the left cluster has a tendency to show important female figures, and the right one to show politicians and government officials. Let's divide them further. ``` female_figures = left_child_non_athletes_artists politicians_etc = right_child_non_athletes_artists politicians_etc = left_child_non_athletes_artists female_figures = right_child_non_athletes_artists ``` **Quiz Question**. Let us bipartition the clusters `female_figures` and `politicians`. Which diagram best describes the resulting hierarchy of clusters for the non-athletes? Refer to the quiz for the diagrams. **Note**. Use `maxiter=100, num_runs=6, seed=1` for consistency of output. ``` left_female_figures, right_female_figures = bipartition(female_figures, maxiter=100, num_runs=6, seed=1) left_politicians_etc, right_politicians_etc = bipartition(politicians_etc, maxiter=100, num_runs=6, seed=1) display_single_tf_idf_cluster(left_female_figures, map_word_to_index) display_single_tf_idf_cluster(right_female_figures, map_word_to_index) display_single_tf_idf_cluster(left_politicians_etc, map_word_to_index) display_single_tf_idf_cluster(right_politicians_etc, map_word_to_index) ```
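Since our goal is a roughly uniform level of granularity, it can also help to compare the sizes of the resulting leaf clusters. A small sketch (not part of the original assignment), assuming the cluster dictionaries created above are still in memory:

```
# Number of articles in each leaf cluster of the non-athletes branch
leaf_clusters = {'left_female_figures': left_female_figures,
                 'right_female_figures': right_female_figures,
                 'left_politicians_etc': left_politicians_etc,
                 'right_politicians_etc': right_politicians_etc}

for name, cluster in leaf_clusters.items():
    print('{0:25s} {1:8d} articles'.format(name, len(cluster['dataframe'])))
```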
# Data Attribute Recommendation - TechED 2020 INT260 Getting started with the Python SDK for the Data Attribute Recommendation service. ## Business Scenario We will consider a business scenario involving product master data. The creation and maintenance of this product master data requires the careful manual selection of the correct categories for a given product from a pre-defined hierarchy of product categories. In this workshop, we will explore how to automate this tedious manual task with the Data Attribute Recommendation service. <video controls src="videos/dar_prediction_material_table.mp4"/> This workshop will cover: * Data Upload * Model Training and Deployment * Inference Requests We will work through a basic example of how to achieve these tasks using the [Python SDK for Data Attribute Recommendation](https://github.com/SAP/data-attribute-recommendation-python-sdk). *Note: if you are doing several runs of this notebook on a trial account, you may see errors stating 'The resource can no longer be used. Usage limit has been reached'. It can be beneficial to [clean up the service instance](#Cleaning-up-a-service-instance) to free up limited trial resources acquired by an earlier run of the notebook. [Some limits](https://help.sap.com/viewer/105bcfd88921418e8c29b24a7a402ec3/SHIP/en-US/c03b561eea1744c9b9892b416037b99a.html) cannot be reset this way.* ## Table of Contents * [Exercise 01.1](#Exercise-01.1) - Installing the SDK and preparing the service key * [Creating a service instance and key on BTP Trial](#Creating-a-service-instance-and-key) * [Installing the SDK](#Installing-the-SDK) * [Loading the service key into your Jupyter Notebook](#Loading-the-service-key-into-your-Jupyter-Notebook) * [Exercise 01.2](#Exercise-01.2) - Uploading the data * [Exercise 01.3](#Exercise-01.3) - Training the model * [Exercise 01.4](#Exercise-01.4) - Deploying the Model and predicting labels * [Resources](#Resources) - Additional reading * [Cleaning up a service instance](#Cleaning-up-a-service-instance) - Clean up all resources on the service instance * [Optional Exercises](#Optional-Exercises) - Optional exercises ## Requirements See the [README in the Github repository for this workshop](https://github.com/SAP-samples/teched2020-INT260/blob/master/exercises/ex1-DAR/README.md). # Exercise 01.1 *Back to [table of contents](#Table-of-Contents)* In exercise 01.1, we will install the SDK and prepare the service key. ## Creating a service instance and key on BTP Trial Please log in to your trial account: https://cockpit.eu10.hana.ondemand.com/trial/ In the your global account screen, go to the "Boosters" tab: ![trial_booster.png](attachment:trial_booster.png) *Boosters are only available on the Trial landscape. If you are using a production environment, please follow this tutorial to manually [create a service instance and a service key](https://developers.sap.com/tutorials/cp-aibus-dar-service-instance.html)*. In the Boosters tab, enter "Data Attribute Recommendation" into the search box. Then, select the service tile from the search results: ![trial_locate_dar_booster.png](attachment:trial_locate_dar_booster.png) The resulting screen shows details of the booster pack. Here, click the "Start" button and wait a few seconds. ![trial_start_booster.png](attachment:trial_start_booster.png) Once the booster is finished, click the "go to Service Key" link to obtain your service key. ![trial_booster_finished.png](attachment:trial_booster_finished.png) Finally, download the key and save it to disk. 
![trial_download_key.png](attachment:trial_download_key.png) ## Installing the SDK The Data Attribute Recommendation SDK is available from the Python package repository. It can be installed with the standard `pip` tool: ``` ! pip install data-attribute-recommendation-sdk ``` *Note: If you are not using a Jupyter notebook, but instead a regular Python development environment, we recommend using a Python virtual environment to set up your development environment. Please see [the dedicated tutorial to learn how to install the SDK inside a Python virtual environment](https://developers.sap.com/tutorials/cp-aibus-dar-sdk-setup.html).* ## Loading the service key into your Jupyter Notebook Once you downloaded the service key from the Cockpit, upload it to your notebook environment. The service key must be uploaded to same directory where the `teched2020-INT260_Data_Attribute_Recommendation.ipynb` is stored. We first navigate to the file browser in Jupyter. On the top of your Jupyter notebook, right-click on the Jupyter logo and open in a new tab. ![service_key_main_jupyter_page.png](attachment:service_key_main_jupyter_page.png) **In the file browser, navigate to the directory where the `teched2020-INT260_Data_Attribute_Recommendation.ipynb` notebook file is stored. The service key must reside next to this file.** In the Jupyter file browser, click the **Upload** button (1). In the file selection dialog that opens, select the `defaultKey_*.json` file you downloaded previously from the SAP Cloud Platform Cockpit. Rename the file to `key.json`. Confirm the upload by clicking on the second **Upload** button (2). ![service_key_upload.png](attachment:service_key_upload.png) The service key contains your credentials to access the service. Please treat this as carefully as you would treat any password. We keep the service key as a separate file outside this notebook to avoid leaking the secret credentials. The service key is a JSON file. We will load this file once and use the credentials throughout this workshop. ``` # First, set up logging so we can see the actions performed by the SDK behind the scenes import logging import sys logging.basicConfig(level=logging.INFO, stream=sys.stdout) from pprint import pprint # for nicer output formatting import json import os if not os.path.exists("key.json"): msg = "key.json is not found. Please follow instructions above to create a service key of" msg += " Data Attribute Recommendation. Then, upload it into the same directory where" msg += " this notebook is saved." print(msg) raise ValueError(msg) with open("key.json") as file_handle: key = file_handle.read() SERVICE_KEY = json.loads(key) ``` ## Summary Exercise 01.1 In exercise 01.1, we have covered the following topics: * How to install the Python SDK for Data Attribute Recommendation * How to obtain a service key for the Data Attribute Recommendation service # Exercise 01.2 *Back to [table of contents](#Table-of-Contents)* *To perform this exercise, you need to execute the code in all previous exercises.* In exercise 01.2, we will upload our demo dataset to the service. ## The Dataset ### Obtaining the Data The dataset we use in this workshop is a CSV file containing product master data. The original data was released by BestBuy, a retail company, under an [open license](https://github.com/SAP-samples/data-attribute-recommendation-postman-tutorial-sample#data-and-license). This makes it ideal for first experiments with the Data Attribute Recommendation service. 
The dataset can be downloaded directly from Github using the following command: ``` ! wget -O bestBuy.csv "https://raw.githubusercontent.com/SAP-samples/data-attribute-recommendation-postman-tutorial-sample/master/Tutorial_Example_Dataset.csv" # If you receive a "command not found" error (i.e. on Windows), try curl instead of wget: # ! curl -o bestBuy.csv "https://raw.githubusercontent.com/SAP-samples/data-attribute-recommendation-postman-tutorial-sample/master/Tutorial_Example_Dataset.csv" ``` Let's inspect the data: ``` # if you are experiencing an import error here, run the following in a new cell: # ! pip install pandas import pandas as pd df = pd.read_csv("bestBuy.csv") df.head(5) print() print(f"Data has {df.shape[0]} rows and {df.shape[1]} columns.") ``` The CSV contains the several products. For each product, the description, the manufacturer and the price are given. Additionally, three levels of the products hierarchy are given. The first product, a set of AAA batteries, is located in the following place in the product hierarchy: ``` level1_category: Connected Home & Housewares | level2_category: Housewares | level3_category: Household Batteries ``` We will use the Data Attribute Recommendation service to predict the categories for a given product based on its **description**, **manufacturer** and **price**. ### Creating the DatasetSchema We first have to describe the shape of our data by creating a DatasetSchema. This schema informs the service about the individual column types found in the CSV. We also describe which are the target columns used for training. These columns will be later predicted. In our case, these are the three category columns. The service currently supports three column types: **text**, **category** and **number**. For prediction, only **category** is currently supported. A DatasetSchema for the BestBuy dataset looks as follows: ```json { "features": [ {"label": "manufacturer", "type": "CATEGORY"}, {"label": "description", "type": "TEXT"}, {"label": "price", "type": "NUMBER"} ], "labels": [ {"label": "level1_category", "type": "CATEGORY"}, {"label": "level2_category", "type": "CATEGORY"}, {"label": "level3_category", "type": "CATEGORY"} ], "name": "bestbuy-category-prediction", } ``` We will now upload this DatasetSchema to the Data Attribute Recommendation service. The SDK provides the [`DataManagerClient.create_dataset_schema()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.html#sap.aibus.dar.client.data_manager_client.DataManagerClient.create_dataset_schema) method for this purpose. ``` from sap.aibus.dar.client.data_manager_client import DataManagerClient dataset_schema = { "features": [ {"label": "manufacturer", "type": "CATEGORY"}, {"label": "description", "type": "TEXT"}, {"label": "price", "type": "NUMBER"} ], "labels": [ {"label": "level1_category", "type": "CATEGORY"}, {"label": "level2_category", "type": "CATEGORY"}, {"label": "level3_category", "type": "CATEGORY"} ], "name": "bestbuy-category-prediction", } data_manager = DataManagerClient.construct_from_service_key(SERVICE_KEY) response = data_manager.create_dataset_schema(dataset_schema) dataset_schema_id = response["id"] print() print("DatasetSchema created:") pprint(response) print() print(f"DatasetSchema ID: {dataset_schema_id}") ``` The API responds with the newly created DatasetSchema resource. The service assigned an ID to the schema. We save this ID in a variable, as we will need it when we upload the data. 
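Before moving on to the upload, it can be worth double-checking that the columns in the CSV match the feature and label names declared in the schema. A small sanity-check sketch (not part of the original exercise), reusing the `df` DataFrame and the `dataset_schema` dictionary from above:

```
# Column names the schema expects vs. columns present in the CSV
expected_columns = {entry["label"] for entry in dataset_schema["features"] + dataset_schema["labels"]}
csv_columns = set(df.columns)

print("Expected but missing in CSV:", expected_columns - csv_columns)
print("Present in CSV but not in schema:", csv_columns - expected_columns)
```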
### Uploading the Data to the service The [`DataManagerClient`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.html#sap.aibus.dar.client.data_manager_client.DataManagerClient) class is also responsible for uploading data to the service. This data must fit to an existing DatasetSchema. After uploading the data, the service will validate the Dataset against the DataSetSchema in a background process. The data must be a CSV file which can optionally be `gzip` compressed. We will now upload our `bestBuy.csv` file, using the DatasetSchema which we created earlier. Data upload is a two-step process. We first create the Dataset using [`DataManagerClient.create_dataset()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.html#sap.aibus.dar.client.data_manager_client.DataManagerClient.create_dataset). Then we can upload data to the Dataset using the [`DataManagerClient.upload_data_to_dataset()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.html#sap.aibus.dar.client.data_manager_client.DataManagerClient.upload_data_to_dataset) method. ``` dataset_resource = data_manager.create_dataset("my-bestbuy-dataset", dataset_schema_id) dataset_id = dataset_resource["id"] print() print("Dataset created:") pprint(dataset_resource) print() print(f"Dataset ID: {dataset_id}") # Compress file first for a faster upload ! gzip -9 -c bestBuy.csv > bestBuy.csv.gz ``` Note that the data upload can take a few minutes. Please do not restart the process while the cell is still running. ``` # Open in binary mode. with open('bestBuy.csv.gz', 'rb') as file_handle: dataset_resource = data_manager.upload_data_to_dataset(dataset_id, file_handle) print() print("Dataset after data upload:") print() pprint(dataset_resource) ``` Note that the Dataset status changed from `NO_DATA` to `VALIDATING`. Dataset validation is a background process. The status will eventually change from `VALIDATING` to `SUCCEEDED`. The SDK provides the [`DataManagerClient.wait_for_dataset_validation()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.html#sap.aibus.dar.client.data_manager_client.DataManagerClient.wait_for_dataset_validation) method to poll for the Dataset validation. ``` dataset_resource = data_manager.wait_for_dataset_validation(dataset_id) print() print("Dataset after validation has finished:") print() pprint(dataset_resource) ``` If the status is `FAILED` instead of `SUCCEEDED`, then the `validationMessage` will contain details about the validation failure. To better understand the Dataset lifecycle, refer to the [corresponding document on help.sap.com](https://help.sap.com/viewer/105bcfd88921418e8c29b24a7a402ec3/SHIP/en-US/a9b7429687a04e769dbc7955c6c44265.html). ## Summary Exercise 01.2 In exercise 01.2, we have covered the following topics: * How to create a DatasetSchema * How to upload a Dataset to the service You can find optional exercises related to exercise 01.2 [below](#Optional-Exercises-for-01.2). # Exercise 01.3 *Back to [table of contents](#Table-of-Contents)* *To perform this exercise, you need to execute the code in all previous exercises.* In exercise 01.3, we will train the model. ## Training the Model The Dataset is now uploaded and has been validated successfully by the service. To train a machine learning model, we first need to select the correct model template. 
### Selecting the right ModelTemplate The Data Attribute Recommendation service currently supports two different ModelTemplates: | ID | Name | Description | |--------------------------------------|---------------------------|---------------------------------------------------------------------------| | d7810207-ca31-4d4d-9b5a-841a644fd81f | **Hierarchical template** | Recommended for the prediction of multiple classes that form a hierarchy. | | 223abe0f-3b52-446f-9273-f3ca39619d2c | **Generic template** | Generic neural network for multi-label, multi-class classification. | | 188df8b2-795a-48c1-8297-37f37b25ea00 | **AutoML template** | Finds the [best traditional machine learning model out of several traditional algorithms](https://blogs.sap.com/2021/04/28/how-does-automl-works-in-data-attribute-recommendation/). Single label only. | We are building a model to predict product hierarchies. The **Hierarchical Template** is correct for this scenario. In this template, the first label in the DatasetSchema is considered the top-level category. Each subsequent label is considered to be further down in the hierarchy. Coming back to our example DatasetSchema: ```json { "labels": [ {"label": "level1_category", "type": "CATEGORY"}, {"label": "level2_category", "type": "CATEGORY"}, {"label": "level3_category", "type": "CATEGORY"} ] } ``` The first defined label is `level1_category`, which is given more weight during training than `level3_category`. Refer to the [official documentation on ModelTemplates](https://help.sap.com/viewer/105bcfd88921418e8c29b24a7a402ec3/SHIP/en-US/1e76e8c636974a06967552c05d40e066.html) to learn more. Additional model templates may be added over time, so check back regularly. ## Starting the training When working with models, we use the [`ModelManagerClient`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.html#sap.aibus.dar.client.model_manager_client.ModelManagerClient) class. To start the training, we need the IDs of the dataset and the desired model template. We also have to provide a name for the model. The [`ModelManagerClient.create_job()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.html#sap.aibus.dar.client.model_manager_client.ModelManagerClient.create_job) method launches the training Job. *Only one model of a given name can exist. If you receive a message stating 'The model name specified is already in use', you either have to remove the job and its associated model first or you have to change the `model_name` variable name below. You can also [clean up the entire service instance](#Cleaning-up-a-service-instance).* ``` from sap.aibus.dar.client.model_manager_client import ModelManagerClient from sap.aibus.dar.client.exceptions import DARHTTPException model_manager = ModelManagerClient.construct_from_service_key(SERVICE_KEY) model_template_id = "d7810207-ca31-4d4d-9b5a-841a644fd81f" # hierarchical template model_name = "bestbuy-hierarchy-model" job_resource = model_manager.create_job(model_name, dataset_id, model_template_id) job_id = job_resource['id'] print() print("Job resource:") print() pprint(job_resource) print() print(f"ID of submitted Job: {job_id}") ``` The job is now running in the background. Similar to the DatasetValidation, we have to poll the job until it succeeds. 
The SDK provides the [`ModelManagerClient.wait_for_job()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.html#sap.aibus.dar.client.model_manager_client.ModelManagerClient.wait_for_job) method: ``` job_resource = model_manager.wait_for_job(job_id) print() print("Job resource after training is finished:") pprint(job_resource) ``` To better understand the Training Job lifecycle, see the [corresponding document on help.sap.com](https://help.sap.com/viewer/105bcfd88921418e8c29b24a7a402ec3/SHIP/en-US/0fc40aa077ce4c708c1e5bfc875aa3be.html). ## Intermission The model training will take between 5 and 10 minutes. In the meantime, we can explore the available [resources](#Resources) for both the service and the SDK. ## Inspecting the Model Once the training job is finished successfully, we can inspect the model using [`ModelManagerClient.read_model_by_name()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.html#sap.aibus.dar.client.model_manager_client.ModelManagerClient.read_model_by_name). ``` model_resource = model_manager.read_model_by_name(model_name) print() pprint(model_resource) ``` In the model resource, the `validationResult` key provides information about model performance. You can also use these metrics to compare performance of different [ModelTemplates](#Selecting-the-right-ModelTemplate) or different datasets. ## Summary Exercise 01.3 In exercise 01.3, we have covered the following topics: * How to select the appropriate ModelTemplate * How to train a Model from a previously uploaded Dataset You can find optional exercises related to exercise 01.3 [below](#Optional-Exercises-for-01.3). # Exercise 01.4 *Back to [table of contents](#Table-of-Contents)* *To perform this exercise, you need to execute the code in all previous exercises.* In exercise 01.4, we will deploy the model and predict labels for some unlabeled data. ## Deploying the Model The training job has finished and the model is ready to be deployed. By deploying the model, we create a server process in the background on the Data Attribute Recommendation service which will serve inference requests. In the SDK, the [`ModelManagerClient.create_deployment()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.html#module-sap.aibus.dar.client.model_manager_client) method lets us create a Deployment. ``` deployment_resource = model_manager.create_deployment(model_name) deployment_id = deployment_resource["id"] print() print("Deployment resource:") print() pprint(deployment_resource) print(f"Deployment ID: {deployment_id}") ``` *Note: if you are using a trial account and you see errors such as 'The resource can no longer be used. Usage limit has been reached', consider [cleaning up the service instance](#Cleaning-up-a-service-instance) to free up limited trial resources.* Similar to the data upload and the training job, model deployment is an asynchronous process. We have to poll the API until the Deployment is in status `SUCCEEDED`. The SDK provides the [`ModelManagerClient.wait_for_deployment()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.html#sap.aibus.dar.client.model_manager_client.ModelManagerClient.wait_for_deployment) for this purposes. ``` deployment_resource = model_manager.wait_for_deployment(deployment_id) print() print("Finished deployment resource:") print() pprint(deployment_resource) ``` Once the Deployment is in status `SUCCEEDED`, we can run inference requests. 
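When scripting this end to end, it can help to fail fast if the deployment did not reach the expected state. A minimal sketch (an addition for illustration) based on the `status` field of the resource printed above:

```
# Abort early if the deployment cannot serve inference requests
if deployment_resource["status"] != "SUCCEEDED":
    raise RuntimeError("Deployment not ready, status is: {}".format(deployment_resource["status"]))
```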
To better understand the Deployment lifecycle, see the [corresponding document on help.sap.com](https://help.sap.com/viewer/105bcfd88921418e8c29b24a7a402ec3/SHIP/en-US/f473b5b19a3b469e94c40eb27623b4f0.html). *For trial users: the deployment will be stopped after 8 hours. You can restart it by deleting the deployment and creating a new one for your model. The [`ModelManagerClient.ensure_deployment_exists()`](https://help.sap.com/viewer/105bcfd88921418e8c29b24a7a402ec3/SHIP/en-US/c03b561eea1744c9b9892b416037b99a.html) method will delete and re-create automatically. Then, you need to poll until the deployment is succeeded using [`ModelManagerClient.wait_for_deployment()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.html#sap.aibus.dar.client.model_manager_client.ModelManagerClient.wait_for_deployment) as above.* ## Executing Inference requests With a single inference request, we can send up to 50 objects to the service to predict the labels. The data send to the service must match the `features` section of the DatasetSchema created earlier. The `labels` defined inside of the DatasetSchema will be predicted for each object and returned as a response to the request. In the SDK, the [`InferenceClient.create_inference_request()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.html#sap.aibus.dar.client.inference_client.InferenceClient.create_inference_request) method handles submission of inference requests. ``` from sap.aibus.dar.client.inference_client import InferenceClient inference = InferenceClient.construct_from_service_key(SERVICE_KEY) objects_to_be_classified = [ { "features": [ {"name": "manufacturer", "value": "Energizer"}, {"name": "description", "value": "Alkaline batteries; 1.5V"}, {"name": "price", "value": "5.99"}, ], }, ] inference_response = inference.create_inference_request(model_name, objects_to_be_classified) print() print("Inference request processed. Response:") print() pprint(inference_response) ``` *Note: For trial accounts, you only have a limited number of objects which you can classify.* You can also try to come up with your own example: ``` my_own_items = [ { "features": [ {"name": "manufacturer", "value": "EDIT THIS"}, {"name": "description", "value": "EDIT THIS"}, {"name": "price", "value": "0.00"}, ], }, ] inference_response = inference.create_inference_request(model_name, my_own_items) print() print("Inference request processed. Response:") print() pprint(inference_response) ``` You can also classify multiple objects at once. For each object, the `top_n` parameter determines how many predictions are returned. ``` objects_to_be_classified = [ { "objectId": "optional-identifier-1", "features": [ {"name": "manufacturer", "value": "Energizer"}, {"name": "description", "value": "Alkaline batteries; 1.5V"}, {"name": "price", "value": "5.99"}, ], }, { "objectId": "optional-identifier-2", "features": [ {"name": "manufacturer", "value": "Eidos"}, {"name": "description", "value": "Unravel a grim conspiracy at the brink of Revolution"}, {"name": "price", "value": "19.99"}, ], }, { "objectId": "optional-identifier-3", "features": [ {"name": "manufacturer", "value": "Cadac"}, {"name": "description", "value": "CADAC Grill Plate for Safari Chef Grills: 12\"" + "cooking surface; designed for use with Safari Chef grills;" + "105 sq. in. 
cooking surface; PTFE nonstick coating;" + " 2 grill surfaces" }, {"name": "price", "value": "39.99"}, ], } ] inference_response = inference.create_inference_request(model_name, objects_to_be_classified, top_n=3) print() print("Inference request processed. Response:") print() pprint(inference_response) ``` We can see that the service now returns the `n-best` predictions for each label as indicated by the `top_n` parameter. In some cases, the predicted category has the special value `nan`. In the `bestBuy.csv` data set, not all records have the full set of three categories. Some records only have a top-level category. The model learns this fact from the data and will occasionally suggest that a record should not have a category. ``` # Inspect all video games with just a top-level category entry video_games = df[df['level1_category'] == 'Video Games'] video_games.loc[df['level2_category'].isna() & df['level3_category'].isna()].head(5) ``` To learn how to execute inference calls without the SDK just using the underlying RESTful API, see [Inference without the SDK](#Inference-without-the-SDK). ## Summary Exercise 01.4 In exercise 01.4, we have covered the following topics: * How to deploy a previously trained model * How to execute inference requests against a deployed model You can find optional exercises related to exercise 01.4 [below](#Optional-Exercises-for-01.4). # Wrapping up In this workshop, we looked into the following topics: * Installation of the Python SDK for Data Attribute Recommendation * Modelling data with a DatasetSchema * Uploading data into a Dataset * Training a model * Predicting labels for unlabelled data Using these tools, we are able to solve the problem of missing Master Data attributes starting from just a CSV file containing training data. Feel free to revisit the workshop materials at any time. The [resources](#Resources) section below contains additional reading. If you would like to explore the additional capabilities of the SDK, visit the [optional exercises](#Optional-Exercises) below. ## Cleanup During the course of the workshop, we have created several resources on the Data Attribute Recommendation Service: * DatasetSchema * Dataset * Job * Model * Deployment The SDK provides several methods to delete these resources. Note that there are dependencies between objects: you cannot delete a Dataset without deleting the Model beforehand. You will need to set `CLEANUP_SESSION = True` below to execute the cleanup. ``` # Clean up all resources created earlier CLEANUP_SESSION = False def cleanup_session(): model_manager.delete_deployment_by_id(deployment_id) # this can take a few seconds model_manager.delete_model_by_name(model_name) model_manager.delete_job_by_id(job_id) data_manager.delete_dataset_by_id(dataset_id) data_manager.delete_dataset_schema_by_id(dataset_schema_id) print("DONE cleaning up!") if CLEANUP_SESSION: print("Cleaning up resources generated in this session.") cleanup_session() else: print("Not cleaning up. 
Set 'CLEANUP_SESSION = True' above and run again!") ``` ## Resources *Back to [table of contents](#Table-of-Contents)* ### SDK Resources * [SDK source code on Github](https://github.com/SAP/data-attribute-recommendation-python-sdk) * [SDK documentation](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/) * [How to obtain support](https://github.com/SAP/data-attribute-recommendation-python-sdk/blob/master/README.md#how-to-obtain-support) * [Tutorials: Classify Data Records with the SDK for Data Attribute Recommendation](https://developers.sap.com/group.cp-aibus-data-attribute-sdk.html) ### Data Attribute Recommendation * [SAP Help Portal](https://help.sap.com/viewer/product/Data_Attribute_Recommendation/SHIP/en-US) * [API Reference](https://help.sap.com/viewer/105bcfd88921418e8c29b24a7a402ec3/SHIP/en-US/b45cf9b24fd042d082c16191aa938c8d.html) * [Tutorials using Postman - interact with the service RESTful API directly](https://developers.sap.com/mission.cp-aibus-data-attribute.html) * [Trial Account Limits](https://help.sap.com/viewer/105bcfd88921418e8c29b24a7a402ec3/SHIP/en-US/c03b561eea1744c9b9892b416037b99a.html) * [Metering and Pricing](https://help.sap.com/viewer/105bcfd88921418e8c29b24a7a402ec3/SHIP/en-US/1e093326a2764c298759fcb92c5b0500.html) ## Addendum ### Inference without the SDK *Back to [table of contents](#Table-of-Contents)* The Data Attribute Service exposes a RESTful API. The SDK we use in this workshop uses this API to interact with the DAR service. For custom integration, you can implement your own client for the API. The tutorial "[Use Machine Learning to Classify Data Records]" is a great way to explore the Data Attribute Recommendation API with the Postman REST client. Beyond the tutorial, the [API Reference] is a comprehensive documentation of the RESTful interface. [Use Machine Learning to Classify Data Records]: https://developers.sap.com/mission.cp-aibus-data-attribute.html [API Reference]: https://help.sap.com/viewer/105bcfd88921418e8c29b24a7a402ec3/SHIP/en-US/b45cf9b24fd042d082c16191aa938c8d.html To demonstrate the underlying API, the next example uses the `curl` command line tool to perform an inference request against the Inference API. The example uses the `jq` command to extract the credentials from the service. The authentication token is retrieved from the `uaa_url` and then used for the inference request. ``` # If the following example gives you errors that the jq or curl commands cannot be found, # you may be able to install them from conda by uncommenting one of the lines below: #%conda install -q jq #%conda install -q curl %%bash -s "$model_name" # Pass the python model_name variable as the first argument to shell script model_name=$1 echo "Model: $model_name" key=$(cat key.json) url=$(echo $key | jq -r .url) uaa_url=$(echo $key | jq -r .uaa.url) clientid=$(echo $key | jq -r .uaa.clientid) clientsecret=$(echo $key | jq -r .uaa.clientsecret) echo "Service URL: $url" token_url=${uaa_url}/oauth/token?grant_type=client_credentials echo "Obtaining token with clientid $clientid from $token_url" bearer_token=$(curl \ --silent --show-error \ --user $clientid:$clientsecret \ $token_url \ | jq -r .access_token ) inference_url=${url}/inference/api/v3/models/${model_name}/versions/1 echo "Running inference request against endpoint $inference_url" echo "" # We pass the token in the Authorization header. # The payload for the inference request is passed as # the body of the POST request below. 
# The output of the curl command is piped through `jq` # for pretty-printing curl \ --silent --show-error \ --header "Authorization: Bearer ${bearer_token}" \ --header "Content-Type: application/json" \ -XPOST \ ${inference_url} \ -d '{ "objects": [ { "features": [ { "name": "manufacturer", "value": "Energizer" }, { "name": "description", "value": "Alkaline batteries; 1.5V" }, { "name": "price", "value": "5.99" } ] } ] }' | jq ``` ### Cleaning up a service instance *Back to [table of contents](#Table-of-Contents)* To clean all data on the service instance, you can run the following snippet. The code is self-contained and does not require you to execute any of the cells above. However, you will need to have the `key.json` containing a service key in place. You will need to set `CLEANUP_EVERYTHING = True` below to execute the cleanup. **NOTE: This will delete all data on the service instance!** ``` CLEANUP_EVERYTHING = False def cleanup_everything(): import logging import sys logging.basicConfig(level=logging.INFO, stream=sys.stdout) import json import os if not os.path.exists("key.json"): msg = "key.json is not found. Please follow instructions above to create a service key of" msg += " Data Attribute Recommendation. Then, upload it into the same directory where" msg += " this notebook is saved." print(msg) raise ValueError(msg) with open("key.json") as file_handle: key = file_handle.read() SERVICE_KEY = json.loads(key) from sap.aibus.dar.client.model_manager_client import ModelManagerClient model_manager = ModelManagerClient.construct_from_service_key(SERVICE_KEY) for deployment in model_manager.read_deployment_collection()["deployments"]: model_manager.delete_deployment_by_id(deployment["id"]) for model in model_manager.read_model_collection()["models"]: model_manager.delete_model_by_name(model["name"]) for job in model_manager.read_job_collection()["jobs"]: model_manager.delete_job_by_id(job["id"]) from sap.aibus.dar.client.data_manager_client import DataManagerClient data_manager = DataManagerClient.construct_from_service_key(SERVICE_KEY) for dataset in data_manager.read_dataset_collection()["datasets"]: data_manager.delete_dataset_by_id(dataset["id"]) for dataset_schema in data_manager.read_dataset_schema_collection()["datasetSchemas"]: data_manager.delete_dataset_schema_by_id(dataset_schema["id"]) print("Cleanup done!") if CLEANUP_EVERYTHING: print("Cleaning up all resources in this service instance.") cleanup_everything() else: print("Not cleaning up. Set 'CLEANUP_EVERYTHING = True' above and run again.") ``` ### Optional Exercises *Back to [table of contents](#Table-of-Contents)* To work with the optional exercises, create a new cell in the Jupyter notebook by clicking the `+` button in the menu above or by using the `b` shortcut on your keyboard. You can then enter your code in the new cell and execute it. #### Optional Exercises for 01.2 ##### DatasetSchemas Use the [`DataManagerClient.read_dataset_schema_by_id()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.html#sap.aibus.dar.client.data_manager_client.DataManagerClient.read_dataset_schema_by_id) and the [`DataManagerClient.read_dataset_schema_collection()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.html#sap.aibus.dar.client.data_manager_client.DataManagerClient.read_dataset_schema_collection) methods to list the newly created and all DatasetSchemas, respectively. 
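As a starting point for this exercise, the following sketch lists all schemas and reads back the one created in exercise 01.2. The call pattern mirrors the cleanup helper above; the exact fields of each returned entry are an assumption here:

```
# List all DatasetSchemas on the service instance
for schema in data_manager.read_dataset_schema_collection()["datasetSchemas"]:
    print(schema["id"], schema["name"])

# Read the newly created schema by its ID
pprint(data_manager.read_dataset_schema_by_id(dataset_schema_id))
```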
##### Datasets Use the [`DataManagerClient.read_dataset_by_id()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.html#sap.aibus.dar.client.data_manager_client.DataManagerClient.read_dataset_by_id) and the [`DataManagerClient.read_dataset_collection()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.html#sap.aibus.dar.client.data_manager_client.DataManagerClient.read_dataset_collection) methods to inspect the newly created dataset. Instead of using two separate methods to upload data and wait for validation to finish, you can also use [`DataManagerClient.upload_data_and_validate()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.html#sap.aibus.dar.client.data_manager_client.DataManagerClient.upload_data_and_validate). #### Optional Exercises for 01.3 ##### ModelTemplates Use the [`ModelManagerClient.read_model_template_collection()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.html#sap.aibus.dar.client.model_manager_client.ModelManagerClient.read_model_template_collection) to list all existing model templates. ##### Jobs Use [`ModelManagerClient.read_job_by_id()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.html#sap.aibus.dar.client.model_manager_client.ModelManagerClient.read_job_by_id) and [`ModelManagerClient.read_job_collection()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.html#sap.aibus.dar.client.model_manager_client.ModelManagerClient.read_job_collection) to inspect the job we just created. The entire process of uploading the data and starting the training is also available as a single method call in [`ModelCreator.create()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.html#sap.aibus.dar.client.workflow.model.ModelCreator.create). #### Optional Exercises for 01.4 ##### Deployments Use [`ModelManagerClient.read_deployment_by_id()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.html#sap.aibus.dar.client.model_manager_client.ModelManagerClient.read_deployment_by_id) and [`ModelManagerClient.read_deployment_collection()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.html#sap.aibus.dar.client.model_manager_client.ModelManagerClient.read_deployment_collection) to inspect the Deployment. Use the [`ModelManagerclient.lookup_deployment_id_by_model_name()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.html#sap.aibus.dar.client.model_manager_client.ModelManagerClient.lookup_deployment_id_by_model_name) method to find the deployment ID for a given model name. ##### Inference Use the [`InferenceClient.do_bulk_inference()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.html#sap.aibus.dar.client.inference_client.InferenceClient.do_bulk_inference) method to process more than fifty objects at a time. Note how the data format returned changes.
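For the bulk inference exercise, a sketch along these lines could serve as a starting point. The exact signature and return format of `do_bulk_inference()` should be checked in the linked SDK documentation; the argument order used here is an assumption:

```
# Hypothetical usage: classify more than 50 objects in one call
# (objects_to_be_classified is reused from exercise 01.4)
bulk_response = inference.do_bulk_inference(model_name, objects_to_be_classified * 30)

# Inspect how the returned format differs from create_inference_request
print(type(bulk_response))
pprint(bulk_response)
```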
<a href="https://colab.research.google.com/github/MattFinney/practical_data_science_in_python/blob/main/Session_2_Practical_Data_Science.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/><a> # Practical Data Science in Python ## Unsupervised Learning: Classifying Spotify Tracks by Genre with $k$-Means Clustering Authors: Matthew Finney, Paulina Toro Isaza #### Run this First! (Function Definitions) ``` import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns sns.set_palette('Set1') from sklearn.decomposition import PCA from sklearn.preprocessing import StandardScaler from sklearn.cluster import KMeans from IPython.display import Audio, Image, clear_output rs = 123 np.random.seed(rs) def pca_plot(df, classes=None): # Scale data for PCA scaled_df = StandardScaler().fit_transform(df) # Fit the PCA and extract the first two components pca_results = PCA().fit_transform(scaled_df) pca1_scores = pca_results[:,0] pca2_scores = pca_results[:,1] # Sort the legend labels if classes is None: hue_order = None n_classes = 0 elif str(classes[0]).isnumeric(): classes = ['Cluster {}'.format(x) for x in classes] hue_order = sorted(np.unique(classes)) n_classes = np.max(np.unique(classes).shape) else: hue_order = sorted(np.unique(classes)) n_classes = np.max(np.unique(classes).shape) # Plot the first two principal components plt.figure(figsize=(8.5,8.5)) plt.grid() sns.scatterplot(pca1_scores, pca2_scores, s=50, hue=classes, hue_order=hue_order, palette='Set1') plt.xlabel("Principal Component {}".format(1)) plt.ylabel("Principal Component {}".format(2)) plt.title('Principal Component Plot') plt.show() def tracklist_player(track_list, df, header="Track Player"): action = '' for track in track_list: print('{}\nTrack Name: {}\nArtist Name(s): {}'.format(header, df.loc[track,'name'],df.loc[track,'artist'])) try: display(Image(df.loc[track,'cover_url'], format='jpeg', height=150)) except: print('No cover art available') try: display(Audio(df.loc[track,'preview_url']+'.mp3', autoplay=True)) except: print('No audio preview available') print('Press <Enter> for the next track or q then <Enter> to quit: ') action = input() clear_output() if action=='q': break print('No more clusters. Goodbye!') def play_cluster_tracks(track_df, cluster_column="best_cluster"): for cluster in sorted(track_df[cluster_column].unique()): # Get the tracks in the cluster, and shuffle them for variety tracks_list = track_df[track_df[cluster_column] == cluster].index.values np.random.shuffle(tracks_list) # Instantiate a tracklist player tracklist_player(tracks_list, df=track_df, header='{}'.format(cluster)) # Load Track DataFrame path = 'https://raw.githubusercontent.com/MattFinney/practical_data_science_in_python/main/spotify_track_data.csv' tracks_df = pd.read_csv(path) # Columns from the track dataframe which are relevant for our analysis audio_feature_cols = ['danceability', 'energy', 'key', 'loudness', 'mode', 'speechiness', 'acousticness', 'instrumentalness', 'liveness', 'valence', 'tempo', 'duration_ms', 'time_signature'] # Show the first five rows of our dataframe tracks_df.head() ``` ## Recap from Session 1 In our earlier session, we started working with a dataset of Spotify tracks. We explored the variables in the dataset, and determined that audio features - like danceability, accousticness, and tempo - vary across the songs in our dataset and might help us to thoughtfully group the tracks into different playlists. 
We then used Principal Component Analysis (PCA), a dimensionality reduction technique, to visualize the variation in songs. We'll pick up where we left off, with the PCA plot from last time. If you're just joining us for Session 2, don't fret! Attending Session 1 is NOT a prerequisite to learn and have fun in Session 2 today!

```
# Plot the principal component analysis results
pca_plot(tracks_df[audio_feature_cols])
```

## Today: Classification using $k$-Means Clustering

Our Principal Component Analysis in the first session helped us to visualize the variation of track audio features in just two dimensions. Looking at the scatterplot of the first two principal components above, we can see that there are a few different groups of tracks. But how do we mathematically separate the tracks into these meaningful groups?

One way to separate the tracks into meaningful groups based on similar audio features is to use clustering. Clustering is a machine learning technique that is very powerful for identifying patterns in unlabeled data where the ground truth is not known.

### What is $k$-Means Clustering?

$k$-Means Clustering is one of the most popular clustering algorithms. The algorithm assigns each data point to a cluster using four main steps.

**Step 1: Initialize the Clusters**\
Based on the user's desired number of clusters $k$, the algorithm randomly chooses a centroid for each cluster. In this example, we choose $k=3$, therefore the algorithm randomly picks 3 centroids.

![Initialization.png](data:image/png;base64,...)

**Step 2: Assign Each Data Point**\
The algorithm assigns each point to the closest centroid to get $k$ initial clusters.

![Step2.png](data:image/png;base64,...)
**Step 3: Recompute the Cluster Centers**\
For every cluster, the algorithm recomputes the centroid by taking the average of all points in the cluster. The changes in centroids are shown below by arrows.

![Step3.png](data:image/png;base64,...)
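To make Steps 1 through 3 concrete, here's a minimal NumPy sketch of the same loop on some made-up two-dimensional points. The names (`points`, `centroids`, `labels`), the toy data, and the fixed number of passes are purely illustrative; a library implementation adds smarter initialization and a proper convergence check.

```
import numpy as np

# Toy 2-D data, just to illustrate the mechanics (not the Spotify features)
rng = np.random.default_rng(123)
points = rng.normal(size=(30, 2))
k = 3

# Step 1: randomly pick k of the points as the starting centroids
centroids = points[rng.choice(len(points), size=k, replace=False)]

for _ in range(10):  # a few assignment/update rounds
    # Step 2: assign each point to its nearest centroid
    distances = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
    labels = distances.argmin(axis=1)

    # Step 3: move each centroid to the mean of the points assigned to it
    # (keep the old centroid if a cluster happens to be empty)
    centroids = np.array([points[labels == j].mean(axis=0) if np.any(labels == j)
                          else centroids[j] for j in range(k)])

print(labels)
print(centroids)
```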
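In practice we won't implement those steps by hand: scikit-learn's `KMeans` (imported in the setup cell) runs the whole loop for us. The rough sketch below standardizes the audio features first so that no single feature dominates the distance calculation; it assumes the setup cell above has been run (for `tracks_df`, `audio_feature_cols`, and `rs`), and the choice of three clusters is only for illustration.

```
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
import pandas as pd

# Standardize the audio features so they contribute to distances on a common scale
scaled_features = StandardScaler().fit_transform(tracks_df[audio_feature_cols])

# Fit k-means with an illustrative (not tuned) choice of k=3
kmeans = KMeans(n_clusters=3, random_state=rs)
cluster_labels = kmeans.fit_predict(scaled_features)

# How many tracks ended up in each cluster?
pd.Series(cluster_labels).value_counts().sort_index()
```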
CnNpRYwvDQ5eShNC15NHYphFK4OnFWTT8ujK9ieGbiDvc5ZNF03Jv0h/R3vToH1LaVLYRtqTFGHR9pPzMmnJCS+p/clhm8g/+RhU1JlFWxpxp3uLJdftKd/rbsmNr/BwF5FHfcQuOrMsM0A46D1o3K+7HlZbVvphN6tqqyomcSsKEvU/vwueRtMyOe9d68edlpYm2bHeeeed9Zo5wgL9+Jo+dOeyLwy2jzwnS2KqN/+cDaFH3L6r0v7n3tOkh4nebInfTw+5fVul/S99Z1BWUZ7u7a87FUj3r+pVpf0efrMppzgfauqsom2vsGjxg8ZknrmIzV/Nyk4WZXN+2Pxg0LMCT0X+adH+bWb8uN9URtwxMTHUunVr6tixY721H5udTPe6flNFMOTtE68puo64oy4mUmPX7ibb7+YzQ9f762h6PDVc8ZXJ9nv5zdG1/UNpsXT38i9Ntt83YAFEAKJtX3C4QvWrfkD4Vc8T4Z5xIhZ/i5pnLpbn1xW2jVW//2EpFbLSD/xm9fuT1+t2fkVHv1IfEH6PifYXVNoCaPy49VgKL9N9z5+KQLTaPJSWRfuQy+G11GjF18p+n3PhurXPo2m5ndZbh0shkrEhbnSPRkiDzkfp1n5nj9+Vdl7d9rMUIhp1cJWBkIakxerWfgd3F6WdN3aMIteYPfTzgRV01/Juyv6IjDMQAoi2/VAQ+F9VvDIDlf0lp/9Qs1+EsNf5+wP+o/HDPqh+v3AmVL5f1H3U7fyEUCvt56jiWBwzVm0/+mfd2m++rr8iDjzqlvk1ZI2yn2O8evHomu+VdhIvpSv7Wbjk/TPCt+nSdrkIMTW7HJZgkUzJz1L+9tO+JUr784+569J+SXmZeMupfMvg0b42FNQ34C+l/aUnvCEEEG37Id/vEdWvWoQSZEqT16qLg458Uffv18SyKwrPqd8vJmiV7w//Rr/z875Hbb9YFa3ShEVq+5F99AnNiP81XdVTEoYGy7rRpZIC5W8LozwU0WAB1YNSIVpyaIRH1oVlarHi2ZE7lfb5AaIH3J48omfx5OORmXp0s+4PrVzR39zv3AbHtMs18xTjD6/T/aEFINr6hA9EAQVtMQUWNsmvWnimKH7Vp6fX/YcrctdV8e8qZXGw0ZOBH/YZ/eKahQffMfTjLrkoLYLSGnyVJizWrf2O7uMUcfgxaLE08RWZeZae2TBQ2b82NkC39t/cMUZpZ9j+ZZKQcZz5qfXqGwBPFOrFK1tHKO38cnCl9OAKvRBHT67tp+zflXBYt/ZbbhqstMNhqbySQjqYdtLgDcQ3KRxCANG2H7iEmVm/bq+7xAg5qe7fL9wJzfth3yP8sFP0Oz+RWmi2fVEZRzsCr282nQ42OQnGG4dPsnXMYHCL3Wu2fX54aN8A6htO8zPXPotqQal+Nrh/Hd9ltv3WW4ZTkeYNBEC07QKpUK4xUdt9U6Uv+NV+v6jhaEk/7Crti9xzUwufZL+Y06dP07Zt2ygxsf5TEHmEbUwwOHSyNzlS9/Pv4z/faPscMghOOaFr2xwi4tQ6Y+1zGqCek5AMh0S+8JlutH0ebfNbB4Bo2+eIW4gXhxJ4ZM1ZJEVhn4kwRv1lFbAXS+HBt8X3N5BywjlUUp4XY7Hz44rtih+5yGIpOtJNtH9K+fvMmTOlHO0BAwbo0j6HIDrtnigJ5bMbf5KE9GxumsXOn3OV39s1XpoYfH7jIOoXuIDOXUq3WPurT+6VQkXcPo+u+wcukhbcWAJ+cHDWCmeS8IOSs3gGigdpakEWfvgQbWCvbNq0iTp06EBubm7oDAAg2gAAANGGaAMAAEQbAAAARBsAACDaEG0AAIBo2yCVfteX/aCjBkvLzK/Gfc/mzq84Q1paXhT5vZTXLeWHW8hve/369eTq6ipZs5riQmGOtFCEU8V42TfbnVrSD5t9O9gPmnO+eWXf7oRQi7ZvbTg9kZf+DwhaJPmR62myZYyESxdo4fHdUqoi+5H7JUUSgGibpNLv+oGqtqZBL4hc41i7v5hlF7yM+10HtxHWrgm6t9+yZUspR3v/fuPLuT0SwugxI37Y7++eYJFc451nDtHDRvywP/ScZBE/bmtjyg+bF81Ywo971Uk/xUNGu32zZxb8uCHaRkagRecNTI+qCHfgc2LZV7H9jrBFkYV8z9ut5rfNzJgxg7p27Uo5OTlV/nYq+zw1WdnD5DLoD8SCGT1HvCeq8cPW24/b2oRU44fde+9cXdvfJ1aMam1cr9x+CFwINYZoG1J8fKCh37Xwoy6Jm2IgdBw2sVeKwnsa+F1ziEQq32Yhv+3q6KlZgs0r6diPmV3ptELKI3G90C7BZj9sHvWxQ53Wj9vSpbssybtiFafWD5u9VMYcWm0g5GxApRftdqqGW2//86vU/khhfKUV8uOZCQQg2graqu5lGXuV/eyMp7jXRfS22wtZ4N9c9bvODlXP79RE1e9axPCtxX82/Kj8OGOyVOMsjqvK+1lE9eKJtX2Vds5olr2PFsIl72ebU0ekTMxpyG85bPGaXqi+CXFsWz5/jjXrMqAQRlLyw5Ef0tpQzPcaPxcuLAEg2qqo7X1SFTWNVwdP1KmWqp/Y7YXkepqq37ZaJICLDKsPpV5WO74HV/dWfpzaHy1PCmotT/WAzY5k0eKRXb7GDW+eKBwgt88C7ojw+coj2vtEP5RpJqZ/D9ugnP+0o1t0aZ+vt+zHzeZWWj9urv4jtz9HeJMDiLb6tBcFCJSJOeFNzTFuyY868FnVj/rUZLu9kIUh76viLEyqKooviAozEQYPKy6BZi3Y5EmtJ/iXVP0kPCNeMn2S93PIQi/eEa/kWj/ui0WXKCw9jlqsH6Ds3xAX5LA/9Je2DDPw42Yb2wOpMQYTw56J+hXQ1l5nztrh9jnOrS1UjMruEG3DV0SROZK361rTftDCtc4SFc11Oz/hLmjeb7uxJOTWYofI3DDnx/zvdT/omsHAgmyu/adF+CZXRz9sa6N9ozG2sagX6uiHrX2jMba12TaSistLCUC0DZAm5owVKpD8rjfYxTlwHjYX1S2K+E7EqIdUTi5ezgopPjHCtN91mrtux1RaWkp9+vSRbFnNMUJTT1G7sc2oJfJ1B5rw435AhG726eyHbfX7xowfd336YbNNLhdMYMtanqOQ88DN+XHzfAMmISHapkek6b4ilPCeVGml0g+6a736Xes7mjaeh12w72Up5U/6N0KcCw91FCPrhlIWCVdR1/pd60FoaKiUn33ddddRWZn5tEL3s4fpI8/J9IjIl2Y/aP5x84ILS7Et/gB18Zgk5Wu/sHmItMgjKS/DaX7wa2L9pVAVhyW42szg4KX1lqNuKg/7S98ZUjiEHxwrxb9hP3Ke4+AsHp7H0BYKBhBtxxkpVZeHva+17nnYpuAVkI0bN6brr7+ejh8/jovlhARXk4fN8xgAou1UcOaHYR72EjFx+rsU+lDzsNda7HjKy8vpvffeo9mzZ0v/n8U6Ly8PF8pJeWvnWIM8bC6yzNk4WiHnQswAou00FPi3UFMWs0KU/ZzxouRhiwVElmLnzp1SSKRZs2aUm4vXW2e
GJxDlPGz+r3ZCmUfYsmiz5wyAaDsN+b73q3nYmiyX0nMr1FS/8B4WPSYuLbZr1y5cHCeHfUPkPGye1NXmYf8mTMFk0Z4VsQOdBdF2HgoPd9YsAvpY5JmniDzs8CvysOeio4BV+K+YVJbFmRfMcB580PkoA4Ouvclw9INoOxGc9WI0XVHJw24ohDwVHQWswgKxBB552BBtu6SiJFMyo5LyqI//RKVJbvXm110cM8q4cOuch60lVTwX5omFlb2FTctgYWWyYQNPSmpelYXD37Zt26pN/QOOBYdEvt7z51XnYcfnptLcyH+kWPg4TZ43gGjrNBr2MfAAMfTrrp9c6bILHkqeecHeJ0Qedncqz4+zyPmtWZNBjRqRmIA03Nq0IUq8HGZv3ry5NEF54MAB3BBOCLv3cR44h0Ve2TqChu9fXuM87GXRPkbte7/ynQm/bYi2DiNsMTnIS+FN+3X/l4fhdnt+69cHCzG+RWzTq4g2b6+9VjniHiyG36+//joFBgbipgA1hn1H5MlMYxsvwgIQ7XqlKLKPGl/2e6QyjzpumqFft8j0sFeeeWa8EOdrxParGE2LUZEw5Js4kejmm1Xh3rIF9wGoG+zxLQt0B3cXqcrOlXneURcT0VEQ7fqjIPB51a/7YrCyv+T0TDXrI7Kv3Z7fo4+yMAeIrYiOaczYXFxU0R4xAvcBqD2FZcWSzzcL872u3xiEQr7zn6eINhfOABDteiPf71F18YsmxswTkYpoH+lmt+fXsKEqzhc0ViELFqj7+/bFfQBqD6cHysLMsXBtnveog6uUv7FTIIBo1194JOxT1a/78AeiEEFSpZ+1/1NqHvXpGXZ7fu3bq+IsykDS+fNsFEX0yCPq/iVLcB+AuvHMhoGKOP8qFuTw5CXndT+gKZ7Bed8Aol1vlF3cZ96vWxQV5gUx9oq7OxmdgJS3++4jyszEfQDqxvxq/LZf3/4LlZQjjRSiXc+UxI4zkUd9C5Wl2tcS3pSUFAoLMyy0O2yYccG+4w4iHx/Dz/v7+9OPP/5IERERuDFAtXBIhC1cTRXIiL54Dp0E0dZpxJ2xRwqP5Ps2VfOo82Lt7jw+//xzyWp1xYoVVUbcwtyPmjQhKYuklyg7edaIeVv//v2lfO3x48fjpgA1gv22Oc/7A5HnzYUZeBUlV2znmDeAaANzox6RcD1MDKsbNGhACQl1qyjC94mLSC2JjITfBAAQbWARLl68iE4AAKINAAAAog0AABBtiDYAAEC0gc2xTJiJ+Pn56fb95Vr/VgAARNteqCi+ICrMzBGlwXpS8bH+wkRqpUXd/7hQQkn8bKk0WfGxAWKZ/Wo6HRcrjJ9upmuuuYaioup31dkW4SLVokULmjBhAi6+DXAmN41mR+6UPD3YkMn97GEp1c5SxOWkSGXHvt07l8aI9j0Swix6/iezk2lm+HapfV556X3uaK0+H5OVRDPCt0mf51Jq1vD7hmhbEPbCzvdubNS21RK53GWpO6UKN1e2n+Pfin4fN5z69OlT723KBYDffvtt3ABWZuVJP7rPiF/1x56Ta+x5fTUsPeEtGUJd2X5X7z8MigHrxV/Hd1Fj1+5G/bpzSwqq/TwXZzD2+W/2zKJLNfg8RNveRtgFZwxsWi3tt80PBV6ZabL9fa1F+/W/TDgvL48CAgKotBSlpqzJgdQYA5vTK7fuouqMnlTnl/29/3xd2/dMPGJ2mfyAoEVmP787IdTs5wftWwrRdjSKIr5VBXLv45K3NhtIGfptu+rX/tGv1Pb9m0ttsd83lypT2k9ejwvloLBHtSww7/zzK22M20cTQzcolqi8haTp97an9cvu6D6ONp0OJpfDaw0eJJGZZ3Vrv/XW4Uo7vOKS2+fwjPZBwqEPU7TaPFT5dx96TqLN4vPsQCh/nv/LoR+ItgNREPCMat2adUjZb+C3fayffu2LJfVK+zmqH0jJqUnK/uKowbhQDkipMFqSwxIs0pmaZeE8QpTFiIvz6kFhWQk1VPyyuxv4ZfcNWKC0z+XG9IBDF/LDodmqXpRfWqT8raffbKX9NbH+Rj+fLY5XFucHhetggebz2rqY6+OCINqOBFezkcWxIv+0+oMSE5GKaB/9Wr/2NbUrKwqT1fYTFqvti2LEwAEHDEJk7l7+pSQsXIOxVOOWNyF0vSI6fxzdqkv7V4qe1i+bJ0Pl9jlmrAdpBdkGRYa1E6/D9i9T/rY4ytPo58/nZxqYV2k/r33o6fXQgWhbKzwS+onqtx3yvhTjLs8OFSPgJ1W/7fhZurXPbcrtZO7rTG1eaUWuC10MHialZxfiQjko2vAAC1VK/kUpzsxFB+T9tc2kqA3PbxyktPOLMIFKyc8i36RwA7/swPPHdWu/+br+Sjuc9ZFakEUeiWHUdFVPZf8hM+EhFnv53/GDjh8EHOfWFiIOS7dMgW2ItoWo3m+7oZSOp1v76b6KbezMfv+SMjqefVRkjuy83L4YiVeU6GuInZycTOHh4bgZrMCKGF+zE2mttwyXwhh6sTDKw6p+2ZzmaK59jvOXVZheS8BpfuY+3/6f38x+HqJtp5TETjAq3Pmet1FZ2i7p31QUp0kjbrZ05Ri3NDlZT1klxTFjJOHO/edfNHvAvyhwltz+HULU9X218xGG25wH3qZNG7u+homX0mmOEADO0/35wAraEr/f4HXfVuFX+j4iQ8OY4Dy5th8d03ESkOE+6qGJH2s3HgVHZ9XML5vzrFlAe/nNkUIrPNqt0aBFCGo3H+N+3f/Z8GO1k4gcUuLURGOf56o78bmpFruWEG1Lj7gz9lJh6EeUv+chkcXRQsSRe4kYd3zl39LcxYi7ka553GXp3sLvu4to/0ExOfofKY5dUZCg+3nn5ubS3XffTeIestvVkVwhnCeyrvzRvrtrvBRusAc46+FTr6kiNttPGt2yX3WmBf2quQ85L5wfFJxRwsKbrZmYNDtaFxOl9xrJk2Yxrcl3yH7dH11u/80dY6QFNrk1zLHmz68Sue6cPcLhknY7x0gZMJbM0YZo29JISExO8ojbWnncFnlgldlvuagj6acN0uOu3FgILLmy0NnglYfm8rz7BvzlNH0B0bYReFm7msf9pFhevkpKBzTM416JjrISPDqUBaLtjtFSnu+0o1sMVsj5JaHgg15wFRu5nzuJPGsOS12Z5x11MRGiDdG2HBwqUfKos9U4Xcnp6WoetfAqqSvp6eno5Lq+BYn/PeT2rbKIQhsK0aas6ZUy5+zka1IWeRl+Xkmh8rfvNXF615g9EG2ItgVvTBHjVvKoRTqgDK+cVPKow7+p03evXbtWKh22atUqdHQduHJxinZxBZsPyaLBIz9Q/3DMXe7jR0SKonbid+QBV90XB0G0gVF4ctIgj1tMTpZnhUhL3pU87jNz6/TdvXv3llL82H4V1I23do7VxE8XUHJeppRXrM3f3RZ/AB2lE0+LDA+5n3n5OL/tcJ61dmJ4f2o0RBuibTmqz+NuJKUD1hVfX1+qsIHUtIKCAlqzZo3dVWnnrAtzebrPbfzJYHk0qF94FG2u/9+uJs8aog10wXQet8ijvu
DpEOfIqX833HADXXfddZSZWfvFPEl5GdJCid4iT5pX9rHfg6V+rNolz9rtsTV9KPSCZVbDcT4wh2TYM4PzxLeK0b0ls1ZOZZ+n6eFbpfY5NLHjzCGLtM8hEbZANZVnHZudbJHz58nOqUc3SznnPJ/Bo32ItrOPuDP8xZL3j0WM+2GRR/205A5YUXDWoc5x5MiRNGvWLMrOzq7V5zhjQ7vsWbuajcXcEuxKOExdff6gp9b3l/KcWTgvFOZYpG2eaGtixA+bsyl4WbXeLDnhZdQP+yML+XEznOf9idcUyQOE86Q5zzqnhnneV8ufEdsV4yvt9qXvjBrnekO0gdPAK/Yarfja5OsxL3Cxh5WJdaU6P2xeYKInASJ+by5PmkfBjsw/Z0PMhmcGBi2GaIO6ExMTQ6dOnXKoc/rCZ7qBTwXHmHnkox35Wbp0lSXhh5L2zYLDIvyarh35Baec0K19HtVqH5A86TopbKPBgiNLhYiswQubhxhU+tl+5qA0ytf6aXPoCKINag0vEX/55Zelmo9eXl4Oc1687Fj+0ZzNVSdk2chf677miHDMXi4TxvnK2lDE8P3LlfNnTxQ9KC4vVd5y+L/acMSPYoQpt78oytMh+59LoWmtZQvLipW/ddf4aXPoBqINag2X9+rRowc98MADlJOT4xDnxBNd919O7eIfj9Znguv+yT8a9tFwRIquKCLAIiozOWyTcv5TjmzS554Si1m0RQS0ftxscyq3z28+jginF8JPG+g/OsjKcqjzeX/3BOXHwQ5v7La3T4QD+Edk6coh1uA1zTJuHt3yxOuepAhpsYkSHtIxk+FFTbktzqLhPHUv4b/9oGZieG+y4y7j177p8SIqLoywU2TOwE8bOBV9+/alhx56qEZZJByvNjcRxPaYlnZbsyRrYwOs6oe9PNrXqn7Y1mZWxA6z5881OC01EQ7RBgYkidqm06YRff45UZ8+4se6XOSP66QFb7zxhrRSk2/CmjA2xM1oBgOPNjm7wtH5ad8So4LBNquROvthc0hAW89Ru3H6Y/TFcw7d9zyvwKl9phZWaedZINrAYmzeTMKjhISQGm4tWxLF6fDmd/DgQYqIiKjVZ9ii8yvfmdKyZh7djRB50lw6ylngV3LOE2+xfoDkB81Lui9a0A+b3fU4vZCFmjNK+EGabaE8aVuAi/+yHzkXbuBqNTwRrjWwgmiDGjF9+nQ6cuTIVX3HCZEtdtNNVQVb3l56SRgnlaKvAbA2Ti/a5VkHhXf1ars9/uDgYLr22mvplltuoYyMuq8K7NpVFehnniFaL7LnZs4kuu02df/69fjBAADRtqZgZ4dRvtfdkt+HvRYY4BS/oUOH0oQJE67qex59VBXnqCh1/6RJ6v4hQ/CDAQCibW3Blo2Zdl3n1JVhmjRRxfm8ZmHXggXqfpHsAQCAaFtBsHOOGAq2VriTnLNQwHvvqeLcuXPlxGOgWOD14IPq/iVL9Gs/OjpaemsAAEC0aybYBsK92uluhD2iUtM115ieiGzWjOiiTgXHP/vsMyn1b+tWlOsCAKKtFey8GCHYd5kWbI1wl6XudLqbwcXFuHDfdReRv79+7U6ePJkaN26MyjoW4KTwnWajqa+FZwavbGS7W0v6cXM+95Qjm6W0TfZN2WZhP3CItt2pdjEVHu5SrWgX7P03VRSdt8lTSElJoRdeeIG8vb11+f69e3nkS/TII0T//S/RgAFEyTr7yxcWFkpGV0BfKv2wu1dZHMIWAZbw455/zN2ger28feg5SaoDCSDaZoS7s10KNjNs2DAplNCZA88A1JDAavyw2fpWT3hRlLn2uRIMgGjXWrglwS5MtulDLxFryqdMmUKJiYm4e0GNYQ9urR82m/r/cXSrgR+3nlYAvHpVbqezx+9S++xKyFaz8v6j6fG4UBDt6oT7A41gP2nzgg1AnR70wshJDouwSGuXnQ8NVutecvhCD9h/+h7FWvYbg2Xf/QMXKe0vPeGNiwXRrplwQ7CBI6MVTbYS1brxjT+8ThFNLtirB1w0QQ6NcI1PbSFm9k6R25+n00MDou1o3iPlhSKGnYo7wQZISEigv/76iw4cOIDOqGde3jrCwI874dIFyX9b64fNcWe9+O+mwQZ+3Nw+F0ludrm4BW9B56NwoSDaDvBqK2LYta1abq/8/vvv0iRrXyy9rHfYnc6cHzQXWdBWxKlvuKqLufbf2jnWYAQOINp2C+cwN23alDw8PBz+XNmmtatwrtq2bRsuvA6Y8uNmm1e9/bDN+XFzAYvYbIQnIdoOQIWohPH222+LBS/XiBWLe9Ah4KpxP3tYMvNnoWQ/aC6dlWNBP2yuYt7Np7J9rvbCxZgdueIQRNsJ4UUnEGwAAEQbAAAg2gAAACDaAAAAINq2yOHDhyVTKGcmLCyMPv74Y/r5559xQwAA0bZduAjAww8/LCqiN6h1lXJHe3Bxvjb3BQAAom2zpKWlUadOnahly5ai8rnzlj7njBn21oYpVlVispJoctgmyZWP867XnQqw6KKUExcT6fewDVL7g/YtpY1x+6i8wnb8sCMzz0pphF19/qAhwX/T1nr262ZTK176z9/PKzs5hdHSfuAQbRskKysLneAg5JcW1dtKw8VRnkb9sDnfOiX/ou7nwoZSjVZ8XaX994Rr4IXCHKv39YzwbQauhVq/7ov14NfNroT3GPn+T7ymGJhwQbQBsFMKhGB/sHsifeY1lYrKSq7qu9iX467lpv2oWTj0HPH5Jpn3w+YFM9aEFw2ZWyb/7d65V/X9PGI39/39AhdAtAFwBMGWf9Rc3kvrrldbePWgdmS7OyGUZkXsMKgEo6fhUtsdo5V2unhMIo+EsCoj25C0WKv1d+stw5Xj4Ickm2FxWTOtX/fxzIQ6f7/W8IpDI/z9k8I2GjxIT1poKT5EGwAdBJuN/q8cjdVVuDlmzZaq/B38ep5VpFatH3nAVfn+2ZH61DXltwQ5LMIPCe2yc45ry+0vPL7bKv3NxyOLJ7sGshWtTB//+crxucbUbWUxh1bkt4yH3b4zeGvqKSruyN+/NjYAou0MjB07luLi4tARpgSrrIzOnj1rN8fLgmFMsOWtex2E+0rRLNSIBk8Kyt/NBXv1gIsWyKLYdFVPg+MfG+Km+0OjOjIKc5VjeHTN9wYTszxZKP+N5wTqQkp+lvId/17Xz2DidaCwuZX/tiLGF6Lt6Kxdu1ZKbbv//vupuLgYHXIF8fHxdM8994giw4/YjWBz6MBc7FMW7tJaCre2XFffgL8oLidFClE85Patst8z8Yhu5/bSlmEGftynRftcMkzrh+2ffMxqff/0hh+V4+Aq73x8HIeW31B4O3QV4RsWa/l7fjm4kuJzU6VK9tqJYUuVS4NoW5H09HTq0aMHubq6ojOMwKl/jRs3pieeeIIyMjJs/ni9zx01O1kobzxqDr1Qu7crFghz39lGZz9st9i9Ztv/3/ZRVxWzv1q40ry54+M5gatJTfzr+C6z38/zF5ZK/YNog1qOfkWJqvFEH35I9M03RAsXEun5ksAPNnuCxc2ccPPEHWc61IWfD6ww6
YfN+dN6o41fa7dnN/5kdT/sSr/uv4weX0sxiXg2N+3qBhBC8Hv5zTH6/S9uHkrnLlnuPoVog5oLkhvR7beTCOkYbs88Q3TyJPqnOuG+GsHWjuZ58ouFqKP7OGmhhyX9sDlr5Zs9s6RsCq7qzjF1W/LD5sUuPOH7/MZB9P7uCVIGCU8M1xdb4vdLfuT8/Z3E6Jor2msnPiHawGbgVfU33FBVsOXthReInHgRZxVWnzQUbhZsjgEDANEGFuGTTwwFmiuCiRq8dMcd6v7169FPxoSbc4U3i5g0ABBtOyMyMpJefPFFCgmxvxHXgw+q4hyrmYQXJSyV/UOG4BpfCaeB7ThzCB0BINr2OVr9RErxGzRokN0d+733quJ8/ry6f+5cdX///vq0zbUy2a51zpw5uIkARBuibTnYetXFxYVyc3Pt7tjfe08VZ/7/x4+LSTFvEpXi1f3Ll+vTtpz6xw+82NhY3EgAog3RBtXh50eiKrzpiUi2vtbzWTRs2DDq27cvVo8CiDZE27k4c4Zo3DiiLl2IvvqKaN48sZKvsGafnTSJ6Lrrqgq2GATT/v2V/4Y19bffiDp3JurevXKysqgI/Q4ARBvUmjVrjOdZt2ghzO1P1Ow7+PJ//XXlZ159lUfARBcuVP5t5UqiW2+t+v1PP408bgAg2nYAx64LazqM1ZljwhbixhtNhzeee+7qVjYePUr0f/9n+vtbtUIeNwAQbRvn+++/l3wzbCHF7/PPVQEV1czEhReVUBaTqEep7l+9uu7fz8va5e8RWY20YwfRggWGedwbNuCeAACibaMUFBTQs88+K0a3N1J0dLTVj+fRR1Xx1B7O9Onq/oED6/79wqhQ+Z7Tp9X9HAeX93MoBQAA0bZZSkpKKDAw0CaOpVkzVTwTNAU8eKJQ3i+SM+pMo0bq96SkqPtnzaqfh4LMokWLqF27dsgiARBtiLZjw9kcsni2b08UHl6ZZ60V8yVL6v79HTsa5nGLxZ/k4UHUpIm6vz4caL/44gspX3v+/Pm4qACiDdF2XIKDjafrafOsczQFtVnUs7Nr/v179pjP4+Y6BpeuviA28f23QQTHL168iIsKINqOINoVhUlUEjueCg93oaKwz6nk9AyqKM3FlRb88QfR9dcbz7PWXtYjogDK3XdXGkPVRhtN5XHzaPvgQcfow5isJJoQul4qHsv+zcujfa1q/g8g2nYt2mWp2ynf6y7Kc/+XwZa/52Eqzz6se/sBAQFidJpt0xecxVMUyxGTpERt2hjmWWsFWxbcV14xHIHXZETPxRHYY/u114hGjBA1/DIc48fC1VG05aXkjUuBJVy6ADUBEO1ajbDzT1O+521VBFsRbr9HxYj7km7tJycn05133im8OJqKibgUu7wZONdalGSsMlKurXA7IgdSY8xWpOGCBFdTzgoApxPtosi+ikAX+LegspRtVHpuJeX73KvsL4mfrVv7nM3QRgxdu/D6cAcSbHnj1Y+5Thxl4hqAskBzxRauILNIVPfWFo71EvsAgGjXkIKgloo4l1/cr+wvTVym7C86+pWux8BudDl2OCStTrBtVbg5F56dE3V/ixP/e2B1b0mYGyzrRumF6jUeJ8p9yaI95cgmKAqAaNf4BxzwjCraOeGqaCevU0U77DNccSP4+BDdfHP1ov3UU0RpabZxzFOmTBHHfLNFUv/KKsrpvssjaq5Co62JOO3oFkW0eYISAIh2TcMj4T3U8Ejwq9LEY1mGHxX4PaaGR+Km4YqbgHO2zQn3k09y3N52jtdVJH1fe+21olqOZcrldHB3UcSZC8dGiernXKT3YbfvlP2oAQkg2rUJTVyKorzdN5meiPRuRBXFabjiZvDyIrrpJtsXbOaSSPq+cMFyGRseiWEmJyF5a71luKjKXYKbCEC0r4TzsItPuog87A9EuONTafRcUVoZYyxNWCKE+8aqgi3SAMvSvev1OIqFJR4b89trpkhNhVv4XdmcYFsLDn9wTPtKwW6xfgAdz0ywyDFwO7+FrKGPPCdLeeJcg7IUeeIQbVsV7co87AZG8rAfpPKsytUb5bmRIpPke2lisiD4FSqOGiwJfX0zSawk4eXUrVu3dribwtOzUrhZsJOS8CPREnQ+ShLLV7aOoE4io2RS2EbKKc63SNvzjrlToxVfV3lotN0xmpLyMnBxINq2JdoV+fHm87DFAhpLrnxMTEzkjqO9e/c65I3BMW4Itu3glxRpdJQvb509fpeyXABE22ZEu+hYP8M87NQdVJq0WuRhN9HkYf+JKwocknf++VUR6C4ek2hPUgQtjPIwWKEZcP44OgqibTuiXRDUShHnsovByv7Sc65qSt+RL3BFnQyeW/D19RXL54Md9hw5Zn2v6zeSMN+z4iuDcMyog6sU0f4zYjtuCIi2DYl24HNqHnZ2mHpDJ69XRTv0E1xRJ2PZsmXS3EJn9p511AdTeakSy+b/FpapdeHGaxb3cM44gGjbTngkorcaHtn3sjTxWJax54o87Cm4ok4Ge748//zzNGHCBIc+zzd2jFLEuZffHClPfGv8Abp/VS9lvzeW0UO0bUm0yy+dqCYPuyFVFKXq1v7u3bvpf//7H8XExOCuARZnV8Jhs3ni7DQIi1iIts2l/Ek+IkaEW4887CtpJcqJ82v4LK6fZYYzZ4jGjCF6912iTz+t9LC+dAk3GqiE86x/FXnWH3pOou/859HSE941Flv+nLEMkuc2/kQns5FMD9G20cU15bnHJUe/gqAXqDC4jcjDHiLysPW/YTOEKbSLiwuVlZn+ga0X1hO33268aktYGG42Z2e+iTzr17aNrLEft3/yMSVPnIV/6tHNlFdSiM6FaKPcWG3hCufmvDt4oUp+PvrJWeF0PHN51mz3Cj9uANG2IFwNRhZoMS9Gu3YRrVplWKV84UL0k7PCRRK0C2H2JkdKoRGtH7dvUjg6CkC0LUWLFqo4H9VM4rNQy/t790Y/6QV7wIwePVqUNxthc8d2pbVrVpHqAc4+IrJo/3F0Ky4kgGjXFY5hm4tfXwk74cnifOKEun/5cnV/9+644fTi/Pnzohr8NXTrrbdSUVGRTR3blYtjtDFo9i6RRXtyGIooAIh2nXnnnXekjJFTp07V6N9366aKMxe0PXRI2Hl6ED3wgLp/7lzccHoydepU4VLoRaWlpTZ3bG9rlqGzH/exzLO0JX6/UhGHN7Z/BQCiXQd4wcZDDz1EDRs2pPT09Bp9hkMiN9xgeiKyWTOiixdxwzkrHglhyLMGEG094dqDh3i4XAsWLSK68caqgs2TkQEBlf/m9GnhFzGKqH17oo8/FuWqplk2jzs2luiXX/hNgugTsep/+nRktViKiaEbjFZ0f1bkWUdnnbPIMRxJPy35lXChYs4T/zva26J+3IcvnKJfDq6UbG2/959PrjF7pJg/gGhbjchIor59iV54QSw9foNo5Ei1ruKaNUS33Wa9PO4VK4huuaVq+48/XnncQH/2pZygHwIXUhuRm82FDNgvxFJ51jPCt0kx9SsfGrxE3hJ+3Byz54nYK9t/a+dYSsnPws0B0bYtoqKM
l/LSlvTSc8RbXfjmP/8R/i5FuE6OimfiEbPhGbZ71dOPe+eZQ2bb/9wb9Vsh2jbG11+rAtmyZeUk5erVhnncixfr1z4vqZfbeemlyrJiojYu3X23un/lSsfp73zEfAzg6jayQH7mNVVa7LM4ytPAj5vfAvTiZbGCU26nm88MChTtLzi+22CFKIdOAES75iMRUWOrsFC/19TmzVVx1IYi/vpL3f/tt/qd38MPq+1wXFtm5kx1/4AB9n8dCwoK6JVXXqEGDRpIXtugqrWrNhwz8oCrIppzInfq8wAtLVLCIpyvrrWWHbRvqdL+IvEQARDtGoYOjtL1119PTz31lG7CzcvYZXHkJe8yS5eq+3v21O8ctamHbGolM2+eur9fP8e4ns8884yYEL5Ruq6ApCrxDS/Hsnlkra0aPzbETfciCpdKCpQJ2GbCSpYfIjIjDqxQ2ueRN4Bo14gwMQv49NNP0+DBg3Vr44svDPO4D4o6xO7ulamA8v758/U7xw8/VNt5802ikBCiHTuImjRR9//9t2Ncz2jxVOTsH6DCplSyOPb0m02RIk98fVwQNV3VU9nPS+v14sXNQ5V2+oisEW5/bWyAsuiItwOpsD2GaNeCkpIS6dVav9E80f/9n/k87iwdJ9APHCC67jrT7XMGCyxkHRf3s+b9uLkGpZ6pd+tOBZptv4O7CwyzINq2h6k87nvvFalgFujWOXOMPziaNq1cwQkcG1N+3K3EKPh0Toru7XP83Fj7rbcMr7E1LYBoWxyehPzhB/G6+GJlmIIX2tRw0WW9cOSImkf+1lsipjkWKzWdCc7a4DxxDpdwFgnHsbUTg3rjlxRJ/QIXSO1zmh9Pfmpj7ACiDQAAEG17JzMzk4YPH07Z2dm4AxyYnJwc2rJlizRfAQBE2475/vvvpXqPn3/+Oe4AB+bZZ5+VrnOAbPwCAETbPokSa8rfEgHdkydP4g5wYH4RzlhvCAMYf39/dAaAaAMAAIBoAwAAgGjXH3FxRD//XJku98EHwiN5IpEl5zM5oiPmUKldO6IuXYTd5WSi3FzH6V9OiRw6tDId8qOPKv2+dVzzVGsiMs5IftDvierp3UUFGl56XeREKWuH0mLpZ7H0nKvH9/KbIxUntmTxBl41OXz/cqlQcu+9c2l5tK9F/cAh2nYG+2GLEoRVFqewr4clFqewH/bNN1dtn82gOP/a3mEfFGOLj9i29sQJ6x/f3Mh/FA8Pg8UhW4dbZHGKtTHlh81+3Ml5mbq373J4rdEiEuzHnVrgPH7cTiHa84QadOrUic6dq3t1EBYNc37Yei8DDw8374f973/b1oi0tgQHm19m//zzbDVgveNjK1Njq/nk7W2dl4Fbm3/Ohphdhv6hp75+3JtOB5ttv6vPHxBtRxFttudsKtZpc+rXtm3b6vw9PXoY+mF7exNt2GBouKSn4VPXroZ+2D4+RGvXGvpx27Ph07vvqufBFX/8/Cr9voXDqrKf+7sucPZI//79hUlWSJ2Pj8MhWoFi7+mVJ/0MDJe4DqSj0kZjOPWl7wzanxpNS054Gfhx8z69aKUxnOohDK+4rYVRHgZvPmHpcRBtRxlpJyQkSJW6rwau7CKLh9bxU2utysKuF489ZtzaVWutysvT7RXtw0fUVVZfySer+4cNq9t3DxVBcn5oj2LPgDrARkZsKcrCwK/nF4vUV6rfwzYoojEpbKNDigQvc79HsXb9xmDZu9Zadf4xd13az9VYu94vroN2DmFg0GKlfY6vQ7QR01YQVttGixgsX67u795dv/Y5/GKsiMGCBep+sWbIbhGF75XzSNGEh7nwsbyfJyjrAo+wx48fT8eOHauzaMsjao7pZherVXGmHNmkiAYLuCNSIIoYyKLdRCpioIrmSDEpK5//PJ1Em/tbDk09uLq3wcTn4GC1iMJiJymiANGuISzIsni0bk0UFCTq3olCH/ffr+5nFz29+OwzQz9ujgFztEcbntGzXJnedOignsfbb1f6jXM45J571P0cDrIWnK2grWcYeiGONsbtoweEiMj7dyUcdtj7/xVNuTD24+aq7m6xeyURt0S5spabBhv4cXP7XMm9sSY84yzlyiDaNaS6wrzsh62nWx5nh5jz4+aRuD2n/vFtZW4ikt90Cgutd3xcIMDcROTr23+xaOqbpanOj1tvP+x1pwKsWpgYoq0zaWlpunwvh0KMpdzxaNcS3WLKj5sfGI7ghz1jhvEH00MPVWbPWJvp4VuNprw9v3EQxWQlWeQYeITPcWQe+XOe+EIL5omb8uPmlEdL+GGb8uPmSVJLpBxCtHWCs0VatGghXrHf1kW8eRJw4ECiV18leuedSj9qS/phc1j2xx9FdeuXidq3F7mrLo7lhy0qv0l+4xyC4owSXrxkS28Q/ArOcdQ3d4yRwiQzw7dLhWstwbSjW4w+NDh0cTY3zSLHwCEQnvzj6u5f+EyX4tiWXFzkn3yMBgQtktrnLBZ+aGlrTkK07VC0w8WQrKGY1Wouyp8XFRURAFdSVmZ/YQzPxCNmwwM88ka5Loi23YZH0sXQt66ZAsCxb3Z+mI8ZM8bujp1X/SkLSbz/kJZz80ScdiKQq8IAiDYmIoHD4OXlJeVrs12rPcETnPIiFl5MkleizshynFkW7Rnh23CRIdoQbeA4FIr0kyCRq2lv4REWbTm1rdGKrw1E+zeNaPNEKYBoQ7QBsAHa7RyjiHM3nxnShOja2ACDZfR7kiLQURBt+xDtHTt22OXkEgA1xSMxzOxE5DsOblgFHEi0d4pliRynfJMNmAGwc3gEzX7RvFjlK9+Zkp+HvGycl8wbS/l7acswis9NtYvzY6OnIcF/U/t/fpPyzBeJpecl5RhwOZVo+/r6isUXD4kl5HNwNYFdYyoPmxevyKLMRQg4T5yzSThPmj2+C+2kCAPH3435Yf9v+yinWhyD8IggLy8P4RFQK5KSksRCHtuxUvU6d9Rs+INHpvach725Gj/sjz0nO80ydIg2ALUkICBACqm1atXKZo6JiyhcmYfNhkzaiUafc+F22+ettww3MJziN4Zl0T4Ghk8H007i5oRoA1CVAlHip3HjxqIG5UdUWmr9JdClUh72N5JwsQXqpRK1BBGX2JJF7Y+j9pnSxymKctiHH0LaZe9Dg5cp58dL0gFEGwCjlJfbTqZFTUWbY972KtpyLPtK0R62H6Lt8KJ96tQp+vXXX6WFEgA4CleGRzhU4KjhES4XFiLCI1xJHeERJxBtdu/jeOTPP/+MqwccBvYNceTCwdVNRHb2+B0TkY4q2vv375dysjMyMnD17BCur8nWslx95/33RY3F3/WtYm9PmEr54zzs0zkp9dIGT3ByyiAvxmFr0/kWtFY1lfL3mpP5YTudaAP7ZfZs40UOHn3UsO6mM8OLazjOyyl+sqjWVx72xNANRkXzZQv6cQcLP25+aPCbAxbXQLSBDcM1Na+91nQ5saefJrK0/TlnkaxevdourVpryz9nQ8yGJ97dNR5+3BBtAFTee08V6HbtKsuzrVtHdPfd6v41aywv2jeL+nHXiqeJXiXqbAVedSg
L9NdihMsTgatPGhbmDTh/HDcqRBuASkRatCLOycnq/j/+UPcPHmz54+JMJLZAyMrKcti+55g1W7qyMHO2RoGmPNroQ6sV0Z4VsQM3KkT76hk+fDh1796dMjMxUWHP3HuvKs5Jmhq406ap+4cMQT9ZWrRHHVyliPbsyJ3oLIj21cGvrLfccgtdd911FBoaiqvlIOERLhzj70/k5kZ0113q/rVr0U968cYONTzCRlNyuTJ5UQ9vgQiPQLTrg+PHj9PChQtxpeycAwdIPHxNT0S2aMGVZdBPeuGRYN6Pm61gMREJ0QYOiCh2L+VZt2lD9O67RBMmEOXm1uyz8+YR3XDD1aX8sTFf//5Er75amec9aRK7PNpO/3DKHvtFcx50V58/pDhxYVlxvX0/rxqUU+Z4xDxHhDRqmhJoyo/7FZHyl3gp3S7uP54sHRi0WKrkw37jC6M8nC5lEKINaszcucZFV9iZ05EjNfsOFudBg0Q2w/+IOncmmjq15otrpk83ned97Jj1+4cNnYyJ4gubh1BsdvJVf//ksE1G86xrs/iG3fXkIgucJ82iV1xeahf338iDK42uGn19+y90Pt955rwg2qBGiIWoZsMbTz2lb3hj717zed7PPktUfBUD2h9++IGaNWtG6el1G3GyL4i5Zehv7hgjGUPVld0JodX6bTtyubF1pwLMnv8nXlOcZhm8zYl2amoqAdvjgw9UgWzbtjLPeuNGooYN1f0rV+rX/jvvqO3w/+eHCOd5aycy16+v+/e3b99e8rTZyCdVB3jkqjV84jzo9XFBdP+qXsp+jivXFRZ9+Xt4paSxwr6+SeEOe/+13DRYOc/v/edT6IU4WhFjaDjFfQLRtrBos58I+xx37dqV8vPzoZQ2RNOmqjgmJKj7Z85U9w8cqE/bPD8mL8K55hrOKtKEDCar7Yvs0DrD2UnH6hhj4RHufZcXqXB4JKdYvXcnhW1URIXDG3WBY7ayOHHqXr4mZU9r3To9fKtD3nvcn/JbzIOrexvEsDm+L5//khNeEG1Li7a3tzfddtttYsVcO/FDxUy2LSEiB4o4njmj7uc4s7z/p5/0a187ok/RhG/ZcEreP2KEdfqGRVse8bJoZxWpM6MTQtcrosITgXUV7Xsvi3bDK/y2x4a4Kd8/I3ybQ957ueJ85Vj+A0K0tQZXPCkpn//SE94QbWuERxLEMO6MVhWATdCliyqOnDni51eZZ61dhi5sPHSjQwfD8AzHuF1die68U91fx8hGvdDRfZwiHh+JeodsjLTypJ9B+MIz8Uidv58L+crf85nXVKmqOYcHtMvQ2d7VUWm1eaiBHzfnmbPZlLxoiLew9DiINiYigQyn2hnL3LDURGR1E6HPPXd1E5FXC6ei6emHXZ3ftqNPRG6qxo/7c+9pTvNbhGiDGrNoEdFNN1nPWtVUyuETTxCdOGH9/uGcaS4XdqWgcNWW+vDD5pi1se/nPOuESxcc/v4z5cfNk7Qp+VlO8zuEaINaERUlirEOrQxRfPhhpXeIJYsYRERUGkvxUnhRl1eKqdf3nDWvws2t6YqhKwjPiKeRB1wlq1POg14g6h4W1mORAQ4B/HxghRSO+WbPLCnP2lJFDGwBDgux3zhn6/TeO1eq6F6KxTWWE20ursoHAICt8OWXX0qpf2thhAIg2sZed+dKP5AePXrgSgCb4M8//6QmTZrQ0qVL0RkAon0l68VqiIYil2vnTlhCAtugSJTPQbopgGiboa6xQwAAgGhjIhIAACDaAAAAINqgDrAftjC0o5deqjReEiUSKSsL/eIsBJ2PogFBi6RKNrwgpb79vqtjb3Ik9Q+sbJ+NteZG/uNUKYt2J9pBQUE0a9YsKisrQ89bAV6cYmxVI/uKhISgf7ScPXtWFG2YRwEBAQ5zTrw4xdiqyhfFEvH4XP3dNU35Yb9sR0UYnEq0S0tLqXnz5lKK34IFC9DzFqa6cl+PP17/i1Tsmeli1Q7fqz179nSMH/qZg1YtN8Y2teba/2D3RKfxw7arkba7uzt17NhRSqsCloVXL2oNlw4eJNq2zbBK+t9/o59koqOj6YsvvqCtWx3D7vS1bSMVgewpDJeOpseLwgKBiqUsbxy60IsXNYZP7IfNK0fdYvcaFBZmEyhgY6INrMcDDxi3VuW6jfL+fv3QT44IL6NveNmzhC1etTFsrbXrnxH6rE6+pLFWbSaKQmjLm40QS/Ll9nnJP4Bog8s8/LAqzrGx6n6xAFDZP2AA+skRYZGULUy5mEJeiWrHyHFmWTTZ8EoPuGiDXDuTR/bah4a2iAFbrQKINrjMp5+q4syZI6LehFQeTOuHzf7UwDFpu2O0gR83Z5EsFiJ5r6ZcF+/TC55slNvp5jNDamuhGFlr/bC5RBuAaIPLcKqfMVtTS/lhA+tSnR+33hOBuxIOm52I/NBzEiYibUG0udbjyJEjhRhADWyBFSuIbrmlqmA/9phl/LCBdeFyZMb8uP+3fRQl5WXo3j7XyJTDJNqt3U7n8sO2WdGOFCpw7bXX0u23307nzp1DT9sIHM8Wz1F66y2iTz4RP+QZRHl56BfTbyjhwrf7I1H/8ieHOJ8j6afpFxHHfn/3BPpW+FH/He1tUChXbzgEMvJy+5xF4hqzx+n8sG16pB0cHCw5+QFgr3CFds7Xvu+++9AZwPFFGwBHYKWYscXbIoBoAwAAgGgDAABEu4ainZKSgh4FAAB7EO2TJ0+KdLJbqH///lLBXgAAADYs2jxZc4NYvdG7d2/0Kqgz7Eb47bdE//1vZVri2LFE2dm2cWxsKXz69GlcJOAYos0cP36cMjMz0augTkyebNw+ls2ujhyx7rElJiaKJf93C+/xZij8CxxHtAGoK76+RNdcY7vL7FmoWbCffPJJSk1NxQUDEG3g3HDpM1mg3323cmTNft+NGqn73dyse4x4iwQQbQAuc889qjinpan7RQEZZf/gwegnAOos2nnCsMLDwwM9COoFbQUdET5WmDhR3T9iBPoJgDqL9pAhQyRPBhcXF/QiuGref18V51dfJfIUfviLFxPdcYe6f+NG9BMAdRbtuaK0d4MGDUTs8Qh6EVw1YWHGK8XLG6cAlpSgnwC4qph2bm4uetDJOHSIiFPxn3+e6I03iH75hSfo6ue7lywhuvnmqoLdogVRjI3UfOUsksOHDwtL2xk2eX24OG/fgL/oFVEppovHJJp2dItU7gtAtIETMm0a0fXXVxVVdi1lMa8PWJxHjSJq356oW7fKwsO2VEODRbtp06ZSaJBtW22J0YdWG61O8/zGQRSbnYwbGKINnAl/fxJFLUyHLx5/XBRwzXeOvhgzZgz98MMPdOrUKZs5pi3x+82W8+LqMCg2ANEGToR2orBDh8o8and3oiZN1P1czgxYh9e3/6IINFekibqYKAn5/at6Kft9k8LRURBt4CxoxTkpSd0/e7a6f8AA9JM1KCoroYaXaz82FtXVC8vUGdtxh9cpoj09fCs6C6INnAX2/5DFOS5O3T91Kha/WBuu8XivEGsWZhbvnGI1TsX1GGXRnh25E50F0QbOwscfq+
L84otEu3ZVZnvceae6f+1a9JO1ePufXxVx5qwRDoUsOL5bEXPegs5HoaMg2sBZEEXJhfWu6YnI554jKi5GP1kL/+RjRjNH5K2zx+9UQXAnhGgDp8LVlejWW4078EVHO0cf7N1L1L070TPPkHD8W0wPPdRWpDvaxgh23jF3arTia6OZI8l5MLuCaAOnhOPZXJjgvfeIvvzS9vKo9YTzxw3tY3tK+doNGkyniAjbOMbjmQnkcngtfeQ5mfoFLqBVJ/2Q6gfRhmgD52PHDmNhoRCxbRJbljTyRngIQLQBsBHatFHF+pNPiHgxJPt9i2I2yv4tW9BPAKINgNURpSGVWD6XQ9Na7kyYoIr26NHoKwDRBsDqcElIObWRY9raIg1smCWLNlyKAUQbABuhXTtVnF9/vTJPfc4coltuUffzPgAg2gDYAPv3G68UL28vv5xPWVmwKgYQbQBshgULiG66qapgN2v2B9144000c+ZMdBKAaANgS5w4wdaslXnqPXpULuV3c1snbGuvpR9//BEdBCDaANg6+cJIPD09HR0BINoAAAAg2gAAANEGAAAA0QYAAADRBqD+KSoqIh8fHwoICEBnAIg2ALaOm5ubZNXagSsfm2BPUgT1FoV3X9oyjDq6j6PxooajtjwYABBtACwEp/21bNmSxo0bZ/Tvow+tNlpdpsX6AZIPNgAQbQBshG3xB0yWAuPt1W0/U3F5KToKQLQBsAW45Jcs0N+K8MjJ7GTySAijh92+U/bvSjiMjgIQbQCsTYko9yVXRW+44isqKC1S/jblyCZFtCeFbURnAYg2ANamXBhyN13VUxLmu5Z3o4xC1Q2Q49yyaE8P34rOAhBtAGyB93dPUMS5g7sL7U4IpbmR/ygjcN72JkeiowBEGwBLk5qaKiq3j6KffvpJ2Xcw7STdvfxLkxORH+yeSBXifwBAtAGwMJz6x1atNwnzbXYAlPk72luMrL+pIthv7RxLyXmZ6DgA0QbAWvz555/S6sjSUsM0Ps4amRy2ib7wmU4DgxbT2tgAKqsoR4cBiDYAAEC0IdoAAADRBgAAANEGAACINkQbgNpx6dIldAKAaANg6xQXF9PLL79Mt912GxUUFKBDAEQbAFunVatWUr52SEgIOgNAtAGwdU6dOmWwwAYAiDYAAEC0IdoAAADRBgAAANEGAACINkQbgHojKyuLNm3aRIWFhegMANEGwNZ58cUXSfyEyNPTE50BINoA2Drjx4+nN998k/bs2YPOABBtAACAaEO0AQAAog0AAACiDQAAEG2INgAAQLQBcFoCAgKoX79+hN8RgGgDYAeMGjVKytceOnQoOgPoL9pdunTxEaOEgtGjR+dgw4at9luvXr3yXn/99aLvvvvuEvoDW1223r17F77//vt/10i027Zt+3S7du0+w4YNGzZsVt0e+xcAAAAAAAAAAAAAAAAAYIf8Pzto3sTsry8sAAAAAElFTkSuQmCC) **Step 3: Recompute the Cluster Centers**\ For every cluster, the algorithm recomputes the centroid by taking the average of all points in the cluster. The changes in centroids are shown below by arrows. ![Step3.png](data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAW8AAAFuCAYAAABOYJmxAAAABmJLR0QA/wD/AP+gvaeTAABMrUlEQVR42u2dB3wU1RrFn2B5dsWGFTuKiiIqgtgAeWABu2IDpSNVQDpYQKqA9N57CS0QICSENEhCCiEhhBBqaIEkQHr93v1mnbLJburO7uzu+fub9+CS7N2Z3T17595zz/ef/wAAAAAAAAAAAAAAAAAALs4XX3xxfePGje/EgQMHDhyOO1q2bHlbhcT7ww8/XNe8efO8999/PwcHDhw4cDjmaNasWWGTJk1qlVu8P/30U+89e/YQAAAAx/HVV19deeedd56BeAMAAMQbgPKTl5dHs2bNoqCgIFwMACDewFmIiYkh8daiJ554AhcDAIg3cBYOHz5M7dq1o759++JiAADxBgAAiDfEGwAAIN4AAAAg3gAAAPGGeAMAAMQbgFIIDg6mcePGUUREBC4GABBv4CwMHDhQ8niPGDECFwMAiDdwFry8vKh79+7k5+eHiwEAxBsAACDeEG8AAIB4AwAAgHgDAADEG+INAAAQbwBKYdSoUZJFMCkpCRcDAIg3cBYefvhhyeN95MgRXAwAIN7AWVi+fDkNGjSICgoKcDEAgHgDAADEG+INAAAQbwAAABBvAACAeEO8AQAA4g2AFVJSUqh58+Y0dOhQXAwAIN7AWeD4V/Z3v/baa7gYAEC8gbOQmppKmzZtIk9PT1wMACDeAAAA8YZ4AwAAxBsAAADEGwAAIN4Qb2BfsgtyaXG8L/UKnEtd/WfSrNjtdDk3023OP6cgj5Yd2U19guZR5z0zaGbMNkrJSXea559XWEArjuyhvsHzxfOfTtNjtlJy9hW8sSHewJX5ffJYuuXp++mmDq/T7fO/Vo5nVnYj/7MxLn/+8ZfPUEOP/mbnzsdTKzrTrqQowz//o1fO0ZsbB5Z4/k8s70Rep8LxBod4A1ckKz+H7m72vOTxvrHNyyUE4LFlHeh8VppLj7gbePQrcd7y8dCSHykp45KhR9yWhFs+7l/clo5fvYA3OsQbuBo8PXDbP5/Qzf2b0EP/tKGZsV60REyfPLe6uyIAA/YtdtnzX3TYRznPh5f+JE03LD/iRy+u6am081SSUVmZsEd5ng+KL5op0Vuk6ZOX1/ZW2nkaCEC8gYvRwW+q8iFn0ZbZfSZaaW+2ZZjLnn+PgNnKefI8v8y+C/FKe6MNvxr2+fcLXqA8TxZumahLx5T2+uv64I0O8QYu92bzHqd8yLed3K+0x6ScVNpfF/PBrsqPvv8o57kuMUhpTxTzyHI7j8KNCo+q5efJI26ZMxkpSnvtlV3xRod4A1djdMRa5UP+xoYBdCj1FJ0Qc6StvUYp7V38Xfe2e3L0ZuU8X1vflw6mnKBT6Rfpix1jlPa2vpMN+/x52kt+njxVwiNunqP/ZtcEpZ2/oAHEG7gYLFS8KGdtweuuhd/S/uSjLnv+5zLT6FGxKGvt/O9c0IYCzx0y7PO/lH1VcsVYe/53zG/jFI4ZiDcAlYCnC+5b9H2JD36NBd/Q1IOuH1TleSKMagpXhiXhGxfpYfjnv/1UBD2wpJ3F5z8yfDXe4BBv4Io0btyYXnjhBfIM8ZUWL+uJW292mnznM5H2nj/sNteBvdK8uYUX9+qs+lmadgg4G+s0z5/tgDy99eq6X+jZVd3oa+/x5JsUjTc4xBu4IkVFRXTrrbdKHu9Lly7hggAA8QbOQkZGBoWHYxceABBvAACAeEO8AQAA4g0AAADiDQAAwN3FuyCD8k/OoZzoTtKRf3K21OY2FGRR/ql5lHOwC+Uc6EB5J2ZQUb5+ecyFhYV09epVfNoA
gHhXQUguh1GW7xOU4fkfs4PbCtNCXP4FL7xygLL8nilx/pk+j1BBir8ufcbHx9M111xDb7zxBj5xAEC8K05R/mXK3PVQCeFSBGzXg1SU57p50nx3YemLSzn/nfdSUW6yzbv19vamG264gVq0aIFPHAAQ74qTd+QPM6HKOzFNOvjPcntu/AiXfbHzEidozv8uyjs2mfJPzBRfWg+o539In1S//Px8bM4BAOJdObJDP1BEKv/sGlVYzq5V2rNDXHd0mBP+hXr+pxeqA/ILW9XzD34LnwoAIN4GE28hTLJIFVzyU8UrJUAVryDXnZflLybl/JO3Ke2Fl8PVuX//evhUAADxNha5sb1VkQpqRIUZCVSYeVQSbGXaIKaHy77YuYcHq+cfUJ8K0+OoKOs4Ze9torTnHGiPTwUAEG9jwWKV4XWj1QW7jG03UOHVgy77YhdlHqPM7bdYP/+t10puHAAAxLtyIpOfrvFhdxaeZFGUtSDTJo/Nnu6MrddZFC5evHN18k8vFl9S11s4/+piAXOSzfu7cOECnT17Fp80AFxdvAvTQinT9/GSPuzdT4lR4X7b9CEeJyf8c+FtriUd/Gd3GnEWXo2mnIivxHV+TJz/w5S9/xMqSA3Spa/hw4dLMbD8/wAAFxXvorxUyWtt1YcsNpLouRMQ2J4hQ4bQHXfcQUuXLsXFAMBVxTs3frgq1N73mXzYx6cIT/I9Sntewki8ak5IQUEBLgIArireWitb/rn1Snv+mZWqlS/0Q7xqAACIt5HEm+17ig85NVAdtV3ajU0kAABgVPHOjemuEenGkrWNfdhZQQ1VH3ZsH7xqAACIt5HEuzA9VtjY/luGDzsGrxoAAOJtNKsgL1Ky59qiD5tzt50B4UnnvOzcg101PnUb5oVLedzzxeN3+zePfI7kjbcbhdlSNkruwZ9F/x2l16Uov2Re9969eykhIcHmi5U5BXm04sge6r93IfUMnEPz47zpSm6m3U4/tzCfVibsoV9F/z0CZtO8uJ102Y795xUW0KqjATRg7yLqLvqfe2gnpeXYL48+X/S/mvvft1jqf86hHZSSk07AzcVb0gaRq83eY45vZR9yzv7PnMaHXXglQnjSn9YtL7zwSpTlPG7h2S5MDdb//MQO1Cy/OhZsnLWkjBgtjz/+uOTxjomx3d1SXNppaujRn26f/7XZUWfVzxR47pDu53/k8hl6Y8OAEv0/s7Ib+Z/V/67w6JVz9NbGQSX6r72yC+0+E617/8evXqB3Ng0u0f/TK7qQ9+koKKq7i7ezwh50FjHreeEPSF72yj9+usUNTKq9sqbI49YxdlXcUfBmqfLkgXP1nPfff18S8NzcXJt0n12QS6+u+6WEcMjHY8s60Pks/fLYecT/uoUvDvmotaw9nctM1a1/HvFb+uKQj4eX/kRJGfq9/jzifnvTIKv9P7jkRzqZngwhgHg7H3kJf2mE7G4pLzvv+FTJs64suB4eWvnHTxxXLI97kimPXIi28vhxA/U7P9Gf0v+OO8XfJ4r+pxfLA++rW/98ey4LxSNCqKYe9JSmTJ5d1U1p56kEvVh4eJfSz0NCqP6J3kwL4nbRc6u7K+2/BM3Xrf9lR3abCeWkA5uk5/TC6h5KO0/j6MXKBH+ln/sXt6W/ozbSosM+9OKankp7F/8ZEAKIt/ORHdZa9amfWaEOWM95qC6afc0qP/IT00fK4yctUR///Gb18fe+o9/IM+Jrtf9TqkgVJHvZJVK3857pikiwaMr4nTmotDfZPES3/n8OmKX0Mzt2u9IeJKZr5PY3N+r35dk7aK7Sz/SYrUp7yIUjSjvfGegFrzHI/UwWX1wyERcTlfZXxJ0RgHg7n3jvfVeTF+6jtBem7VXnvgNfq/zj73tPk8e9Q318sR6gRr2+rN/5hb6v9n/BU+1fzMMr/e95Qbf+23hPUERi60l1DYTnweX2V3UUjx98Jin9bDy+T2nneWi5/aU1vXTrv/3uKUo/axPVLBqeqpDb+S5Avy/PGUo/vGArczYzxWzuHUC8nY7cQ79o8sIbCp96osjLPiFE/W11WkE4UCr9+HG/mn0JcB55UdZJ8zxu4f7Q7fwODzHPA8+Ip6Ls02ZfKjlR7XTrf3TEOkUkGou533ixeMjC8fmOMUp7R79puvU/IWqD0g8vmvKXBs9xf7VznNLezneybv3zNI3cT4P1/ehQ6ilpjv+bXeqX2re7/tat/xkx28xG2DEpJ+lC1mWzLzW+FgDi7XSUmRcu5WVXPhmRvwwyvG52WB53ufLAbeCoscap9IvSXK+1BbM7F7Sh4PNxuvXPXxS8KFla/3o6Xs5lptGjYlHWWv93zG9DPkkHdOv/YvYVenJ5p1L733k6EkIA8XZOTHnh11rOyz7+T9UfX8w1W84jry4tIOp+flbzwKtJC6pMRkYGLV68mKKibG8dW3M0kO5d9L1F4eCRsd54HNtL91npn+8M9Gbz8RDR/w8W+x8Zvlr3/red3E81xWKlJfEeEbYCAgDxdvIROOeF7/9U8qhzxC0vZNrSg81ecl685Ihcdnrw4+uVx225/yg1D937ftH/R8Lj7a/8O2/OYX/3Sy+9pEv/PF3A0xPssmB/Md+qB5yNtdv583TJj77/UF3hsnhqRWf6cudYadHUXvB0Ec9/s8uDR8JfiGkjPUfcxeE5fp6e4vn9J0T/PG21AyNuiDdwfiIiIujLL7+kgQMH4mIAAPEGAAAA8QYAAIg3AAAAiDcAAACINwAAQLxdQbylPOwFUh41Z2Lzn7nNZZDytheJykQ9pB2dnC1u0zzxUvD396cZM2bQoUPWN6uY8rD9pRApzuvgcKP0PPtdfyUPW+RR9wqcK4U7Xc3LcpsPPacDrhPb6wftW6LkodszD7ygqJDWHwumwSFLlTx05IFDvMvWNSlvu3bJvG2RwV2V3Y+GOb+r0SJf5DkLed+PSxkretO9e3fJ4z1+/HiL/344LYkabfi1xAaP50Umx97zh3V/fgmXz0pb6x2VB+5oOI+bw7ss5YH7JumfB867ZJttGWYxDxy7MyHeVjHlbT9iPY9abKrhn3HeO4oM8SX0ZOl533kpuj6FtWvXUvv27SkoqOSmobLyuHnDB2/B1ouy8rgf1TkP3Agj7tLyuDnm9rQQVz1H3E03D7Xa/wNL2tEJ8eUCIN4lb5cT/iyWtz1JOjJ33qO05x353WlfzLzECeZ52+Lvecen2C3vuyxmiRhVbeEADlrijO7aK7sq7XwrrRc8PaAVKs7D5hJi2jxwLm3mqnASoDaPe3yUhxStq80D765jHjgnIcr98Bb7sZHrpSkrbR44JxcCiHfJkV/oh5q87VXqiOTsajWPOqSl076YvGVdOb/TqgjZK++7LHhLtaU87l1JUUp78y3Ddeu/q/9Mi3nce0T5Mrn93U1DXPbD3jd4vnKe/2jyuEM1eeANPPrp1j+vMcj9aHNoIi8eU9rrr+sDVYZ4WxDv4DfVPOoU9RwKUgM1Ua6NnPbFtJ73Ha6en389x73ZNNGpXifDlXbOKrGHeHBcqtwPBzzJcF1KdxAP7ZcnL9j
KcOk0uZ3vQuzx5clVgWR4qkpu56wYAPEuQW5sL3UEKoScs7alvO3gt9RphZjuTvti5sYN0OR9NzDliUt5283UvO0D7R32/P4KX6t8SLmILgcccR72ZztGa26bp+vW/7hID6UfrgXJos3Cof1S+UkEPrkq00TZOPk8X1vfl+JST1OyWGP43mei0v6193i7TJvxl2Ss+NK+lH1VCvmS2zlkC0C8S8CVzzO23WA9j1pEnbJbw1kpzDhcjjzxMIc9P3Y68KKUtQWrGgu+kUp66QU7HR5yYB64o3F0HjgXbnh8eUer/fOBdEKIt1W4YK71vO2pTqLS2VKudm5Mz3996qqPm/9sOe+b87bH6/q0/vjjDxo5ciRdvGjdscCLZvcs/M6icHBBW70pLQ98TOQ6l//AW8sD5+O3sJU26YNdJZx7PkQsPss+8tR/fdyl5YEP0XGxGuLtIpt0CtP2SRnYnIVtysNuZRcPtG3uHiz7uLN8n1Cq2PAcd/b+T8S5PSQ5TXihVpu3rRc33HCD5PEu6/3B5bP4Vp291XIetr9YNLQXPMfOpbu43qOch737TLTbfOg5D5ynh9hlwiNhnray1YiX62la8nFrfeQ8XcZ55JyH/pi4E/h0+19mayAA4u16sI9biLRVH7f4IirKS3XIU8vPz6eGDRtK4r1o0SK8Vm4I+8gtbQCSDy5hd0pHHznEG+Jt3CkfMe1h7uMeL5VWy/S+T11wPTzIrs9pwIAB1LNnT0pPT6fExEQ6e/YsXig3pbiPm6eh2BJaV+Pj7uIPHzfE2w3h0meqj1sd3Rac36jxcb9rt+dz5swZuv766+naa6+lyEgsNLk7nFVjyccdfvGoWdV5APF2O7SWv4KLO5V2zmRR5r4DXrbrcwoJCaEpU6bgxQHSqFoW6eVH/JR2drloM0wAxNvtyI37VePjfl141I8LH/eZYj7uDrhQwCHMjPUy95GLQs2cFggfN8Tb7SlMjzO0jxu4N7zh6bFSfOR8bD8VgQsF8Tayymp82GLHppQzYqO88PyTs6341PX3cSt3ALlES4Ult3dvom5iN/X8+UQZmjjogoICCggIwOKlG+J5Isyqj7y8oWPsE994fB8NDV0m5a3zomcq8r4h3rrr9pUDlOVXR9e8cB5dKz71nfcKH/f7ZlktehISkkZ165KwBJofjz9OtPdfqzxHwrJlcNKkSXhDuCEcO8BTJZzRXmtZe/p4+yhpc0554Eja/3mOsOgT5wAzAPHWhaL8dFH04LFSfNgPiZ+57LTnl5R0SThL7hPC3FUcOSUE/L77iC5dIlq+fDnVrl2bZs+ejTcFKDc84n7PwgYfrU+cNwEBiLfNyTs6WuPDrmHK0z420SwvPDd+uNOeX/v2G4RIXy+OpnT77UU0dizRP/8Q1aypCnj//uJLrKgIbwZQYbg0mizUPPUyOmKttLW+LvK+Id56k73/Y9WHnbRMac8/t071YYvYVmelTRsW6BhxHKO5c9X2rVtV8W7cGO8DUDm4pqYs0pwAKaPN+355bW9cKIi3DuItCh0oPuxLu5V2zhxRLX6vOu35tWypivS2bWp7dLTa/vzzeB+AytHNf5bFvG9OI5TbOYsGQLxtTm5sH01eeGPhwz4pfNhJQtSbqD7sg867SWHwYFWkX3uN6NgxsQHjHFGLFmp727Z4H4DKoc375p2Y7BNnlwmHWMntHKIFIN42p/BqTOl54SKqlRP/nJUjImr7pptKOk3ko3p1ouBgvA9A5ShP3nd5XSsQb4h3hTHlhVe34sP+26nOhf3afn5+Zm3s6b7uupLCfc01RKNGmf9+QkKCaBtF69atwxsDlIutJ637xLkGJoB46zsC57xwkaHNWdqqD9vf6c7jH2ElYb/2L7+YhwmFi5uH1q2JHniA6O67TdMmPj4lf3+p2MnDv9+CfwCA8t7hCZ94O9/JUt76I0t/olZeI6XNPwDiDcrJtGnTxDTJTbR58+ZK/X5ycjJ16tSJPD09cTEBgHgDe3LhwgVcBAAg3gAAACDeAAAA8YZ4AwAAxNtNKMxIEP/rGlkegYGBtGrVKryoAEC8XVy4U4Mpc/ttlBPdsWICXpgjZZ/wjkzO/Obsb1vlfZev/1zKP7OCcg/9IvrvIeWN52Sm0jPPPCNZ+9jiZ0vi4uKoefPm9Mknn+BNYwA4vW/T8RAaHrqcfgmaT4vjfelKbqb93n4iuGzLiVAaEbaC+gTNo0WHfeiynftn6+FvYSul/hcerlheeJH4j/3o/Pu9g+ZKv5/igLxxiHcVhVvZ9h7dqVwCXnj1IGXted5C3ndtm+V9l9p/+iHK8n+xZFyt75O0aOYIatSoEeXk5Ni0T3asVKtWjW655RZRyCEX6ulATom87OZbhpfYAPPsqm6052yM7v0nZVyyktfdlXyTonXv/2xmCr2/9XeLeeHep8vOCz8nBjkfbvuzxO9zbc6dp+1bhBviXRkBFCKbuePOEgKYE925dAEvyBAi/aTj8r7F6J6/JKz2730/FeWl6NL1VhE9ePHiRainA8kvLKAmm4dY3Xr+sNgQcyYjRb+3nxjxW/ri0OZ185eLniPulhaEWz7uX9yWjl+9UOrvf7Dtj1J//9jV8xBv4wp3uEXhVvK6Y3tZ/d28xHGavO87Rf73GCnzO3Pn3Zq872G6Pfe8Y5PU/rffTnkJf/2bN36v2n/cQKici7LqaIAiNDWF0HBe9mwRBsU7GeX2noFzdOvfPK/7exoVvobmHtpJL6zuobR39Z+pW/+bxVSR3M+9ov8/96+meXE76UVNXnhHv2lWf5+nWrS//8f+VdLvv7Sml9L+kwjRgngbVrhrWA+eUgTccuZwzv5P1bzv04vUEZGd8r5zIr5S+z+lBnIXnN+k9i9ibYFr0i94gSIyEw9sVNr3XYhX2l/36K9b/wNFNoncz9jI9Up7+MWjZmmCejFMzPHL/fAXh0x0ygmlnYXcGjxHL/8cC79MbOoppZ1Lv0G8DUhewqgyhVuav97znJj+uFLi97XRsAWX1DCQwrRQ9XcDXtHt+WeH/E/tP3m72v+VSLV/MR8OXBMeVcois1qMwmV4qkRuf2ZlN93651G13M/yI2rgmb3yuvmuQu6HF0lleLFSbueam9bgxUn557gosgwvtmqnniDeBoWnNUoVbjGnXJRjuWp67qG+5nnf2afFz54To+1mxRY+dXruYkpELQrxOv3U9kuaNX08Ze1rofZ/4EeonIsy9aCn2Qg74fJZSbg4BEpu/8p7nG79z4z1Uvp5bX1fihcBVCx8HfymKu2f7xijW/88xSH3U39dHykvnF02XFpNbm/tNcrq77Ngayv6xKWafl9bRIIXMyHeRhbww0MrLNzSCLfMvO9rpcrwuk37ZBymDK8bpb52jfuPZAu89ab/UOJSuf/q4i5gr77XTrhNDhw4ACV1AOy04JQ+awtud8xvo6tj4nxWGj22rEOped1eJ/XLu0/OvkJPiJF9af3zvLg1Lorff7KM399wbK/dXk+It40EXBLu7DNlT72Umvc9/l+V1fiwxQJoftISm/nA80/Okb4kuM8lA/
) *(embedded image data omitted)*

**Step 4: Reassign the Points**
Since the centroids change, the algorithm then re-assigns the points to the closest centroid. The image below shows the new clusters after re-assignment.
*(Step4.png omitted: the figure shows the points re-assigned to the updated centroids.)*

The algorithm repeats the calculation of centroids and assignment of points until points stop changing clusters. When clustering large datasets, you stop the algorithm before reaching convergence, using other criteria instead.

*Note: Some content in this section was [adapted](https://creativecommons.org/licenses/by/4.0/) from Google's free [Clustering in Machine Learning](https://developers.google.com/machine-learning/clustering) course. The course is a great resource if you want to explore clustering in more detail!*

### Cluster the Spotify Tracks using their Audio Features

Now, we will use the `sklearn.cluster.KMeans` class from scikit-learn to apply the $k$-means algorithm to our `tracks_df` data. Based on our visual inspection of the PCA plot, let's start with a guess of k=3 to get 3 clusters.

```
initial_k = 3

# Scale the data, so that the units of features don't impact feature importance
scaled_df = StandardScaler().fit_transform(tracks_df[audio_feature_cols])

# Cluster the data using the k-means algorithm
initial_cluster_results = KMeans(n_clusters=initial_k, n_init=25, random_state=rs).fit(scaled_df)
```

Now, let's print the cluster results. Notice that we're given a number (0, 1, or 2) for each observation in our data set. This number is the id of the cluster assigned to each track.

```
# Print the cluster results
print(initial_cluster_results.labels_)
```

And let's save the cluster results in our `tracks_df` dataframe as a column named `initial_cluster` so we can access them later.

```
# Save the cluster labels in our dataframe
tracks_df['initial_cluster'] = ['Cluster ' + str(i) for i in initial_cluster_results.labels_]
```

Let's plot the PCA plot and color each observation based on the assigned cluster to visualize our $k$-means results.

```
# Show a PCA plot of the clusters
pca_plot(tracks_df[audio_feature_cols], classes=tracks_df['initial_cluster'])
```

Does it look like our $k$-means algorithm correctly separated the tracks into clusters? Does each color map to a distinct group of points?

### How do our clusters of songs differ?

One way we can evaluate our clusters is by looking at how the distribution of each data feature varies by cluster.
In our case, let's check to see if tracks in the different clusters tend to have different values of energy, loudness, or speechiness.

```
# Plot the distribution of audio features by cluster
g = sns.pairplot(tracks_df, hue="initial_cluster",
                 vars=['danceability', 'energy', 'loudness', 'speechiness', 'tempo'],
                 hue_order=sorted(tracks_df.initial_cluster.unique()), palette='Set1')
g.fig.suptitle('Distribution of Audio Features by Cluster', y=1.05)
plt.show()
```

### Experiment with different values of $k$

Use the slider to select different values of $k$, then run the cell below to see how the choice of the number of clusters affects our results.

```
trial_k = 10 #@param {type:"slider", min:1, max:10, step:1}

# Cluster the data using the k-means algorithm
trial_cluster_results = KMeans(n_clusters=trial_k, n_init=25, random_state=rs).fit(scaled_df)

# Save the cluster labels in our dataframe
tracks_df['trial_cluster'] = ['Cluster ' + str(i) for i in trial_cluster_results.labels_]

# Show a PCA plot of the clusters
pca_plot(tracks_df[audio_feature_cols], classes=tracks_df['trial_cluster'])

# Plot the distribution of audio features by cluster
g = sns.pairplot(tracks_df, hue="trial_cluster",
                 vars=['danceability', 'energy', 'loudness', 'speechiness', 'tempo'],
                 hue_order=sorted(tracks_df.trial_cluster.unique()), palette='Set1')
g.fig.suptitle('Distribution of Audio Features by Cluster', y=1.05)
plt.show()
```

### Which value of $k$ works best for our data?

You may have noticed that the $k$-means algorithm requires you to choose the number of clusters, $k$, before you run the algorithm. But how do we know which value of $k$ is the best fit for our data?

One approach is to track the total distance from points to their cluster centroid as we increase the number of clusters, $k$. Usually, the total distance decreases as we increase $k$, but we reach a value of $k$ where increasing $k$ only marginally decreases the total distance. An elbow plot helps us find that value of $k$: it's the value of $k$ where the slope of the line in the elbow plot crosses the threshold of slope $=-1$. When you plot distance vs $k$, this point often looks like an "elbow".

Let's build an elbow plot to select the value of $k$ that will give us the highest quality clusters that best explain the variation in our data.

```
# Calculate the Total Distance for each value of k between 1 and 10
scores = []
k_list = np.arange(1, 11)
for i in k_list:
    fit_k = KMeans(n_clusters=i, n_init=5, random_state=rs).fit(scaled_df)
    scores.append(fit_k.inertia_)

# Plot this in an elbow plot
plt.figure(figsize=(11, 8.5))
sns.lineplot(x=k_list, y=scores)
plt.xlabel('Number of clusters $k$')
plt.ylabel('Total Point to Centroid Distance')
plt.grid()
plt.title('The Elbow Method showing the optimal $k$')
plt.show()
```

Do you see the "elbow"? At what value of $k$ does it occur?

### Evaluate the results of our clustering algorithm for the best $k$

Use the slider below to choose the "best" $k$ that you determined from looking at the elbow plot. Evaluate the results in the PCA plot. Does this look like a good value of $k$ to separate the data into meaningful clusters?
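Before turning to the slider below, note that the elbow plot is not the only way to compare candidate values of $k$. A complementary check, not used in the rest of this notebook, is the silhouette score, which rewards clusters that are internally tight and well separated from each other. The sketch below assumes the `scaled_df` and `rs` objects defined in the earlier cells.

```
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# silhouette_score needs at least 2 clusters; higher values mean tighter,
# better-separated clusters
for k in range(2, 11):
    labels = KMeans(n_clusters=k, n_init=5, random_state=rs).fit_predict(scaled_df)
    print(k, round(silhouette_score(scaled_df, labels), 3))
```

A value of $k$ that both sits near the elbow and has a comparatively high silhouette score is usually a safe choice.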
``` best_k = 1 #@param {type:"slider", min:1, max:10, step:1} # Cluster the data using the k means algorithm best_cluster_results = KMeans(n_clusters=best_k, n_init=25, random_state=rs).fit(scaled_df) # Save the cluster labels in our dataframe tracks_df['best_cluster'] = ['Cluster ' + str(i) for i in best_cluster_results.labels_] # Show a PCA plot of the clusters pca_plot(tracks_df[audio_feature_cols], classes=tracks_df['best_cluster']) ``` ## How did we do? In addition to the mathematical ways to validate the selection of the best $k$ parameter for our model and the quality of our resulting clusters, there's another very important way to evaluate our results: listening to the tracks! Let's listen to the tracks in each cluster! What do you notice about the attributes that tracks in each cluster have in common? What do you notice about how the clusters are different? What makes each cluster unique? ``` play_cluster_tracks(tracks_df, cluster_column='best_cluster') ``` ## Wrap Up and Next Session That's a wrap! Now that you've learned some practical skills in data science, please join us tomorrow afternoon for the third and final session in our series, where we'll talk about how to continue your studies and/or pursue a career in Data Science! **Making Your Next Professional Play in Data Science**\ Friday, October 2 | 11:30am - 12:45pm PT\ [https://sched.co/dtqZ](https://sched.co/dtqZ)
true
code
0.553385
null
null
null
null
## These notebooks can be found at https://github.com/jaspajjr/pydata-visualisation if you want to follow along https://matplotlib.org/users/intro.html Matplotlib is a library for making 2D plots of arrays in Python. * Has it's origins in emulating MATLAB, it can also be used in a Pythonic, object oriented way. * Easy stuff should be easy, difficult stuff should be possible ``` import matplotlib.pyplot as plt import numpy as np import pandas as pd %matplotlib inline ``` Everything in matplotlib is organized in a hierarchy. At the top of the hierarchy is the matplotlib “state-machine environment” which is provided by the matplotlib.pyplot module. At this level, simple functions are used to add plot elements (lines, images, text, etc.) to the current axes in the current figure. Pyplot’s state-machine environment behaves similarly to MATLAB and should be most familiar to users with MATLAB experience. The next level down in the hierarchy is the first level of the object-oriented interface, in which pyplot is used only for a few functions such as figure creation, and the user explicitly creates and keeps track of the figure and axes objects. At this level, the user uses pyplot to create figures, and through those figures, one or more axes objects can be created. These axes objects are then used for most plotting actions. ## Scatter Plot To start with let's do a really basic scatter plot: ``` plt.plot([0, 1, 2, 3, 4, 5], [0, 2, 4, 6, 8, 10]) x = [0, 1, 2, 3, 4, 5] y = [0, 2, 4, 6, 8, 10] plt.plot(x, y) ``` What if we don't want a line? ``` plt.plot([0, 1, 2, 3, 4, 5], [0, 2, 5, 7, 8, 10], marker='o', linestyle='') plt.xlabel('The X Axis') plt.ylabel('The Y Axis') plt.show(); ``` #### Simple example from matplotlib https://matplotlib.org/tutorials/intermediate/tight_layout_guide.html#sphx-glr-tutorials-intermediate-tight-layout-guide-py ``` def example_plot(ax, fontsize=12): ax.plot([1, 2]) ax.locator_params(nbins=5) ax.set_xlabel('x-label', fontsize=fontsize) ax.set_ylabel('y-label', fontsize=fontsize) ax.set_title('Title', fontsize=fontsize) fig, ax = plt.subplots() example_plot(ax, fontsize=24) fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(nrows=2, ncols=2) # fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(nrows=2, ncols=2, sharex=True, sharey=True) ax1.plot([0, 1, 2, 3, 4, 5], [0, 2, 5, 7, 8, 10]) ax2.plot([0, 1, 2, 3, 4, 5], [0, 2, 4, 9, 16, 25]) ax3.plot([0, 1, 2, 3, 4, 5], [0, 13, 18, 21, 23, 25]) ax4.plot([0, 1, 2, 3, 4, 5], [0, 1, 2, 3, 4, 5]) plt.tight_layout() ``` ## Date Plotting ``` import pandas_datareader as pdr df = pdr.get_data_fred('GS10') df = df.reset_index() print(df.info()) df.head() fig = plt.figure(figsize=(12, 8)) ax = fig.add_subplot(111) ax.plot_date(df['DATE'], df['GS10']) ``` ## Bar Plot ``` fig = plt.figure(figsize=(12, 8)) ax = fig.add_subplot(111) x_data = [0, 1, 2, 3, 4] values = [20, 35, 30, 35, 27] ax.bar(x_data, values) ax.set_xticks(x_data) ax.set_xticklabels(('A', 'B', 'C', 'D', 'E')) ; ``` ## Matplotlib basics http://pbpython.com/effective-matplotlib.html ### Behind the scenes * matplotlib.backend_bases.FigureCanvas is the area onto which the figure is drawn * matplotlib.backend_bases.Renderer is the object which knows how to draw on the FigureCanvas * matplotlib.artist.Artist is the object that knows how to use a renderer to paint onto the canvas The typical user will spend 95% of their time working with the Artists. 
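As a small, self-contained illustration of the two ways of working described above, the sketch below draws the same line twice: once through the pyplot state-machine interface and once through explicit `Figure`/`Axes` objects. The data values are just the toy lists used earlier in this notebook.

```
import matplotlib.pyplot as plt

x = [0, 1, 2, 3, 4, 5]
y = [0, 2, 4, 6, 8, 10]

# pyplot "state-machine" style: calls implicitly target the current figure and axes
plt.figure()
plt.plot(x, y)
plt.title('state-machine interface')

# object-oriented style: keep explicit references to the Figure and Axes objects
fig, ax = plt.subplots()
ax.plot(x, y)
ax.set_title('object-oriented interface')

plt.show()
```

The object-oriented form is the one used in the larger example below.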
https://matplotlib.org/tutorials/intermediate/artists.html#sphx-glr-tutorials-intermediate-artists-py ``` fig, (ax1, ax2) = plt.subplots( nrows=1, ncols=2, sharey=True, figsize=(12, 8)) fig.suptitle("Main Title", fontsize=14, fontweight='bold'); x_data = [0, 1, 2, 3, 4] values = [20, 35, 30, 35, 27] ax1.barh(x_data, values); ax1.set_xlim([0, 55]) #ax1.set(xlabel='Unit of measurement', ylabel='Groups') ax1.set(title='Foo', xlabel='Unit of measurement') ax1.grid() ax2.barh(x_data, [y / np.sum(values) for y in values], color='r'); ax2.set_title('Transformed', fontweight='light') ax2.axvline(x=.1, color='k', linestyle='--') ax2.set(xlabel='Unit of measurement') # Worth noticing this ax2.set_axis_off(); fig.savefig('example_plot.png', dpi=80, bbox_inches="tight") ```
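To see the Artist hierarchy from the notes above in action, you can ask an `Axes` for the artists it owns. This is a minimal sketch using the same toy bar values as the earlier bar plot; the exact printed types and counts will vary slightly between matplotlib versions.

```
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
bars = ax.bar([0, 1, 2, 3, 4], [20, 35, 30, 35, 27])
ax.set_title('Inspecting Artists')

# every drawn element is an Artist owned by the Axes
print(type(bars[0]))           # a Rectangle patch for the first bar
print(len(ax.get_children()))  # all Artists currently attached to this Axes

# the FigureCanvas is the surface the Artists are ultimately rendered onto
print(type(fig.canvas))
```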
true
code
0.476641
null
null
null
null
<a href="https://colab.research.google.com/github/dauparas/tensorflow_examples/blob/master/VAE_cell_cycle.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> https://github.com/PMBio/scLVM/blob/master/tutorials/tcell_demo.ipynb Variational Autoencoder Model (VAE) with latent subspaces based on: https://arxiv.org/pdf/1812.06190.pdf ``` #Step 1: import dependencies from tensorflow.keras import layers import numpy as np import matplotlib.pyplot as plt import seaborn as sns import tensorflow as tf from keras import regularizers import time from __future__ import division import tensorflow as tf import tensorflow_probability as tfp tfd = tfp.distributions %matplotlib inline plt.style.use('dark_background') import pandas as pd import os from matplotlib import cm import h5py import scipy as SP import pylab as PL data = os.path.join('data_Tcells_normCounts.h5f') f = h5py.File(data,'r') Y = f['LogNcountsMmus'][:] # gene expression matrix tech_noise = f['LogVar_techMmus'][:] # technical noise genes_het_bool=f['genes_heterogen'][:] # index of heterogeneous genes geneID = f['gene_names'][:] # gene names cellcyclegenes_filter = SP.unique(f['cellcyclegenes_filter'][:].ravel() -1) # idx of cell cycle genes from GO cellcyclegenes_filterCB = f['ccCBall_gene_indices'][:].ravel() -1 # idx of cell cycle genes from cycle base ... # filter cell cycle genes idx_cell_cycle = SP.union1d(cellcyclegenes_filter,cellcyclegenes_filterCB) # determine non-zero counts idx_nonzero = SP.nonzero((Y.mean(0)**2)>0)[0] idx_cell_cycle_noise_filtered = SP.intersect1d(idx_cell_cycle,idx_nonzero) # subset gene expression matrix Ycc = Y[:,idx_cell_cycle_noise_filtered] plt = PL.subplot(1,1,1); PL.imshow(Ycc,cmap=cm.RdBu,vmin=-3,vmax=+3,interpolation='None'); #PL.colorbar(); plt.set_xticks([]); plt.set_yticks([]); PL.xlabel('genes'); PL.ylabel('cells'); X = np.delete(Y, idx_cell_cycle_noise_filtered, axis=1) X = Y #base case U = Y[:,idx_cell_cycle_noise_filtered] mean = np.mean(X, axis=0) variance = np.var(X, axis=0) indx_small_mean = np.argwhere(mean < 0.00001) X = np.delete(X, indx_small_mean, axis=1) mean = np.mean(X, axis=0) variance = np.var(X, axis=0) fano = variance/mean print(fano.shape) indx_small_fano = np.argwhere(fano < 1.0) X = np.delete(X, indx_small_fano, axis=1) mean = np.mean(X, axis=0) variance = np.var(X, axis=0) fano = variance/mean print(fano.shape) #Reconstruction loss def x_given_z(z, output_size): with tf.variable_scope('M/x_given_w_z'): act = tf.nn.leaky_relu h = z h = tf.layers.dense(h, 8, act) h = tf.layers.dense(h, 16, act) h = tf.layers.dense(h, 32, act) h = tf.layers.dense(h, 64, act) h = tf.layers.dense(h, 128, act) h = tf.layers.dense(h, 256, act) loc = tf.layers.dense(h, output_size) #log_variance = tf.layers.dense(x, latent_size) #scale = tf.nn.softplus(log_variance) scale = 0.01*tf.ones(tf.shape(loc)) return tfd.MultivariateNormalDiag(loc, scale) #KL term for z def z_given_x(x, latent_size): #+ with tf.variable_scope('M/z_given_x'): act = tf.nn.leaky_relu h = x h = tf.layers.dense(h, 256, act) h = tf.layers.dense(h, 128, act) h = tf.layers.dense(h, 64, act) h = tf.layers.dense(h, 32, act) h = tf.layers.dense(h, 16, act) h = tf.layers.dense(h, 8, act) loc = tf.layers.dense(h,latent_size) log_variance = tf.layers.dense(h, latent_size) scale = tf.nn.softplus(log_variance) # scale = 0.01*tf.ones(tf.shape(loc)) return tfd.MultivariateNormalDiag(loc, scale) def z_given(latent_size): with tf.variable_scope('M/z_given'): loc = 
tf.zeros(latent_size) scale = 0.01*tf.ones(tf.shape(loc)) return tfd.MultivariateNormalDiag(loc, scale) #Connect encoder and decoder and define the loss function tf.reset_default_graph() x_in = tf.placeholder(tf.float32, shape=[None, X.shape[1]], name='x_in') x_out = tf.placeholder(tf.float32, shape=[None, X.shape[1]], name='x_out') z_latent_size = 2 beta = 0.000001 #KL_z zI = z_given(z_latent_size) zIx = z_given_x(x_in, z_latent_size) zIx_sample = zIx.sample() zIx_mean = zIx.mean() #kl_z = tf.reduce_mean(zIx.log_prob(zIx_sample)- zI.log_prob(zIx_sample)) kl_z = tf.reduce_mean(tfd.kl_divergence(zIx, zI)) #analytical #Reconstruction xIz = x_given_z(zIx_sample, X.shape[1]) rec_out = xIz.mean() rec_loss = tf.losses.mean_squared_error(x_out, rec_out) loss = rec_loss + beta*kl_z optimizer = tf.train.AdamOptimizer(0.001).minimize(loss) #Helper function def batch_generator(features, x, u, batch_size): """Function to create python generator to shuffle and split features into batches along the first dimension.""" idx = np.arange(features.shape[0]) np.random.shuffle(idx) for start_idx in range(0, features.shape[0], batch_size): end_idx = min(start_idx + batch_size, features.shape[0]) part = idx[start_idx:end_idx] yield features[part,:], x[part,:] , u[part, :] n_epochs = 5000 batch_size = X.shape[0] start = time.time() with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for i in range(n_epochs): gen = batch_generator(X, X, U, batch_size) #create batch generator rec_loss_ = 0 kl_z_ = 0 for j in range(np.int(X.shape[0]/batch_size)): x_in_batch, x_out_batch, u_batch = gen.__next__() _, rec_loss__, kl_z__= sess.run([optimizer, rec_loss, kl_z], feed_dict={x_in: x_in_batch, x_out: x_out_batch}) rec_loss_ += rec_loss__ kl_z_ += kl_z__ if (i+1)% 50 == 0 or i == 0: zIx_mean_, rec_out_= sess.run([zIx_mean, rec_out], feed_dict ={x_in:X, x_out:X}) end = time.time() print('epoch: {0}, rec_loss: {1:.3f}, kl_z: {2:.2f}'.format((i+1), rec_loss_/(1+np.int(X.shape[0]/batch_size)), kl_z_/(1+np.int(X.shape[0]/batch_size)))) start = time.time() from sklearn.decomposition import TruncatedSVD svd = TruncatedSVD(n_components=2, n_iter=7, random_state=42) svd.fit(U.T) print(svd.explained_variance_ratio_) print(svd.explained_variance_ratio_.sum()) print(svd.singular_values_) U_ = svd.components_ U_ = U_.T import matplotlib.pyplot as plt fig, axs = plt.subplots(1, 2, figsize=(14,5)) axs[0].scatter(zIx_mean_[:,0],zIx_mean_[:,1], c=U_[:,0], cmap='viridis', s=5.0); axs[0].set_xlabel('z1') axs[0].set_ylabel('z2') fig.suptitle('X1') plt.show() fig, axs = plt.subplots(1, 2, figsize=(14,5)) axs[0].scatter(wIxy_mean_[:,0],wIxy_mean_[:,1], c=U_[:,1], cmap='viridis', s=5.0); axs[0].set_xlabel('w1') axs[0].set_ylabel('w2') axs[1].scatter(zIx_mean_[:,0],zIx_mean_[:,1], c=U_[:,1], cmap='viridis', s=5.0); axs[1].set_xlabel('z1') axs[1].set_ylabel('z2') fig.suptitle('X1') plt.show() error = np.abs(X-rec_out_) plt.plot(np.reshape(error, -1), '*', markersize=0.1); plt.hist(np.reshape(error, -1), bins=50); ```
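The `kl_z` term above uses the analytical KL divergence between the diagonal-Gaussian posterior `z_given_x` and the prior `z_given`. As a quick, framework-independent sanity check, here is a minimal NumPy sketch of that closed-form expression; the means and scales below are made up for illustration and are not taken from the model.

```
import numpy as np

def kl_diag_gaussians(mu_q, sigma_q, mu_p, sigma_p):
    # KL( N(mu_q, diag(sigma_q^2)) || N(mu_p, diag(sigma_p^2)) ), summed over dimensions
    return np.sum(np.log(sigma_p / sigma_q)
                  + (sigma_q**2 + (mu_q - mu_p)**2) / (2.0 * sigma_p**2)
                  - 0.5)

# Illustrative 2-d posterior vs. an N(0, 0.01^2 I)-style prior like the one defined above
mu_q, sigma_q = np.array([0.3, -0.1]), np.array([0.05, 0.08])
mu_p, sigma_p = np.zeros(2), 0.01 * np.ones(2)
print(kl_diag_gaussians(mu_q, sigma_q, mu_p, sigma_p))
```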
### Cell Painting morphological (CP) and L1000 gene expression (GE) profiles for the following datasets: - **CDRP**-BBBC047-Bray-CP-GE (Cell line: U2OS) : * $\bf{CP}$ There are 30,430 unique compounds for CP dataset, median number of replicates --> 4 * $\bf{GE}$ There are 21,782 unique compounds for GE dataset, median number of replicates --> 3 * 20,131 compounds are present in both datasets. - **CDRP-bio**-BBBC036-Bray-CP-GE (Cell line: U2OS) : * $\bf{CP}$ There are 2,242 unique compounds for CP dataset, median number of replicates --> 8 * $\bf{GE}$ There are 1,917 unique compounds for GE dataset, median number of replicates --> 2 * 1916 compounds are present in both datasets. - **LUAD**-BBBC041-Caicedo-CP-GE (Cell line: A549) : * $\bf{CP}$ There are 593 unique alleles for CP dataset, median number of replicates --> 8 * $\bf{GE}$ There are 529 unique alleles for GE dataset, median number of replicates --> 8 * 525 alleles are present in both datasets. - **TA-ORF**-BBBC037-Rohban-CP-GE (Cell line: U2OS) : * $\bf{CP}$ There are 323 unique alleles for CP dataset, median number of replicates --> 5 * $\bf{GE}$ There are 327 unique alleles for GE dataset, median number of replicates --> 2 * 150 alleles are present in both datasets. - **LINCS**-Pilot1-CP-GE (Cell line: U2OS) : * $\bf{CP}$ There are 1570 unique compounds across 7 doses for CP dataset, median number of replicates --> 5 * $\bf{GE}$ There are 1402 unique compounds for GE dataset, median number of replicates --> 3 * $N_{p/d}$: 6984 compounds are present in both datasets. -------------------------------------------- #### Link to the processed profiles: https://cellpainting-datasets.s3.us-east-1.amazonaws.com/Rosetta-GE-CP ``` %matplotlib notebook %load_ext autoreload %autoreload 2 import numpy as np import scipy.spatial import pandas as pd import sklearn.decomposition import matplotlib.pyplot as plt import seaborn as sns import os from cmapPy.pandasGEXpress.parse import parse from utils.replicateCorrs import replicateCorrs from utils.saveAsNewSheetToExistingFile import saveAsNewSheetToExistingFile,saveDF_to_CSV_GZ_no_timestamp from importlib import reload from utils.normalize_funcs import standardize_per_catX # sns.set_style("whitegrid") # np.__version__ pd.__version__ ``` ### Input / ouput files: - **CDRPBIO**-BBBC047-Bray-CP-GE (Cell line: U2OS) : * $\bf{CP}$ * Input: * Output: * $\bf{GE}$ * Input: .mat files that are generated using https://github.com/broadinstitute/2014_wawer_pnas * Output: - **LUAD**-BBBC041-Caicedo-CP-GE (Cell line: A549) : * $\bf{CP}$ * Input: * Output: * $\bf{GE}$ * Input: * Output: - **TA-ORF**-BBBC037-Rohban-CP-GE (Cell line: U2OS) : * $\bf{CP}$ * Input: * Output: * $\bf{GE}$ * Input: https://data.broadinstitute.org/icmap/custom/TA/brew/pc/TA.OE005_U2OS_72H/ * Output: ### Reformat Cell-Painting Data Sets - CDRP and TA-ORF are in /storage/data/marziehhaghighi/Rosetta/raw-profiles/ - Luad is already processed by Juan, source of the files is at /storage/luad/profiles_cp in case you want to reformat ``` fileName='RepCorrDF' ### dirs on gpu cluster # rawProf_dir='/storage/data/marziehhaghighi/Rosetta/raw-profiles/' # procProf_dir='/home/marziehhaghighi/workspace_rosetta/workspace/' ### dirs on ec2 rawProf_dir='/home/ubuntu/bucket/projects/2018_04_20_Rosetta/workspace/raw-profiles/' # procProf_dir='./' procProf_dir='/home/ubuntu/bucket/projects/2018_04_20_Rosetta/workspace/' # s3://imaging-platform/projects/2018_04_20_Rosetta/workspace/preprocessed_data # aws s3 sync preprocessed_data 
s3://cellpainting-datasets/Rosetta-GE-CP/preprocessed_data --profile jumpcpuser filename='../../results/RepCor/'+fileName+'.xlsx' # ls ../../ # https://cellpainting-datasets.s3.us-east-1.amazonaws.com/ ``` # CDRP-BBBC047-Bray ### GE - L1000 - CDRP ``` os.listdir(rawProf_dir+'/l1000_CDRP/') cdrp_dataDir=rawProf_dir+'/l1000_CDRP/' cpd_info = pd.read_csv(cdrp_dataDir+"/compounds.txt", sep="\t", dtype=str) cpd_info.columns from scipy.io import loadmat x = loadmat(cdrp_dataDir+'cdrp.all.prof.mat') k1=x['metaWell']['pert_id'][0][0] k2=x['metaGen']['AFFX_PROBE_ID'][0][0] k3=x['metaWell']['pert_dose'][0][0] k4=x['metaWell']['det_plate'][0][0] # pert_dose # x['metaWell']['pert_id'][0][0][0][0][0] pertID = [] probID=[] for r in range(len(k1)): v = k1[r][0][0] pertID.append(v) # probID.append(k2[r][0][0]) for r in range(len(k2)): probID.append(k2[r][0][0]) pert_dose=[] det_plate=[] for r in range(len(k3)): pert_dose.append(k3[r][0]) det_plate.append(k4[r][0][0]) dataArray=x['pclfc']; cdrp_l1k_rep = pd.DataFrame(data=dataArray,columns=probID) cdrp_l1k_rep['pert_id']=pertID cdrp_l1k_rep['pert_dose']=pert_dose cdrp_l1k_rep['det_plate']=det_plate cdrp_l1k_rep['BROAD_CPD_ID']=cdrp_l1k_rep['pert_id'].str[:13] cdrp_l1k_rep2=pd.merge(cdrp_l1k_rep, cpd_info, how='left',on=['BROAD_CPD_ID']) l1k_features_cdrp=cdrp_l1k_rep2.columns[cdrp_l1k_rep2.columns.str.contains("_at")] cdrp_l1k_rep2['pert_id_dose']=cdrp_l1k_rep2['BROAD_CPD_ID']+'_'+cdrp_l1k_rep2['pert_dose'].round(2).astype(str) cdrp_l1k_rep2['pert_sample_dose']=cdrp_l1k_rep2['pert_id']+'_'+cdrp_l1k_rep2['pert_dose'].round(2).astype(str) # cdrp_l1k_df.head() print(cpd_info.shape,cdrp_l1k_rep.shape,cdrp_l1k_rep2.shape) cdrp_l1k_rep2['pert_id_dose']=cdrp_l1k_rep2['pert_id_dose'].replace('DMSO_-666.0', 'DMSO') cdrp_l1k_rep2['pert_sample_dose']=cdrp_l1k_rep2['pert_sample_dose'].replace('DMSO_-666.0', 'DMSO') saveDF_to_CSV_GZ_no_timestamp(cdrp_l1k_rep2,procProf_dir+'preprocessed_data/CDRP-BBBC047-Bray/L1000/replicate_level_l1k.csv.gz'); # cdrp_l1k_rep2.head() # cpd_info ``` ### CP - CDRP ``` profileType=['_augmented','_normalized'] bioactiveFlag="";# either "-bioactive" or "" plates=os.listdir(rawProf_dir+'/CDRP'+bioactiveFlag+'/') for pt in profileType[1:2]: repLevelCDRP0=[] for p in plates: # repLevelCDRP0.append(pd.read_csv(rawProf_dir+'/CDRP/'+p+'/'+p+pt+'.csv')) repLevelCDRP0.append(pd.read_csv(rawProf_dir+'/CDRP'+bioactiveFlag+'/'+p+'/'+p+pt+'.csv')) #if bioactive repLevelCDRP = pd.concat(repLevelCDRP0) metaCDRP1=pd.read_csv(rawProf_dir+'/CP_CDRP/metadata/metadata_CDRP.csv') # metaCDRP1=metaCDRP1.rename(columns={"PlateName":"Metadata_Plate_Map_Name",'Well':'Metadata_Well'}) # metaCDRP1['Metadata_Well']=metaCDRP1['Metadata_Well'].str.lower() repLevelCDRP2=pd.merge(repLevelCDRP, metaCDRP1, how='left',on=['Metadata_broad_sample']) # repLevelCDRP2['Metadata_Sample_Dose']=repLevelCDRP2['Metadata_broad_sample']+'_'+repLevelCDRP2['Metadata_mmoles_per_liter'].round(0).astype(int).astype(str) # repLevelCDRP2['Metadata_Sample_Dose']=repLevelCDRP2['Metadata_pert_id']+'_'+(repLevelCDRP2['Metadata_mmoles_per_liter']*2).round(0).astype(int).astype(str) repLevelCDRP2["Metadata_mmoles_per_liter2"]=(repLevelCDRP2["Metadata_mmoles_per_liter"]*2).round(2) repLevelCDRP2['Metadata_Sample_Dose']=repLevelCDRP2['Metadata_broad_sample']+'_'+repLevelCDRP2['Metadata_mmoles_per_liter2'].astype(str) repLevelCDRP2['Metadata_Sample_Dose']=repLevelCDRP2['Metadata_Sample_Dose'].replace('DMSO_0.0', 'DMSO') repLevelCDRP2['Metadata_pert_id']=repLevelCDRP2['Metadata_pert_id'].replace(np.nan, 
'DMSO') # repLevelCDRP2.to_csv(procProf_dir+'preprocessed_data/CDRPBIO-BBBC036-Bray/CellPainting/replicate_level_cp'+pt+'.csv.gz',index=False,compression='gzip') # , if bioactiveFlag: dataFolderName='CDRPBIO-BBBC036-Bray' saveDF_to_CSV_GZ_no_timestamp(repLevelCDRP2,procProf_dir+'preprocessed_data/'+dataFolderName+\ '/CellPainting/replicate_level_cp'+pt+'.csv.gz') else: # sgfsgf dataFolderName='CDRP-BBBC047-Bray' saveDF_to_CSV_GZ_no_timestamp(repLevelCDRP2,procProf_dir+'preprocessed_data/'+dataFolderName+\ '/CellPainting/replicate_level_cp'+pt+'.csv.gz') print(metaCDRP1.shape,repLevelCDRP.shape,repLevelCDRP2.shape) dataFolderName='CDRP-BBBC047-Bray' cp_feats=repLevelCDRP.columns[repLevelCDRP.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")].tolist() features_to_remove =find_correlation(repLevelCDRP2[cp_feats], threshold=0.9, remove_negative=False) repLevelCDRP2_var_sel=repLevelCDRP2.drop(columns=features_to_remove) saveDF_to_CSV_GZ_no_timestamp(repLevelCDRP2_var_sel,procProf_dir+'preprocessed_data/'+dataFolderName+\ '/CellPainting/replicate_level_cp'+'_normalized_variable_selected'+'.csv.gz') # features_to_remove # features_to_remove # features_to_remove repLevelCDRP2['Nuclei_Texture_Variance_RNA_3_0'] # repLevelCDRP2.shape # cp_scaled.columns[cp_scaled.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")].tolist() ``` # CDRP-bio-BBBC036-Bray ### GE - L1000 - CDRPBIO ``` bioactiveFlag="-bioactive";# either "-bioactive" or "" plates=os.listdir(rawProf_dir+'/CDRP'+bioactiveFlag+'/') # plates cdrp_l1k_rep2_bioactive=cdrp_l1k_rep2[cdrp_l1k_rep2["pert_sample_dose"].isin(repLevelCDRP2.Metadata_Sample_Dose.unique().tolist())] cdrp_l1k_rep.det_plate ``` ### CP - CDRPBIO ``` profileType=['_augmented','_normalized','_normalized_variable_selected'] bioactiveFlag="-bioactive";# either "-bioactive" or "" plates=os.listdir(rawProf_dir+'/CDRP'+bioactiveFlag+'/') for pt in profileType: repLevelCDRP0=[] for p in plates: # repLevelCDRP0.append(pd.read_csv(rawProf_dir+'/CDRP/'+p+'/'+p+pt+'.csv')) repLevelCDRP0.append(pd.read_csv(rawProf_dir+'/CDRP'+bioactiveFlag+'/'+p+'/'+p+pt+'.csv')) #if bioactive repLevelCDRP = pd.concat(repLevelCDRP0) metaCDRP1=pd.read_csv(rawProf_dir+'/CP_CDRP/metadata/metadata_CDRP.csv') # metaCDRP1=metaCDRP1.rename(columns={"PlateName":"Metadata_Plate_Map_Name",'Well':'Metadata_Well'}) # metaCDRP1['Metadata_Well']=metaCDRP1['Metadata_Well'].str.lower() repLevelCDRP2=pd.merge(repLevelCDRP, metaCDRP1, how='left',on=['Metadata_broad_sample']) # repLevelCDRP2['Metadata_Sample_Dose']=repLevelCDRP2['Metadata_broad_sample']+'_'+repLevelCDRP2['Metadata_mmoles_per_liter'].round(0).astype(int).astype(str) # repLevelCDRP2['Metadata_Sample_Dose']=repLevelCDRP2['Metadata_pert_id']+'_'+(repLevelCDRP2['Metadata_mmoles_per_liter']*2).round(0).astype(int).astype(str) repLevelCDRP2["Metadata_mmoles_per_liter2"]=(repLevelCDRP2["Metadata_mmoles_per_liter"]*2).round(2) repLevelCDRP2['Metadata_Sample_Dose']=repLevelCDRP2['Metadata_broad_sample']+'_'+repLevelCDRP2['Metadata_mmoles_per_liter2'].astype(str) repLevelCDRP2['Metadata_Sample_Dose']=repLevelCDRP2['Metadata_Sample_Dose'].replace('DMSO_0.0', 'DMSO') repLevelCDRP2['Metadata_pert_id']=repLevelCDRP2['Metadata_pert_id'].replace(np.nan, 'DMSO') # repLevelCDRP2.to_csv(procProf_dir+'preprocessed_data/CDRPBIO-BBBC036-Bray/CellPainting/replicate_level_cp'+pt+'.csv.gz',index=False,compression='gzip') # , if bioactiveFlag: dataFolderName='CDRPBIO-BBBC036-Bray' saveDF_to_CSV_GZ_no_timestamp(repLevelCDRP2,procProf_dir+'preprocessed_data/'+dataFolderName+\ 
'/CellPainting/replicate_level_cp'+pt+'.csv.gz') else: dataFolderName='CDRP-BBBC047-Bray' saveDF_to_CSV_GZ_no_timestamp(repLevelCDRP2,procProf_dir+'preprocessed_data/'+dataFolderName+\ '/CellPainting/replicate_level_cp'+pt+'.csv.gz') print(metaCDRP1.shape,repLevelCDRP.shape,repLevelCDRP2.shape) ``` # LUAD-BBBC041-Caicedo ### GE - L1000 - LUAD ``` os.listdir(rawProf_dir+'/l1000_LUAD/input/') os.listdir(rawProf_dir+'/l1000_LUAD/output/') luad_dataDir=rawProf_dir+'/l1000_LUAD/' luad_info1 = pd.read_csv(luad_dataDir+"/input/TA.OE014_A549_96H.map", sep="\t", dtype=str) luad_info2 = pd.read_csv(luad_dataDir+"/input/TA.OE015_A549_96H.map", sep="\t", dtype=str) luad_info=pd.concat([luad_info1, luad_info2], ignore_index=True) luad_info.head() luad_l1k_df = parse(luad_dataDir+"/output/high_rep_A549_8reps_141230_ZSPCINF_n4232x978.gctx").data_df.T.reset_index() luad_l1k_df=luad_l1k_df.rename(columns={"cid":"id"}) # cdrp_l1k_df['XX']=cdrp_l1k_df['cid'].str[0] # cdrp_l1k_df['BROAD_CPD_ID']=cdrp_l1k_df['cid'].str[2:15] luad_l1k_df2=pd.merge(luad_l1k_df, luad_info, how='inner',on=['id']) luad_l1k_df2=luad_l1k_df2.rename(columns={"x_mutation_status":"allele"}) l1k_features=luad_l1k_df2.columns[luad_l1k_df2.columns.str.contains("_at")] luad_l1k_df2['allele']=luad_l1k_df2['allele'].replace('UnTrt', 'DMSO') print(luad_info.shape,luad_l1k_df.shape,luad_l1k_df2.shape) saveDF_to_CSV_GZ_no_timestamp(luad_l1k_df2,procProf_dir+'/preprocessed_data/LUAD-BBBC041-Caicedo/L1000/replicate_level_l1k.csv.gz') luad_l1k_df_scaled = standardize_per_catX(luad_l1k_df2,'det_plate',l1k_features.tolist()); x_l1k_luad=replicateCorrs(luad_l1k_df_scaled.reset_index(drop=True),'allele',l1k_features,1) # x_l1k_luad=replicateCorrs(luad_l1k_df2[luad_l1k_df2['allele']!='DMSO'].reset_index(drop=True),'allele',l1k_features,1) # saveAsNewSheetToExistingFile(filename,x_l1k_luad[2],'l1k-luad') ``` ### CP - LUAD ``` profileType=['_augmented','_normalized','_normalized_variable_selected'] plates=os.listdir('/storage/luad/profiles_cp/LUAD-BBBC043-Caicedo/') for pt in profileType[1:2]: repLevelLuad0=[] for p in plates: repLevelLuad0.append(pd.read_csv('/storage/luad/profiles_cp/LUAD-BBBC043-Caicedo/'+p+'/'+p+pt+'.csv')) repLevelLuad = pd.concat(repLevelLuad0) metaLuad1=pd.read_csv(rawProf_dir+'/CP_LUAD/metadata/combined_platemaps_AHB_20150506_ssedits.csv') metaLuad1=metaLuad1.rename(columns={"PlateName":"Metadata_Plate_Map_Name",'Well':'Metadata_Well'}) metaLuad1['Metadata_Well']=metaLuad1['Metadata_Well'].str.lower() # metaLuad2=pd.read_csv('~/workspace_rosetta/workspace/raw_profiles/CP_LUAD/metadata/barcode_platemap.csv') # Y[Y['Metadata_Well']=='g05']['Nuclei_Texture_Variance_Mito_5_0'] repLevelLuad2=pd.merge(repLevelLuad, metaLuad1, how='inner',on=['Metadata_Plate_Map_Name','Metadata_Well']) repLevelLuad2['x_mutation_status']=repLevelLuad2['x_mutation_status'].replace(np.nan, 'DMSO') cp_features=repLevelLuad2.columns[repLevelLuad2.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")] # repLevelLuad2.to_csv(procProf_dir+'preprocessed_data/LUAD-BBBC041-Caicedo/CellPainting/replicate_level_cp'+pt+'.csv.gz',index=False,compression='gzip') saveDF_to_CSV_GZ_no_timestamp(repLevelLuad2,procProf_dir+'preprocessed_data/LUAD-BBBC041-Caicedo/CellPainting/replicate_level_cp'+pt+'.csv.gz') print(metaLuad1.shape,repLevelLuad.shape,repLevelLuad2.shape) pt=['_normalized'] # Read save data repLevelLuad2=pd.read_csv('./preprocessed_data/LUAD-BBBC041-Caicedo/CellPainting/replicate_level_cp'+pt[0]+'.csv.gz') # repLevelTA.head() 
cp_features=repLevelLuad2.columns[repLevelLuad2.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")] cols2remove0=[i for i in cp_features if ((repLevelLuad2[i].isnull()).sum(axis=0)/repLevelLuad2.shape[0])>0.05] print(cols2remove0) repLevelLuad2=repLevelLuad2.drop(cols2remove0, axis=1); cp_features=repLevelLuad2.columns[repLevelLuad2.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")] repLevelLuad2 = repLevelLuad2.interpolate() repLevelLuad2 = standardize_per_catX(repLevelLuad2,'Metadata_Plate',cp_features.tolist()); df1=repLevelLuad2[~repLevelLuad2['x_mutation_status'].isnull()].reset_index(drop=True) x_cp_luad=replicateCorrs(df1,'x_mutation_status',cp_features,1) saveAsNewSheetToExistingFile(filename,x_cp_luad[2],'cp-luad') ``` # TA-ORF-BBBC037-Rohban ### GE - L1000 ``` taorf_datadir=rawProf_dir+'/l1000_TA_ORF/' gene_info = pd.read_csv(taorf_datadir+"TA.OE005_U2OS_72H.map.txt", sep="\t", dtype=str) # gene_info.columns # TA.OE005_U2OS_72H_INF_n729x22268.gctx # TA.OE005_U2OS_72H_QNORM_n729x978.gctx # TA.OE005_U2OS_72H_ZSPCINF_n729x22268.gctx # TA.OE005_U2OS_72H_ZSPCQNORM_n729x978.gctx taorf_l1k0 = parse(taorf_datadir+"TA.OE005_U2OS_72H_ZSPCQNORM_n729x978.gctx") # taorf_l1k0 = parse(taorf_datadir+"TA.OE005_U2OS_72H_QNORM_n729x978.gctx") taorf_l1k_df0=taorf_l1k0.data_df taorf_l1k_df=taorf_l1k_df0.T.reset_index() l1k_features=taorf_l1k_df.columns[taorf_l1k_df.columns.str.contains("_at")] taorf_l1k_df=taorf_l1k_df.rename(columns={"cid":"id"}) taorf_l1k_df2=pd.merge(taorf_l1k_df, gene_info, how='inner',on=['id']) # print(taorf_l1k_df.shape,gene_info.shape,taorf_l1k_df2.shape) taorf_l1k_df2.head() # x_genesymbol_mutation taorf_l1k_df2['pert_id']=taorf_l1k_df2['pert_id'].replace('CMAP-000', 'DMSO') # compression_opts = dict(method='zip',archive_name='out.csv') # taorf_l1k_df2.to_csv(procProf_dir+'preprocessed_data/TA-ORF-BBBC037-Rohban/L1000/replicate_level_l1k.csv.gz',index=False,compression=compression_opts) saveDF_to_CSV_GZ_no_timestamp(taorf_l1k_df2,procProf_dir+'preprocessed_data/TA-ORF-BBBC037-Rohban/L1000/replicate_level_l1k.csv.gz') print(gene_info.shape,taorf_l1k_df.shape,taorf_l1k_df2.shape) # gene_info.head() taorf_l1k_df2.groupby(['x_genesymbol_mutation']).size().describe() taorf_l1k_df2.groupby(['pert_id']).size().describe() ``` #### Check Replicate Correlation ``` # df1=taorf_l1k_df2[taorf_l1k_df2['pert_id']!='CMAP-000'] df1_scaled = standardize_per_catX(taorf_l1k_df2,'det_plate',l1k_features.tolist()); df1_scaled2=df1_scaled[df1_scaled['pert_id']!='DMSO'] x=replicateCorrs(df1_scaled2,'pert_id',l1k_features,1) ``` ### CP - TAORF ``` profileType=['_augmented','_normalized','_normalized_variable_selected'] plates=os.listdir(rawProf_dir+'TA-ORF-BBBC037-Rohban/') for pt in profileType[0:1]: repLevelTA0=[] for p in plates: repLevelTA0.append(pd.read_csv(rawProf_dir+'TA-ORF-BBBC037-Rohban/'+p+'/'+p+pt+'.csv')) repLevelTA = pd.concat(repLevelTA0) metaTA1=pd.read_csv(rawProf_dir+'/CP_TA_ORF/metadata/metadata_TA.csv') metaTA2=pd.read_csv(rawProf_dir+'/CP_TA_ORF/metadata/metadata_TA_2.csv') # metaTA2=metaTA2.rename(columns={"Metadata_broad_sample":"Metadata_broad_sample_2",'Metadata_Treatment':'Gene Allele Name'}) metaTA=pd.merge(metaTA2, metaTA1, how='left',on=['Metadata_broad_sample']) # metaTA2=metaTA2.rename(columns={"Metadata_Treatment":"Metadata_pert_name"}) # repLevelTA2=pd.merge(repLevelTA, metaTA2, how='left',on=['Metadata_pert_name']) repLevelTA2=pd.merge(repLevelTA, metaTA, how='left',on=['Metadata_broad_sample']) # repLevelTA2=repLevelTA2.rename(columns={"Gene Allele Name":"Allele"}) 
repLevelTA2['Metadata_broad_sample']=repLevelTA2['Metadata_broad_sample'].replace(np.nan, 'DMSO') saveDF_to_CSV_GZ_no_timestamp(repLevelTA2,procProf_dir+'/preprocessed_data/TA-ORF-BBBC037-Rohban/CellPainting/replicate_level_cp'+pt+'.csv.gz') print(metaTA.shape,repLevelTA.shape,repLevelTA2.shape) # repLevelTA.head() cp_features=repLevelTA2.columns[repLevelTA2.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")] cols2remove0=[i for i in cp_features if ((repLevelTA2[i].isnull()).sum(axis=0)/repLevelTA2.shape[0])>0.05] print(cols2remove0) repLevelTA2=repLevelTA2.drop(cols2remove0, axis=1); # cp_features=list(set(cp_features)-set(cols2remove0)) # repLevelTA2=repLevelTA2.replace('nan', np.nan) repLevelTA2 = repLevelTA2.interpolate() cp_features=repLevelTA2.columns[repLevelTA2.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")] repLevelTA2 = standardize_per_catX(repLevelTA2,'Metadata_Plate',cp_features.tolist()); df1=repLevelTA2[~repLevelTA2['Metadata_broad_sample'].isnull()].reset_index(drop=True) x_taorf_cp=replicateCorrs(df1,'Metadata_broad_sample',cp_features,1) # saveAsNewSheetToExistingFile(filename,x_taorf_cp[2],'cp-taorf') # plates ``` # LINCS-Pilot1 ### GE - L1000 - LINCS ``` os.listdir(rawProf_dir+'/l1000_LINCS/2016_04_01_a549_48hr_batch1_L1000/') os.listdir(rawProf_dir+'/l1000_LINCS/metadata/') data_meta_match_ls=[['level_3','level_3_q2norm_n27837x978.gctx','col_meta_level_3_REP.A_A549_only_n27837.txt'], ['level_4W','level_4W_zspc_n27837x978.gctx','col_meta_level_3_REP.A_A549_only_n27837.txt'], ['level_4','level_4_zspc_n27837x978.gctx','col_meta_level_3_REP.A_A549_only_n27837.txt'], ['level_5_modz','level_5_modz_n9482x978.gctx','col_meta_level_5_REP.A_A549_only_n9482.txt'], ['level_5_rank','level_5_rank_n9482x978.gctx','col_meta_level_5_REP.A_A549_only_n9482.txt']] lincs_dataDir=rawProf_dir+'/l1000_LINCS/' lincs_pert_info = pd.read_csv(lincs_dataDir+"/metadata/REP.A_A549_pert_info.txt", sep="\t", dtype=str) lincs_meta_level3 = pd.read_csv(lincs_dataDir+"/metadata/col_meta_level_3_REP.A_A549_only_n27837.txt", sep="\t", dtype=str) # lincs_info1 = pd.read_csv(lincs_dataDir+"/metadata/REP.A_A549_pert_info.txt", sep="\t", dtype=str) print(lincs_meta_level3.shape) lincs_meta_level3.head() # lincs_info2 = pd.read_csv(lincs_dataDir+"/input/TA.OE015_A549_96H.map", sep="\t", dtype=str) # lincs_info=pd.concat([lincs_info1, lincs_info2], ignore_index=True) # lincs_info.head() # lincs_meta_level3.groupby('distil_id').size() lincs_meta_level3['distil_id'].unique().shape # lincs_meta_level3.columns.tolist() # lincs_meta_level3.pert_id ls /home/ubuntu/workspace_rosetta/workspace/software/2018_04_20_Rosetta/preprocessed_data/LINCS-Pilot1/CellPainting # procProf_dir+'preprocessed_data/LINCS-Pilot1/' procProf_dir for el in data_meta_match_ls: lincs_l1k_df=parse(lincs_dataDir+"/2016_04_01_a549_48hr_batch1_L1000/"+el[1]).data_df.T.reset_index() lincs_meta0 = pd.read_csv(lincs_dataDir+"/metadata/"+el[2], sep="\t", dtype=str) lincs_meta=pd.merge(lincs_meta0, lincs_pert_info, how='left',on=['pert_id']) lincs_meta=lincs_meta.rename(columns={"distil_id":"cid"}) lincs_l1k_df2=pd.merge(lincs_l1k_df, lincs_meta, how='inner',on=['cid']) lincs_l1k_df2['pert_id_dose']=lincs_l1k_df2['pert_id']+'_'+lincs_l1k_df2['nearest_dose'].astype(str) lincs_l1k_df2['pert_id_dose']=lincs_l1k_df2['pert_id_dose'].replace('DMSO_-666', 'DMSO') # lincs_l1k_df2.to_csv(procProf_dir+'preprocessed_data/LINCS-Pilot1/L1000/'+el[0]+'.csv.gz',index=False,compression='gzip') 
saveDF_to_CSV_GZ_no_timestamp(lincs_l1k_df2,procProf_dir+'preprocessed_data/LINCS-Pilot1/L1000/'+el[0]+'.csv.gz') # lincs_l1k_df2 lincs_l1k_rep['pert_id_dose'].unique() lincs_l1k_rep = pd.read_csv(procProf_dir+'preprocessed_data/LINCS-Pilot1/L1000/'+data_meta_match_ls[1][0]+'.csv.gz') # l1k_features=lincs_l1k_rep.columns[lincs_l1k_rep.columns.str.contains("_at")] # x=replicateCorrs(lincs_l1k_rep[lincs_l1k_rep['pert_iname_x']!='DMSO'].reset_index(drop=True),'pert_id',l1k_features,1) # # saveAsNewSheetToExistingFile(filename,x[2],'l1k-lincs') # # lincs_l1k_rep.head() lincs_l1k_rep.pert_id.unique().shape lincs_l1k_rep = pd.read_csv(procProf_dir+'preprocessed_data/LINCS-Pilot1/L1000/'+data_meta_match_ls[2][0]+'.csv.gz') lincs_l1k_rep.columns[lincs_l1k_rep.columns.str.contains('dose')] lincs_l1k_rep[['pert_dose', 'pert_dose_unit', 'pert_idose', 'nearest_dose']] lincs_l1k_rep['nearest_dose'].unique() # lincs_l1k_rep.rna_plate.unique() lincs_l1k_rep = pd.read_csv(procProf_dir+'preprocessed_data/LINCS-Pilot1/L1000/'+data_meta_match_ls[2][0]+'.csv.gz') l1k_features=lincs_l1k_rep.columns[lincs_l1k_rep.columns.str.contains("_at")] lincs_l1k_rep = standardize_per_catX(lincs_l1k_rep,'det_plate',l1k_features.tolist()); x=replicateCorrs(lincs_l1k_rep[lincs_l1k_rep['pert_iname_x']!='DMSO'].reset_index(drop=True),'pert_id',l1k_features,1) lincs_l1k_rep = pd.read_csv(procProf_dir+'preprocessed_data/LINCS-Pilot1/L1000/'+data_meta_match_ls[2][0]+'.csv.gz') l1k_features=lincs_l1k_rep.columns[lincs_l1k_rep.columns.str.contains("_at")] lincs_l1k_rep = standardize_per_catX(lincs_l1k_rep,'det_plate',l1k_features.tolist()); x_l1k_lincs=replicateCorrs(lincs_l1k_rep[lincs_l1k_rep['pert_iname_x']!='DMSO'].reset_index(drop=True),'pert_id_dose',l1k_features,1) saveAsNewSheetToExistingFile(filename,x_l1k_lincs[2],'l1k-lincs') lincs_l1k_rep = pd.read_csv(procProf_dir+'preprocessed_data/LINCS-Pilot1/L1000/'+data_meta_match_ls[2][0]+'.csv.gz') l1k_features=lincs_l1k_rep.columns[lincs_l1k_rep.columns.str.contains("_at")] lincs_l1k_rep = standardize_per_catX(lincs_l1k_rep,'det_plate',l1k_features.tolist()); x_l1k_lincs=replicateCorrs(lincs_l1k_rep[lincs_l1k_rep['pert_iname_x']!='DMSO'].reset_index(drop=True),'pert_id_dose',l1k_features,1) saveAsNewSheetToExistingFile(filename,x_l1k_lincs[2],'l1k-lincs') saveAsNewSheetToExistingFile(filename,x[2],'l1k-lincs') ``` raw data ``` # set(repLevelLuad2)-set(Y1.columns) # Y1[['Allele', 'Category', 'Clone ID', 'Gene Symbol']].head() # repLevelLuad2[repLevelLuad2['PublicID']=='BRDN0000553807'][['Col','InsertLength','NCBIGeneID','Name','OtherDescriptions','PublicID','Row','Symbol','Transcript','Vector','pert_type','x_mutation_status']].head() ``` #### Check Replicate Correlation ### CP - LINCS ``` # Ran the following on: # https://ec2-54-242-99-61.compute-1.amazonaws.com:5006/notebooks/workspace_nucleolar/2020_07_20_Nucleolar_Calico/1-NucleolarSizeMetrics.ipynb # Metadata def recode_dose(x, doses, return_level=False): closest_index = np.argmin([np.abs(dose - x) for dose in doses]) if np.isnan(x): return 0 if return_level: return closest_index + 1 else: return doses[closest_index] primary_dose_mapping = [0.04, 0.12, 0.37, 1.11, 3.33, 10, 20] metadata=pd.read_csv("/home/ubuntu/bucket/projects/2018_04_20_Rosetta/workspace/raw-profiles/CP_LINCS/metadata/matadata_lincs_2.csv") metadata['Metadata_mmoles_per_liter']=metadata.mmoles_per_liter.values.round(2) metadata=metadata.rename(columns={"Assay_Plate_Barcode": 
"Metadata_Plate",'broad_sample':'Metadata_broad_sample','well_position':'Metadata_Well'}) lincs_submod_root_dir="/home/ubuntu/datasetsbucket/lincs-cell-painting/" profileType=['_augmented','_normalized','_normalized_dmso',\ '_normalized_feature_select','_normalized_feature_select_dmso'] # profileType=['_normalized'] # plates=metadata.Assay_Plate_Barcode.unique().tolist() plates=metadata.Metadata_Plate.unique().tolist() for pt in profileType[4:5]: repLevelLINCS0=[] for p in plates: profile_add=lincs_submod_root_dir+"/profiles/2016_04_01_a549_48hr_batch1/"+p+"/"+p+pt+".csv.gz" if os.path.exists(profile_add): repLevelLINCS0.append(pd.read_csv(profile_add)) repLevelLINCS = pd.concat(repLevelLINCS0) meta_lincs1=metadata.rename(columns={"broad_sample": "Metadata_broad_sample"}) # metaCDRP1=metaCDRP1.rename(columns={"PlateName":"Metadata_Plate_Map_Name",'Well':'Metadata_Well'}) # metaCDRP1['Metadata_Well']=metaCDRP1['Metadata_Well'].str.lower() repLevelLINCS2=pd.merge(repLevelLINCS,meta_lincs1,how='left', on=["Metadata_broad_sample","Metadata_Well","Metadata_Plate",'Metadata_mmoles_per_liter']) repLevelLINCS2 = repLevelLINCS2.assign(Metadata_dose_recode=(repLevelLINCS2.Metadata_mmoles_per_liter.apply( lambda x: recode_dose(x, primary_dose_mapping, return_level=False)))) repLevelLINCS2['Metadata_pert_id_dose']=repLevelLINCS2['Metadata_pert_id']+'_'+repLevelLINCS2['Metadata_dose_recode'].astype(str) # repLevelLINCS2['Metadata_Sample_Dose']=repLevelLINCS2['Metadata_broad_sample']+'_'+repLevelLINCS2['Metadata_dose_recode'].astype(str) repLevelLINCS2['Metadata_pert_id_dose']=repLevelLINCS2['Metadata_pert_id_dose'].replace(np.nan, 'DMSO') # saveDF_to_CSV_GZ_no_timestamp(repLevelLINCS2,procProf_dir+'/preprocessed_data/LINCS-Pilot1/CellPainting/replicate_level_cp'+pt+'.csv.gz') print(meta_lincs1.shape,repLevelLINCS.shape,repLevelLINCS2.shape) # (8120, 15) (52223, 1810) (688699, 1825) # repLevelLINCS # pd.merge(repLevelLINCS,meta_lincs1,how='left', on=["Metadata_broad_sample"]).shape repLevelLINCS.shape,meta_lincs1.shape (8120, 15) (52223, 1238) (52223, 1253) csv_l1k_lincs=pd.read_csv('./preprocessed_data/LINCS-Pilot1/L1000/replicate_level_l1k'+'.csv.gz') csv_pddf=pd.read_csv('./preprocessed_data/LINCS-Pilot1/CellPainting/replicate_level_cp'+pt[0]+'.csv.gz') csv_l1k_lincs.head() csv_l1k_lincs.pert_id_dose.unique() csv_pddf.Metadata_pert_id_dose.unique() ``` #### Read saved data ``` repLevelLINCS2.groupby(['Metadata_pert_id']).size() repLevelLINCS2.groupby(['Metadata_pert_id_dose']).size().describe() repLevelLINCS2.Metadata_Plate.unique().shape repLevelLINCS2['Metadata_pert_id_dose'].unique().shape # csv_pddf['Metadata_mmoles_per_liter'].round(0).unique() # np.sort(csv_pddf['Metadata_mmoles_per_liter'].unique()) csv_pddf.groupby(['Metadata_dose_recode']).size()#.median() # repLevelLincs2=csv_pddf.copy() import gc cp_features=repLevelLincs2.columns[repLevelLincs2.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")] cols2remove0=[i for i in cp_features if ((repLevelLincs2[i].isnull()).sum(axis=0)/repLevelLincs2.shape[0])>0.05] print(cols2remove0) repLevelLincs3=repLevelLincs2.drop(cols2remove0, axis=1); print('here0') # cp_features=list(set(cp_features)-set(cols2remove0)) # repLevelTA2=repLevelTA2.replace('nan', np.nan) del repLevelLincs2 gc.collect() print('here0') cp_features=repLevelLincs3.columns[repLevelLincs3.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")] repLevelLincs3[cp_features] = repLevelLincs3[cp_features].interpolate() print('here1') repLevelLincs3 = 
standardize_per_catX(repLevelLincs3,'Metadata_Plate',cp_features.tolist()); print('here1') # df0=repLevelCDRP3[repLevelCDRP3['Metadata_broad_sample']!='DMSO'].reset_index(drop=True) # repSizeDF=repLevelLincs3.groupby(['Metadata_broad_sample']).size().reset_index() repSizeDF=repLevelLincs3.groupby(['Metadata_pert_id_dose']).size().reset_index() highRepComp=repSizeDF[repSizeDF[0]>1].Metadata_pert_id_dose.tolist() highRepComp.remove('DMSO') # df0=repLevelLincs3[(repLevelLincs3['Metadata_broad_sample'].isin(highRepComp)) &\ # (repLevelLincs3['Metadata_dose_recode']==1.11)] df0=repLevelLincs3[(repLevelLincs3['Metadata_pert_id_dose'].isin(highRepComp))] x_lincs_cp=replicateCorrs(df0,'Metadata_pert_id_dose',cp_features,1) # saveAsNewSheetToExistingFile(filename,x_lincs_cp[2],'cp-lincs') repSizeDF # repLevelLincs2=csv_pddf.copy() # cp_features=repLevelLincs2.columns[repLevelLincs2.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")] # cols2remove0=[i for i in cp_features if ((repLevelLincs2[i].isnull()).sum(axis=0)/repLevelLincs2.shape[0])>0.05] # print(cols2remove0) # repLevelLincs3=repLevelLincs2.drop(cols2remove0, axis=1); # # cp_features=list(set(cp_features)-set(cols2remove0)) # # repLevelTA2=repLevelTA2.replace('nan', np.nan) # repLevelLincs3 = repLevelLincs3.interpolate() # repLevelLincs3 = standardize_per_catX(repLevelLincs3,'Metadata_Plate',cp_features.tolist()); # cp_features=repLevelLincs3.columns[repLevelLincs3.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")] # # df0=repLevelCDRP3[repLevelCDRP3['Metadata_broad_sample']!='DMSO'].reset_index(drop=True) # # repSizeDF=repLevelLincs3.groupby(['Metadata_broad_sample']).size().reset_index() repSizeDF=repLevelLincs3.groupby(['Metadata_pert_id']).size().reset_index() highRepComp=repSizeDF[repSizeDF[0]>1].Metadata_pert_id.tolist() # highRepComp.remove('DMSO') # df0=repLevelLincs3[(repLevelLincs3['Metadata_broad_sample'].isin(highRepComp)) &\ # (repLevelLincs3['Metadata_dose_recode']==1.11)] df0=repLevelLincs3[(repLevelLincs3['Metadata_pert_id'].isin(highRepComp))] x_lincs_cp=replicateCorrs(df0,'Metadata_pert_id',cp_features,1) # saveAsNewSheetToExistingFile(filename,x_lincs_cp[2],'cp-lincs') # x=replicateCorrs(df0,'Metadata_broad_sample',cp_features,1) # highRepComp[-1] saveAsNewSheetToExistingFile(filename,x[2],'cp-lincs') # repLevelLincs3.Metadata_Plate repLevelLincs3.head() # csv_pddf[(csv_pddf['Metadata_dose_recode']==0.04) & (csv_pddf['Metadata_pert_id']=="BRD-A00147595")][['Metadata_Plate','Metadata_Well']].drop_duplicates() # csv_pddf[(csv_pddf['Metadata_dose_recode']==0.04) & (csv_pddf['Metadata_pert_id']=="BRD-A00147595") & # (csv_pddf['Metadata_Plate']=='SQ00015196') & (csv_pddf['Metadata_Well']=="B12")][csv_pddf.columns[1820:]].drop_duplicates() # def standardize_per_catX(df,column_name): column_name='Metadata_Plate' repLevelLincs_scaled_perPlate=repLevelLincs3.copy() repLevelLincs_scaled_perPlate[cp_features.tolist()]=repLevelLincs3[cp_features.tolist()+[column_name]].groupby(column_name).transform(lambda x: (x - x.mean()) / x.std()).values # def standardize_per_catX(df,column_name): # # column_name='Metadata_Plate' # cp_features=df.columns[df.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")] # df_scaled_perPlate=df.copy() # df_scaled_perPlate[cp_features.tolist()]=\ # df[cp_features.tolist()+[column_name]].groupby(column_name)\ # .transform(lambda x: (x - x.mean()) / x.std()).values # return df_scaled_perPlate df0=repLevelLincs_scaled_perPlate[(repLevelLincs_scaled_perPlate['Metadata_Sample_Dose'].isin(highRepComp))] 
x=replicateCorrs(df0,'Metadata_broad_sample',cp_features,1) ```
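The per-plate scaling used throughout this notebook comes from `utils.normalize_funcs.standardize_per_catX`. For reference, here is a minimal pandas sketch of the idea (z-scoring every feature within each plate), consistent with the commented-out snippet a few cells above; it is only an illustration, not the project's actual implementation, and the toy plate layout is invented.

```
import pandas as pd

def standardize_per_catX_sketch(df, column_name, feature_cols):
    # Z-score each feature within every level of column_name (e.g. per plate)
    out = df.copy()
    out[feature_cols] = (
        df[feature_cols + [column_name]]
        .groupby(column_name)
        .transform(lambda x: (x - x.mean()) / x.std())
        .values
    )
    return out

# Toy example with a hypothetical two-plate layout
toy = pd.DataFrame({
    'Metadata_Plate': ['P1', 'P1', 'P2', 'P2'],
    'Cells_Area': [10.0, 12.0, 100.0, 140.0],
})
print(standardize_per_catX_sketch(toy, 'Metadata_Plate', ['Cells_Area']))
```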
# NOAA Wave Watch 3 and NDBC Buoy Data Comparison

*Note: this notebook requires python3.*

This notebook demonstrates how to compare [WaveWatch III Global Ocean Wave Model](http://data.planetos.com/datasets/noaa_ww3_global_1.25x1d:noaa-wave-watch-iii-nww3-ocean-wave-model?utm_source=github&utm_medium=notebook&utm_campaign=ndbc-wavewatch-iii-notebook) and [NOAA NDBC buoy data](http://data.planetos.com/datasets/noaa_ndbc_stdmet_stations?utm_source=github&utm_medium=notebook&utm_campaign=ndbc-wavewatch-iii-notebook) using the Planet OS API.

API documentation is available at http://docs.planetos.com. If you have questions or comments, join the [Planet OS Slack community](http://slack.planetos.com/) to chat with our development team.

For general information on usage of IPython/Jupyter and Matplotlib, please refer to their corresponding documentation: https://ipython.org/ and http://matplotlib.org/. This notebook also makes use of the [matplotlib basemap toolkit](http://matplotlib.org/basemap/index.html).

```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import dateutil.parser
import datetime
from urllib.request import urlopen, Request
import simplejson as json
from datetime import date, timedelta, datetime
import matplotlib.dates as mdates
from mpl_toolkits.basemap import Basemap
```

**Important!** You'll need to replace apikey below with your actual Planet OS API key, which you'll find [on the Planet OS account settings page](http://data.planetos.com/account/settings/?utm_source=github&utm_medium=notebook&utm_campaign=ww3-api-notebook), and the NDBC buoy station name in which you are interested.

```
dataset_id = 'noaa_ndbc_stdmet_stations'
## stations with wave height available: '46006', '46013', '46029'
## stations without wave height: 'icac1', '41047', 'bepb6', '32st0', '51004'
## stations too close to coastline (no point to compare to ww3): 'sacv4', 'gelo1', 'hcef1'
station = '46029'
apikey = open('APIKEY').readlines()[0].strip()  #'<YOUR API KEY HERE>'
```

Let's first query the API to see what stations are available for the [NDBC Standard Meteorological Data dataset.](http://data.planetos.com/datasets/noaa_ndbc_stdmet_stations?utm_source=github&utm_medium=notebook&utm_campaign=ndbc-wavewatch-iii-notebook)

```
API_url = 'http://api.planetos.com/v1/datasets/%s/stations?apikey=%s' % (dataset_id, apikey)
request = Request(API_url)
response = urlopen(request)
API_data_locations = json.loads(response.read())
# print(API_data_locations)
```

Now we'll use matplotlib to visualize the stations on a simple basemap.

```
m = Basemap(projection='merc',llcrnrlat=-80,urcrnrlat=80,\
            llcrnrlon=-180,urcrnrlon=180,lat_ts=20,resolution='c')
fig=plt.figure(figsize=(15,10))
m.drawcoastlines()
##m.fillcontinents()
for i in API_data_locations['station']:
    x,y=m(API_data_locations['station'][i]['SpatialExtent']['coordinates'][0],
          API_data_locations['station'][i]['SpatialExtent']['coordinates'][1])
    plt.scatter(x,y,color='r')
x,y=m(API_data_locations['station'][station]['SpatialExtent']['coordinates'][0],
      API_data_locations['station'][station]['SpatialExtent']['coordinates'][1])
plt.scatter(x,y,s=100,color='b')
```

Let's examine the last five days of data. For the WaveWatch III forecast, we'll use the reference time parameter to pull forecast data from the 18:00 model run from five days ago.
``` ## Find suitable reference time values atthemoment = datetime.utcnow() atthemoment = atthemoment.strftime('%Y-%m-%dT%H:%M:%S') before5days = datetime.utcnow() - timedelta(days=5) before5days_long = before5days.strftime('%Y-%m-%dT%H:%M:%S') before5days_short = before5days.strftime('%Y-%m-%d') start = before5days_long end = atthemoment reftime_start = str(before5days_short) + 'T18:00:00' reftime_end = reftime_start ``` API request for NOAA NDBC buoy station data ``` API_url = "http://api.planetos.com/v1/datasets/{0}/point?station={1}&apikey={2}&start={3}&end={4}&count=1000".format(dataset_id,station,apikey,start,end) print(API_url) request = Request(API_url) response = urlopen(request) API_data_buoy = json.loads(response.read()) buoy_variables = [] for k,v in set([(j,i['context']) for i in API_data_buoy['entries'] for j in i['data'].keys()]): buoy_variables.append(k) ``` Find buoy station coordinates to use them later for finding NOAA Wave Watch III data ``` for i in API_data_buoy['entries']: #print(i['axes']['time']) if i['context'] == 'time_latitude_longitude': longitude = (i['axes']['longitude']) latitude = (i['axes']['latitude']) print ('Latitude: '+ str(latitude)) print ('Longitude: '+ str(longitude)) ``` API request for NOAA WaveWatch III (NWW3) Ocean Wave Model near the point of selected station. Note that data may not be available at the requested reference time. If the response is empty, try removing the reference time parameters `reftime_start` and `reftime_end` from the query. ``` API_url = 'http://api.planetos.com/v1/datasets/noaa_ww3_global_1.25x1d/point?lat={0}&lon={1}&verbose=true&apikey={2}&count=100&end={3}&reftime_start={4}&reftime_end={5}'.format(latitude,longitude,apikey,end,reftime_start,reftime_end) request = Request(API_url) response = urlopen(request) API_data_ww3 = json.loads(response.read()) print(API_url) ww3_variables = [] for k,v in set([(j,i['context']) for i in API_data_ww3['entries'] for j in i['data'].keys()]): ww3_variables.append(k) ``` Manually review the list of WaveWatch and NDBC data variables to determine which parameters are equivalent for comparison. ``` print(ww3_variables) print(buoy_variables) ``` Next we'll build a dictionary of corresponding variables that we want to compare. ``` buoy_model = {'wave_height':'Significant_height_of_combined_wind_waves_and_swell_surface', 'mean_wave_dir':'Primary_wave_direction_surface', 'average_wpd':'Primary_wave_mean_period_surface', 'wind_spd':'Wind_speed_surface'} ``` Read data from the JSON responses and convert the values to floats for plotting. Note that depending on the dataset, some variables have different timesteps than others, so a separate time array for each variable is recommended. ``` def append_data(in_string): if in_string == None: return np.nan elif in_string == 'None': return np.nan else: return float(in_string) ww3_data = {} ww3_times = {} buoy_data = {} buoy_times = {} for k,v in buoy_model.items(): ww3_data[v] = [] ww3_times[v] = [] buoy_data[k] = [] buoy_times[k] = [] for i in API_data_ww3['entries']: for j in i['data']: if j in buoy_model.values(): ww3_data[j].append(append_data(i['data'][j])) ww3_times[j].append(dateutil.parser.parse(i['axes']['time'])) for i in API_data_buoy['entries']: for j in i['data']: if j in buoy_model.keys(): buoy_data[j].append(append_data(i['data'][j])) buoy_times[j].append(dateutil.parser.parse(i['axes']['time'])) for i in ww3_data: ww3_data[i] = np.array(ww3_data[i]) ww3_times[i] = np.array(ww3_times[i]) ``` Finally, let's plot the data using matplotlib. 
``` buoy_label = "NDBC Station %s" % station ww3_label = "WW3 at %s" % reftime_start for k,v in buoy_model.items(): if np.abs(np.nansum(buoy_data[k]))>0: fig=plt.figure(figsize=(10,5)) plt.title(k+' '+v) plt.plot(ww3_times[v],ww3_data[v], label=ww3_label) plt.plot(buoy_times[k],buoy_data[k],'*',label=buoy_label) plt.legend(bbox_to_anchor=(1.5, 0.22), loc=1, borderaxespad=0.) plt.xlabel('Time') plt.ylabel(k) fig.autofmt_xdate() plt.grid() ```
``` import json import numpy as np import matplotlib.pyplot as plt from scipy.integrate import quad from scipy.special import comb from tabulate import tabulate %matplotlib inline ``` ## Expected numbers on Table 3. ``` rows = [] datasets = { 'Binary': 2, 'AG news': 4, 'CIFAR10': 10, 'CIFAR100': 100, 'Wiki3029': 3029, } def expectations(C: int) -> float: """ C is the number of latent classes. """ e = 0. for k in range(1, C + 1): e += C / k return e for dataset_name, C in datasets.items(): e = expectations(C) rows.append((dataset_name, C, np.ceil(e))) # ImageNet is non-uniform label distribution on the training dataset data = json.load(open("./imagenet_count.json")) counts = np.array(list(data.values())) total_num = np.sum(counts) prob = counts / total_num def integrand(t: float, prob: np.ndarray) -> float: return 1. - np.prod(1 - np.exp(-prob * t)) rows.append(("ImageNet", len(prob), np.ceil(quad(integrand, 0, np.inf, args=(prob))[0]))) print(tabulate(rows, headers=["Dataset", "\# classes", "\mathbb{E}[K+1]"])) ``` ## Probability $\upsilon$ ``` def prob(C, N): """ C: the number of latent class N: the number of samples to draw """ theoretical = [] for n in range(C, N + 1): p = 0. for m in range(C - 1): p += comb(C - 1, m) * ((-1) ** m) * np.exp((n - 1) * np.log(1. - (m + 1) / C)) theoretical.append((n, max(p, 0.))) return np.array(theoretical) # example of CIFAR-10 C = 10 for N in [32, 63, 128, 256, 512]: p = np.sum(prob(C, N).T[1]) print("{:3d} {:.7f}".format(N, p)) # example of CIFAR-100 C = 100 ps = [] ns = [] for N in 128 * np.arange(1, 9): p = np.sum(prob(C, N).T[1]) print("{:4d} {}".format(N, p)) ps.append(p) ns.append(N) ``` ## Simulation ``` n_loop = 10 rnd = np.random.RandomState(7) labels = np.arange(C).repeat(100) results = {} for N in ns: num_iters = int(len(labels) / N) total_samples_for_bounds = float(num_iters * N * (n_loop)) for _ in range(n_loop): rnd.shuffle(labels) for batch_id in range(len(labels) // N): if len(set(labels[N * batch_id:N * (batch_id + 1)])) == C: results[N] = results.get(N, 0.) + N / total_samples_for_bounds else: results[N] = results.get(N, 0.) + 0. xs = [] ys = [] for k, v in results.items(): print(k, v) ys.append(v) xs.append(k) plt.plot(ns, ps, label="Theoretical") plt.plot(xs, ys, label="Empirical") plt.ylabel("probability") plt.xlabel("$K+1$") plt.title("CIFAR-100 simulation") plt.legend() ```
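The `expectations` function above is the classical coupon-collector result for a uniform label distribution, $\mathbb{E}[K+1] = C \sum_{k=1}^{C} 1/k$. A small Monte Carlo check (a sketch with an arbitrary number of trials, independent of the table code) can confirm it for, e.g., $C = 10$, where the analytical value is $10 \cdot (1 + 1/2 + \dots + 1/10) \approx 29.29$.

```
import numpy as np

def coupon_collector_mc(C, n_trials=2000, seed=0):
    # Monte Carlo estimate of the expected number of uniform draws needed to see all C classes
    rng = np.random.RandomState(seed)
    draws = []
    for _ in range(n_trials):
        seen, n = set(), 0
        while len(seen) < C:
            seen.add(rng.randint(C))
            n += 1
        draws.append(n)
    return np.mean(draws)

print(coupon_collector_mc(10))  # should be close to 29.29
```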
# PageRank Performance Benchmarking # Skip notebook test This notebook benchmarks performance of running PageRank within cuGraph against NetworkX. NetworkX contains several implementations of PageRank. This benchmark will compare cuGraph versus the defaukt Nx implementation as well as the SciPy version Notebook Credits Original Authors: Bradley Rees Last Edit: 08/16/2020 RAPIDS Versions: 0.15 Test Hardware GV100 32G, CUDA 10,0 Intel(R) Core(TM) CPU i7-7800X @ 3.50GHz 32GB system memory ### Test Data | File Name | Num of Vertices | Num of Edges | |:---------------------- | --------------: | -----------: | | preferentialAttachment | 100,000 | 999,970 | | caidaRouterLevel | 192,244 | 1,218,132 | | coAuthorsDBLP | 299,067 | 1,955,352 | | dblp-2010 | 326,186 | 1,615,400 | | citationCiteseer | 268,495 | 2,313,294 | | coPapersDBLP | 540,486 | 30,491,458 | | coPapersCiteseer | 434,102 | 32,073,440 | | as-Skitter | 1,696,415 | 22,190,596 | ### Timing What is not timed: Reading the data What is timmed: (1) creating a Graph, (2) running PageRank The data file is read in once for all flavors of PageRank. Each timed block will craete a Graph and then execute the algorithm. The results of the algorithm are not compared. If you are interested in seeing the comparison of results, then please see PageRank in the __notebooks__ repo. ## NOTICE _You must have run the __dataPrep__ script prior to running this notebook so that the data is downloaded_ See the README file in this folder for a discription of how to get the data ## Now load the required libraries ``` # Import needed libraries import gc import time import rmm import cugraph import cudf # NetworkX libraries import networkx as nx from scipy.io import mmread try: import matplotlib except ModuleNotFoundError: os.system('pip install matplotlib') import matplotlib.pyplot as plt; plt.rcdefaults() import numpy as np ``` ### Define the test data ``` # Test File data = { 'preferentialAttachment' : './data/preferentialAttachment.mtx', 'caidaRouterLevel' : './data/caidaRouterLevel.mtx', 'coAuthorsDBLP' : './data/coAuthorsDBLP.mtx', 'dblp' : './data/dblp-2010.mtx', 'citationCiteseer' : './data/citationCiteseer.mtx', 'coPapersDBLP' : './data/coPapersDBLP.mtx', 'coPapersCiteseer' : './data/coPapersCiteseer.mtx', 'as-Skitter' : './data/as-Skitter.mtx' } ``` ### Define the testing functions ``` # Data reader - the file format is MTX, so we will use the reader from SciPy def read_mtx_file(mm_file): print('Reading ' + str(mm_file) + '...') M = mmread(mm_file).asfptype() return M # CuGraph PageRank def cugraph_call(M, max_iter, tol, alpha): gdf = cudf.DataFrame() gdf['src'] = M.row gdf['dst'] = M.col print('\tcuGraph Solving... ') t1 = time.time() # cugraph Pagerank Call G = cugraph.DiGraph() G.from_cudf_edgelist(gdf, source='src', destination='dst', renumber=False) df = cugraph.pagerank(G, alpha=alpha, max_iter=max_iter, tol=tol) t2 = time.time() - t1 return t2 # Basic NetworkX PageRank def networkx_call(M, max_iter, tol, alpha): nnz_per_row = {r: 0 for r in range(M.get_shape()[0])} for nnz in range(M.getnnz()): nnz_per_row[M.row[nnz]] = 1 + nnz_per_row[M.row[nnz]] for nnz in range(M.getnnz()): M.data[nnz] = 1.0/float(nnz_per_row[M.row[nnz]]) M = M.tocsr() if M is None: raise TypeError('Could not read the input graph') if M.shape[0] != M.shape[1]: raise TypeError('Shape is not square') # should be autosorted, but check just to make sure if not M.has_sorted_indices: print('sort_indices ... 
') M.sort_indices() z = {k: 1.0/M.shape[0] for k in range(M.shape[0])} print('\tNetworkX Solving... ') # start timer t1 = time.time() Gnx = nx.DiGraph(M) pr = nx.pagerank(Gnx, alpha, z, max_iter, tol) t2 = time.time() - t1 return t2 # SciPy PageRank def networkx_scipy_call(M, max_iter, tol, alpha): nnz_per_row = {r: 0 for r in range(M.get_shape()[0])} for nnz in range(M.getnnz()): nnz_per_row[M.row[nnz]] = 1 + nnz_per_row[M.row[nnz]] for nnz in range(M.getnnz()): M.data[nnz] = 1.0/float(nnz_per_row[M.row[nnz]]) M = M.tocsr() if M is None: raise TypeError('Could not read the input graph') if M.shape[0] != M.shape[1]: raise TypeError('Shape is not square') # should be autosorted, but check just to make sure if not M.has_sorted_indices: print('sort_indices ... ') M.sort_indices() z = {k: 1.0/M.shape[0] for k in range(M.shape[0])} # SciPy Pagerank Call print('\tSciPy Solving... ') t1 = time.time() Gnx = nx.DiGraph(M) pr = nx.pagerank_scipy(Gnx, alpha, z, max_iter, tol) t2 = time.time() - t1 return t2 ``` ### Run the benchmarks ``` # arrays to capture performance gains time_cu = [] time_nx = [] time_sp = [] perf_nx = [] perf_sp = [] names = [] # init libraries by doing a simple task v = './data/preferentialAttachment.mtx' M = read_mtx_file(v) trapids = cugraph_call(M, 100, 0.00001, 0.85) del M for k,v in data.items(): gc.collect() # Saved the file Name names.append(k) # read the data M = read_mtx_file(v) # call cuGraph - this will be the baseline trapids = cugraph_call(M, 100, 0.00001, 0.85) time_cu.append(trapids) # Now call NetworkX tn = networkx_call(M, 100, 0.00001, 0.85) speedUp = (tn / trapids) perf_nx.append(speedUp) time_nx.append(tn) # Now call SciPy tsp = networkx_scipy_call(M, 100, 0.00001, 0.85) speedUp = (tsp / trapids) perf_sp.append(speedUp) time_sp.append(tsp) print("cuGraph (" + str(trapids) + ") Nx (" + str(tn) + ") SciPy (" + str(tsp) + ")" ) del M ``` ### plot the output ``` %matplotlib inline plt.figure(figsize=(10,8)) bar_width = 0.35 index = np.arange(len(names)) _ = plt.bar(index, perf_nx, bar_width, color='g', label='vs Nx') _ = plt.bar(index + bar_width, perf_sp, bar_width, color='b', label='vs SciPy') plt.xlabel('Datasets') plt.ylabel('Speedup') plt.title('PageRank Performance Speedup') plt.xticks(index + (bar_width / 2), names) plt.xticks(rotation=90) # Text on the top of each barplot for i in range(len(perf_nx)): plt.text(x = (i - 0.55) + bar_width, y = perf_nx[i] + 25, s = round(perf_nx[i], 1), size = 12) for i in range(len(perf_sp)): plt.text(x = (i - 0.1) + bar_width, y = perf_sp[i] + 25, s = round(perf_sp[i], 1), size = 12) plt.legend() plt.show() ``` # Dump the raw stats ``` perf_nx perf_sp time_cu time_nx time_sp ``` ___ Copyright (c) 2020, NVIDIA CORPORATION. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ___
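As an appendix to the timing methodology described at the top of this notebook (graph construction and the PageRank call are timed together, while reading the data is not), the repeated timing pattern could be factored into a small helper. This is only a hypothetical sketch using the standard library, not part of the original benchmark.

```
import gc
import time

def time_call(fn, *args, n_repeat=3, **kwargs):
    # Run fn several times, collecting garbage beforehand, and return the best wall-clock time in seconds
    best = float('inf')
    for _ in range(n_repeat):
        gc.collect()
        t0 = time.time()
        fn(*args, **kwargs)
        best = min(best, time.time() - t0)
    return best

# Hypothetical usage with the functions defined above:
# best_cugraph = time_call(cugraph_call, M, 100, 0.00001, 0.85)
```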
<a href="https://colab.research.google.com/github/mjvakili/MLcourse/blob/master/day2/nn_qso_finder.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Let's start by importing the libraries that we need for this exercise. ``` import numpy as np import tensorflow as tf import matplotlib.pyplot as plt import matplotlib from sklearn.model_selection import train_test_split #matplotlib settings matplotlib.rcParams['xtick.major.size'] = 7 matplotlib.rcParams['xtick.labelsize'] = 'x-large' matplotlib.rcParams['ytick.major.size'] = 7 matplotlib.rcParams['ytick.labelsize'] = 'x-large' matplotlib.rcParams['xtick.top'] = False matplotlib.rcParams['ytick.right'] = False matplotlib.rcParams['ytick.direction'] = 'in' matplotlib.rcParams['xtick.direction'] = 'in' matplotlib.rcParams['font.size'] = 15 matplotlib.rcParams['figure.figsize'] = [7,7] #We need the astroml library to fetch the photometric datasets of sdss qsos and stars pip install astroml from astroML.datasets import fetch_dr7_quasar from astroML.datasets import fetch_sdss_sspp quasars = fetch_dr7_quasar() stars = fetch_sdss_sspp() # Data procesing taken from #https://www.astroml.org/book_figures/chapter9/fig_star_quasar_ROC.html by Jake Van der Plus # stack colors into matrix X Nqso = len(quasars) Nstars = len(stars) X = np.empty((Nqso + Nstars, 4), dtype=float) X[:Nqso, 0] = quasars['mag_u'] - quasars['mag_g'] X[:Nqso, 1] = quasars['mag_g'] - quasars['mag_r'] X[:Nqso, 2] = quasars['mag_r'] - quasars['mag_i'] X[:Nqso, 3] = quasars['mag_i'] - quasars['mag_z'] X[Nqso:, 0] = stars['upsf'] - stars['gpsf'] X[Nqso:, 1] = stars['gpsf'] - stars['rpsf'] X[Nqso:, 2] = stars['rpsf'] - stars['ipsf'] X[Nqso:, 3] = stars['ipsf'] - stars['zpsf'] y = np.zeros(Nqso + Nstars, dtype=int) y[:Nqso] = 1 X = X/np.max(X, axis=0) # split into training and test sets X_train, X_test, y_train, y_test = train_test_split(X, y, train_size = 0.9) #Now let's build a simple Sequential model in which fully connected layers come after one another model = tf.keras.models.Sequential([ tf.keras.layers.Flatten(), #this flattens input tf.keras.layers.Dense(128, activation = "relu"), tf.keras.layers.Dense(64, activation = "relu"), tf.keras.layers.Dense(32, activation = "relu"), tf.keras.layers.Dense(32, activation = "relu"), tf.keras.layers.Dense(1, activation="sigmoid") ]) model.compile(optimizer='adam', loss='binary_crossentropy') history = model.fit(X_train, y_train, validation_data = (X_test, y_test), batch_size = 32, epochs=20, verbose = 1) loss = history.history['loss'] val_loss = history.history['val_loss'] epochs = range(len(loss)) plt.plot(epochs, loss, lw = 5, label='Training loss') plt.plot(epochs, val_loss, lw = 5, label='validation loss') plt.title('Loss') plt.legend(loc=0) plt.show() prob = model.predict_proba(X_test) #model probabilities from sklearn.metrics import confusion_matrix from sklearn.metrics import roc_curve fpr, tpr, thresholds = roc_curve(y_test, prob) plt.loglog(fpr, tpr, lw = 4) plt.xlabel('false positive rate') plt.ylabel('true positive rate') plt.xlim(0.0, 0.15) plt.ylim(0.6, 1.01) plt.show() plt.plot(thresholds, tpr, lw = 4) plt.plot(thresholds, fpr, lw = 4) plt.xlim(0,1) plt.yscale("log") plt.show() #plt.xlabel('false positive rate') #plt.ylabel('true positive rate') ##plt.xlim(0.0, 0.15) #plt.ylim(0.6, 1.01) #Now let's look at the confusion matrix y_pred = model.predict(X_test) z_pred = np.zeros(y_pred.shape[0], dtype = int) mask = np.where(y_pred>.5)[0] z_pred[mask] = 1 
confusion_matrix(y_test, z_pred.astype(int))

import os, signal
os.kill(os.getpid(), signal.SIGKILL)
```

# Exercise 1: Try to change the number of layers, the batch size, as well as the default learning rate, one at a time. See which one makes a more significant impact on the performance of the model.

# Exercise 2: Write a simple function for visualizing the predicted decision boundaries in the feature space. Try to identify the regions of the parameter space which contribute significantly to the false positive rate.

# Exercise 3: This dataset is a bit imbalanced in that the QSOs are outnumbered by the stars. Can you think of a weighting scheme to pass to the loss function, such that the detection rate of QSOs increases?
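One possible starting point for Exercise 3 (a sketch of one weighting scheme, not necessarily the best one): weight each class inversely to its frequency and pass the resulting dictionary to `model.fit` through Keras' `class_weight` argument. The snippet assumes the `y_train` array defined earlier in this notebook; the fit call is left commented.

```
import numpy as np

def balanced_class_weights(y):
    # Return {class: weight} with weights inversely proportional to class frequency
    classes, counts = np.unique(y, return_counts=True)
    weights = len(y) / (len(classes) * counts)
    return {int(c): float(w) for c, w in zip(classes, weights)}

class_weight = balanced_class_weights(y_train)
print(class_weight)

# history = model.fit(X_train, y_train,
#                     validation_data=(X_test, y_test),
#                     batch_size=32, epochs=20,
#                     class_weight=class_weight)
```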
# Exercise: Find correspondences between old and modern English The purpose of this exercise is to use two vecsigrafos, one built on UMBC and WordNet and another one produced by directly running Swivel against a corpus of Shakespeare's complete works, to try to find correlations between old and modern English, e.g. "thou" -> "you", "dost" -> "do", "raiment" -> "clothing". For example, you can try to pick a set of 100 words in the "ye olde" English corpus and see how they correlate to UMBC over WordNet. ![William Shakespeare](https://github.com/HybridNLP2018/tutorial/blob/master/images/220px-Shakespeare.jpg?raw=1) Next, we prepare the embeddings from the Shakespeare corpus and load a UMBC vecsigrafo, which will provide the two vector spaces to correlate.
## Download a small text corpus First, we download the corpus into our environment. We will use Shakespeare's complete works corpus, published as part of Project Gutenberg and publicly available. ``` import os %ls #!rm -r tutorial !git clone https://github.com/HybridNLP2018/tutorial ``` Let us see if the corpus is where we think it is: ``` %cd tutorial/lit %ls ``` Downloading Swivel: ``` !wget http://expertsystemlab.com/hybridNLP18/swivel.zip !unzip swivel.zip !rm swivel/* !rm swivel.zip ```
## Learn the Swivel embeddings over the Old Shakespeare corpus ### Calculating the co-occurrence matrix ``` corpus_path = '/content/tutorial/lit/shakespeare_complete_works.txt' coocs_path = '/content/tutorial/lit/coocs' shard_size = 512 freq=3 !python /content/tutorial/scripts/swivel/prep.py --input={corpus_path} --output_dir={coocs_path} --shard_size={shard_size} --min_count={freq} %ls {coocs_path} | head -n 10 ``` ### Learning the embeddings from the matrix ``` vec_path = '/content/tutorial/lit/vec/' !python /content/tutorial/scripts/swivel/swivel.py --input_base_path={coocs_path} \ --output_base_path={vec_path} \ --num_epochs=20 --dim=300 \ --submatrix_rows={shard_size} --submatrix_cols={shard_size} ``` Checking the contents of the 'vec' directory. It should contain checkpoints of the model plus tsv files for column and row embeddings. ``` os.listdir(vec_path) ``` Converting tsv to bin: ``` !python /content/tutorial/scripts/swivel/text2bin.py --vocab={vec_path}vocab.txt --output={vec_path}vecs.bin \ {vec_path}row_embedding.tsv \ {vec_path}col_embedding.tsv %ls {vec_path} ``` ### Read stored binary embeddings and inspect them ``` import importlib.util spec = importlib.util.spec_from_file_location("vecs", "/content/tutorial/scripts/swivel/vecs.py") m = importlib.util.module_from_spec(spec) spec.loader.exec_module(m) shakespeare_vecs = m.Vecs(vec_path + 'vocab.txt', vec_path + 'vecs.bin') ```
## Basic method to print the k nearest neighbors for a given word ``` def k_neighbors(vec, word, k=10): res = vec.neighbors(word) if not res: print('%s is not in the vocabulary, try e.g. %s' % (word, vec.random_word_in_vocab())) else: for word, sim in res[:k]: print('%0.4f: %s' % (sim, word)) k_neighbors(shakespeare_vecs, 'strife') k_neighbors(shakespeare_vecs, 'youth') ```
## Load vecsigrafo from UMBC over WordNet ``` %ls !wget https://zenodo.org/record/1446214/files/vecsigrafo_umbc_tlgs_ls_f_6e_160d_row_embedding.tar.gz %ls !tar -xvzf vecsigrafo_umbc_tlgs_ls_f_6e_160d_row_embedding.tar.gz !rm vecsigrafo_umbc_tlgs_ls_f_6e_160d_row_embedding.tar.gz umbc_wn_vec_path = '/content/tutorial/lit/vecsi_tlgs_wnscd_ls_f_6e_160d/' ``` Extracting the vocabulary from the .tsv file: ``` with open(umbc_wn_vec_path + 'vocab.txt', 'w', encoding='utf_8') as f: with open(umbc_wn_vec_path + 'row_embedding.tsv', 'r', encoding='utf_8') as vec_lines: vocab = [line.split('\t')[0].strip() for line in vec_lines] for word in vocab: print(word, file=f) ``` Converting tsv to bin: ``` !python /content/tutorial/scripts/swivel/text2bin.py --vocab={umbc_wn_vec_path}vocab.txt --output={umbc_wn_vec_path}vecs.bin \ {umbc_wn_vec_path}row_embedding.tsv %ls umbc_wn_vecs = m.Vecs(umbc_wn_vec_path + 'vocab.txt', umbc_wn_vec_path + 'vecs.bin') k_neighbors(umbc_wn_vecs, 'lem_California') ```
# Add your solution to the proposed exercise here Follow the instructions given in the previous lesson (*Vecsigrafos for curating and interlinking knowledge graphs*) to find correlations between terms in old English extracted from the Shakespeare corpus and terms in modern English extracted from UMBC. You will need to generate a dictionary relating pairs of lemmas between the two vocabularies and use it to produce a pair of translation matrices to transform vectors from one vector space to the other. Then apply the k_neighbors method to identify the correlations. # Conclusion This notebook proposes the use of Shakespeare's complete works and UMBC to provide the student with embeddings that can be exploited for different operations between the two vector spaces. Particularly, we propose to identify terms and their correlations over such spaces. # Acknowledgements In memory of Dr. Jack Brandabur, whose passion for Shakespeare and Cervantes inspired this notebook.
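As a hedged starting point for the exercise above, here is a minimal, self-contained sketch of the translation-matrix idea. Everything in it is a stand-in: the seed dictionary is tiny and the two vector spaces are random arrays keyed by word (in the notebook you would pull the corresponding vectors out of `shakespeare_vecs` and `umbc_wn_vecs` instead, with their 300- and 160-dimensional embeddings). The point is only to illustrate the least-squares mapping between spaces and a nearest-neighbour lookup in the target space.
```
import numpy as np

# Hypothetical seed dictionary of (old English, modern English) lemma pairs.
seed_pairs = [('thou', 'you'), ('dost', 'do'), ('raiment', 'clothing')]

# Stand-ins for the two vector spaces (random vectors, just for illustration).
rng = np.random.default_rng(0)
old_space = {w: rng.normal(size=300) for w, _ in seed_pairs}
modern_space = {w: rng.normal(size=160) for _, w in seed_pairs}

# Stack the paired vectors: X holds old-English vectors, Y the modern ones.
X = np.vstack([old_space[o] for o, _ in seed_pairs])     # shape (n_pairs, 300)
Y = np.vstack([modern_space[m] for _, m in seed_pairs])  # shape (n_pairs, 160)

# Least-squares translation matrix W such that X @ W is approximately Y.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

def translate(old_vec, W):
    """Map a vector from the old-English space into the modern space."""
    return old_vec @ W

def nearest(vec, space, k=5):
    """k nearest neighbours of vec in a {word: vector} space, by cosine similarity."""
    words = list(space)
    M = np.vstack([space[w] for w in words])
    sims = M @ vec / (np.linalg.norm(M, axis=1) * np.linalg.norm(vec) + 1e-9)
    order = np.argsort(-sims)[:k]
    return [(words[i], float(sims[i])) for i in order]

print(nearest(translate(old_space['thou'], W), modern_space))
```
With a seed dictionary of a few hundred lemma pairs, the same two lines (`np.vstack` plus `np.linalg.lstsq`) should give a reasonable first linear map between the Shakespeare and UMBC spaces.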
<a href="https://colab.research.google.com/github/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_04_3_regression.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # T81-558: Applications of Deep Neural Networks **Module 4: Training for Tabular Data** * Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx) * For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). # Module 4 Material * Part 4.1: Encoding a Feature Vector for Keras Deep Learning [[Video]](https://www.youtube.com/watch?v=Vxz-gfs9nMQ&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_04_1_feature_encode.ipynb) * Part 4.2: Keras Multiclass Classification for Deep Neural Networks with ROC and AUC [[Video]](https://www.youtube.com/watch?v=-f3bg9dLMks&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_04_2_multi_class.ipynb) * **Part 4.3: Keras Regression for Deep Neural Networks with RMSE** [[Video]](https://www.youtube.com/watch?v=wNhBUC6X5-E&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_04_3_regression.ipynb) * Part 4.4: Backpropagation, Nesterov Momentum, and ADAM Neural Network Training [[Video]](https://www.youtube.com/watch?v=VbDg8aBgpck&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_04_4_backprop.ipynb) * Part 4.5: Neural Network RMSE and Log Loss Error Calculation from Scratch [[Video]](https://www.youtube.com/watch?v=wmQX1t2PHJc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_04_5_rmse_logloss.ipynb) # Google CoLab Instructions The following code ensures that Google CoLab is running the correct version of TensorFlow. ``` try: %tensorflow_version 2.x COLAB = True print("Note: using Google CoLab") except: print("Note: not using Google CoLab") COLAB = False ``` # Part 4.3: Keras Regression for Deep Neural Networks with RMSE Regression results are evaluated differently than classification. Consider the following code that trains a neural network for regression on the data set **jh-simple-dataset.csv**. 
``` import pandas as pd from scipy.stats import zscore from sklearn.model_selection import train_test_split import matplotlib.pyplot as plt # Read the data set df = pd.read_csv( "https://data.heatonresearch.com/data/t81-558/jh-simple-dataset.csv", na_values=['NA','?']) # Generate dummies for job df = pd.concat([df,pd.get_dummies(df['job'],prefix="job")],axis=1) df.drop('job', axis=1, inplace=True) # Generate dummies for area df = pd.concat([df,pd.get_dummies(df['area'],prefix="area")],axis=1) df.drop('area', axis=1, inplace=True) # Generate dummies for product df = pd.concat([df,pd.get_dummies(df['product'],prefix="product")],axis=1) df.drop('product', axis=1, inplace=True) # Missing values for income med = df['income'].median() df['income'] = df['income'].fillna(med) # Standardize ranges df['income'] = zscore(df['income']) df['aspect'] = zscore(df['aspect']) df['save_rate'] = zscore(df['save_rate']) df['subscriptions'] = zscore(df['subscriptions']) # Convert to numpy - Classification x_columns = df.columns.drop('age').drop('id') x = df[x_columns].values y = df['age'].values # Create train/test x_train, x_test, y_train, y_test = train_test_split( x, y, test_size=0.25, random_state=42) from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Activation from tensorflow.keras.callbacks import EarlyStopping # Build the neural network model = Sequential() model.add(Dense(25, input_dim=x.shape[1], activation='relu')) # Hidden 1 model.add(Dense(10, activation='relu')) # Hidden 2 model.add(Dense(1)) # Output model.compile(loss='mean_squared_error', optimizer='adam') monitor = EarlyStopping(monitor='val_loss', min_delta=1e-3, patience=5, verbose=1, mode='auto', restore_best_weights=True) model.fit(x_train,y_train,validation_data=(x_test,y_test),callbacks=[monitor],verbose=2,epochs=1000) ``` ### Mean Square Error The mean square error is the sum of the squared differences between the prediction ($\hat{y}$) and the expected ($y$). MSE values are not of a particular unit. If an MSE value has decreased for a model, that is good. However, beyond this, there is not much more you can determine. Low MSE values are desired. $ \mbox{MSE} = \frac{1}{n} \sum_{i=1}^n \left(\hat{y}_i - y_i\right)^2 $ ``` from sklearn import metrics # Predict pred = model.predict(x_test) # Measure MSE error. score = metrics.mean_squared_error(pred,y_test) print("Final score (MSE): {}".format(score)) ``` ### Root Mean Square Error The root mean square (RMSE) is essentially the square root of the MSE. Because of this, the RMSE error is in the same units as the training data outcome. Low RMSE values are desired. $ \mbox{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^n \left(\hat{y}_i - y_i\right)^2} $ ``` import numpy as np # Measure RMSE error. RMSE is common for regression. score = np.sqrt(metrics.mean_squared_error(pred,y_test)) print("Final score (RMSE): {}".format(score)) ``` ### Lift Chart To generate a lift chart, perform the following activities: * Sort the data by expected output. Plot the blue line above. * For every point on the x-axis plot the predicted value for that same data point. This is the green line above. * The x-axis is just 0 to 100% of the dataset. The expected always starts low and ends high. * The y-axis is ranged according to the values predicted. Reading a lift chart: * The expected and predict lines should be close. Notice where one is above the ot other. * The below chart is the most accurate on lower age. ``` # Regression chart. 
def chart_regression(pred, y, sort=True): t = pd.DataFrame({'pred': pred, 'y': y.flatten()}) if sort: t.sort_values(by=['y'], inplace=True) plt.plot(t['y'].tolist(), label='expected') plt.plot(t['pred'].tolist(), label='prediction') plt.ylabel('output') plt.legend() plt.show() # Plot the chart chart_regression(pred.flatten(),y_test) ```
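Since RMSE is just the square root of MSE, it is easy to sanity-check the scikit-learn numbers by hand. A minimal sketch with made-up arrays (in the notebook you would substitute `pred` and `y_test` from above):
```
import numpy as np

# Stand-in arrays; substitute pred and y_test from the cells above.
y_true = np.array([39.0, 41.5, 36.2, 52.8])
y_pred = np.array([38.1, 43.0, 35.0, 50.9])

mse = np.mean((y_pred - y_true) ** 2)  # mean of squared residuals
rmse = np.sqrt(mse)                    # back in the units of the target (age)
print(f"MSE:  {mse:.4f}")
print(f"RMSE: {rmse:.4f}")
```
The values match `metrics.mean_squared_error` and its square root, since that is all those helpers compute.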
# About this Notebook In this notebook, we provide the tensor factorization implementation using an iterative Alternating Least Square (ALS), which is a good starting point for understanding tensor factorization. ``` import numpy as np from numpy.linalg import inv as inv ``` # Part 1: Matrix Computation Concepts ## 1) Kronecker product - **Definition**: Given two matrices $A\in\mathbb{R}^{m_1\times n_1}$ and $B\in\mathbb{R}^{m_2\times n_2}$, then, the **Kronecker product** between these two matrices is defined as $$A\otimes B=\left[ \begin{array}{cccc} a_{11}B & a_{12}B & \cdots & a_{1m_2}B \\ a_{21}B & a_{22}B & \cdots & a_{2m_2}B \\ \vdots & \vdots & \ddots & \vdots \\ a_{m_11}B & a_{m_12}B & \cdots & a_{m_1m_2}B \\ \end{array} \right]$$ where the symbol $\otimes$ denotes Kronecker product, and the size of resulted $A\otimes B$ is $(m_1m_2)\times (n_1n_2)$ (i.e., $m_1\times m_2$ columns and $n_1\times n_2$ rows). - **Example**: If $A=\left[ \begin{array}{cc} 1 & 2 \\ 3 & 4 \\ \end{array} \right]$ and $B=\left[ \begin{array}{ccc} 5 & 6 & 7\\ 8 & 9 & 10 \\ \end{array} \right]$, then, we have $$A\otimes B=\left[ \begin{array}{cc} 1\times \left[ \begin{array}{ccc} 5 & 6 & 7\\ 8 & 9 & 10\\ \end{array} \right] & 2\times \left[ \begin{array}{ccc} 5 & 6 & 7\\ 8 & 9 & 10\\ \end{array} \right] \\ 3\times \left[ \begin{array}{ccc} 5 & 6 & 7\\ 8 & 9 & 10\\ \end{array} \right] & 4\times \left[ \begin{array}{ccc} 5 & 6 & 7\\ 8 & 9 & 10\\ \end{array} \right] \\ \end{array} \right]$$ $$=\left[ \begin{array}{cccccc} 5 & 6 & 7 & 10 & 12 & 14 \\ 8 & 9 & 10 & 16 & 18 & 20 \\ 15 & 18 & 21 & 20 & 24 & 28 \\ 24 & 27 & 30 & 32 & 36 & 40 \\ \end{array} \right]\in\mathbb{R}^{4\times 6}.$$ ## 2) Khatri-Rao product (`kr_prod`) - **Definition**: Given two matrices $A=\left( \boldsymbol{a}_1,\boldsymbol{a}_2,...,\boldsymbol{a}_r \right)\in\mathbb{R}^{m\times r}$ and $B=\left( \boldsymbol{b}_1,\boldsymbol{b}_2,...,\boldsymbol{b}_r \right)\in\mathbb{R}^{n\times r}$ with same number of columns, then, the **Khatri-Rao product** (or **column-wise Kronecker product**) between $A$ and $B$ is given as follows, $$A\odot B=\left( \boldsymbol{a}_1\otimes \boldsymbol{b}_1,\boldsymbol{a}_2\otimes \boldsymbol{b}_2,...,\boldsymbol{a}_r\otimes \boldsymbol{b}_r \right)\in\mathbb{R}^{(mn)\times r},$$ where the symbol $\odot$ denotes Khatri-Rao product, and $\otimes$ denotes Kronecker product. 
- **Example**: If $A=\left[ \begin{array}{cc} 1 & 2 \\ 3 & 4 \\ \end{array} \right]=\left( \boldsymbol{a}_1,\boldsymbol{a}_2 \right) $ and $B=\left[ \begin{array}{cc} 5 & 6 \\ 7 & 8 \\ 9 & 10 \\ \end{array} \right]=\left( \boldsymbol{b}_1,\boldsymbol{b}_2 \right) $, then, we have $$A\odot B=\left( \boldsymbol{a}_1\otimes \boldsymbol{b}_1,\boldsymbol{a}_2\otimes \boldsymbol{b}_2 \right) $$ $$=\left[ \begin{array}{cc} \left[ \begin{array}{c} 1 \\ 3 \\ \end{array} \right]\otimes \left[ \begin{array}{c} 5 \\ 7 \\ 9 \\ \end{array} \right] & \left[ \begin{array}{c} 2 \\ 4 \\ \end{array} \right]\otimes \left[ \begin{array}{c} 6 \\ 8 \\ 10 \\ \end{array} \right] \\ \end{array} \right]$$ $$=\left[ \begin{array}{cc} 5 & 12 \\ 7 & 16 \\ 9 & 20 \\ 15 & 24 \\ 21 & 32 \\ 27 & 40 \\ \end{array} \right]\in\mathbb{R}^{6\times 2}.$$ ``` def kr_prod(a, b): return np.einsum('ir, jr -> ijr', a, b).reshape(a.shape[0] * b.shape[0], -1) A = np.array([[1, 2], [3, 4]]) B = np.array([[5, 6], [7, 8], [9, 10]]) print(kr_prod(A, B)) ``` ## 3) CP decomposition ### CP Combination (`cp_combination`) - **Definition**: The CP decomposition factorizes a tensor into a sum of outer products of vectors. For example, for a third-order tensor $\mathcal{Y}\in\mathbb{R}^{m\times n\times f}$, the CP decomposition can be written as $$\hat{\mathcal{Y}}=\sum_{s=1}^{r}\boldsymbol{u}_{s}\circ\boldsymbol{v}_{s}\circ\boldsymbol{x}_{s},$$ or element-wise, $$\hat{y}_{ijt}=\sum_{s=1}^{r}u_{is}v_{js}x_{ts},\forall (i,j,t),$$ where vectors $\boldsymbol{u}_{s}\in\mathbb{R}^{m},\boldsymbol{v}_{s}\in\mathbb{R}^{n},\boldsymbol{x}_{s}\in\mathbb{R}^{f}$ are columns of factor matrices $U\in\mathbb{R}^{m\times r},V\in\mathbb{R}^{n\times r},X\in\mathbb{R}^{f\times r}$, respectively. The symbol $\circ$ denotes vector outer product. - **Example**: Given matrices $U=\left[ \begin{array}{cc} 1 & 2 \\ 3 & 4 \\ \end{array} \right]\in\mathbb{R}^{2\times 2}$, $V=\left[ \begin{array}{cc} 1 & 2 \\ 3 & 4 \\ 5 & 6 \\ \end{array} \right]\in\mathbb{R}^{3\times 2}$ and $X=\left[ \begin{array}{cc} 1 & 5 \\ 2 & 6 \\ 3 & 7 \\ 4 & 8 \\ \end{array} \right]\in\mathbb{R}^{4\times 2}$, then if $\hat{\mathcal{Y}}=\sum_{s=1}^{r}\boldsymbol{u}_{s}\circ\boldsymbol{v}_{s}\circ\boldsymbol{x}_{s}$, then, we have $$\hat{Y}_1=\hat{\mathcal{Y}}(:,:,1)=\left[ \begin{array}{ccc} 31 & 42 & 65 \\ 63 & 86 & 135 \\ \end{array} \right],$$ $$\hat{Y}_2=\hat{\mathcal{Y}}(:,:,2)=\left[ \begin{array}{ccc} 38 & 52 & 82 \\ 78 & 108 & 174 \\ \end{array} \right],$$ $$\hat{Y}_3=\hat{\mathcal{Y}}(:,:,3)=\left[ \begin{array}{ccc} 45 & 62 & 99 \\ 93 & 130 & 213 \\ \end{array} \right],$$ $$\hat{Y}_4=\hat{\mathcal{Y}}(:,:,4)=\left[ \begin{array}{ccc} 52 & 72 & 116 \\ 108 & 152 & 252 \\ \end{array} \right].$$ ``` def cp_combine(U, V, X): return np.einsum('is, js, ts -> ijt', U, V, X) U = np.array([[1, 2], [3, 4]]) V = np.array([[1, 3], [2, 4], [5, 6]]) X = np.array([[1, 5], [2, 6], [3, 7], [4, 8]]) print(cp_combine(U, V, X)) print() print('tensor size:') print(cp_combine(U, V, X).shape) ``` ## 4) Tensor Unfolding (`ten2mat`) Using numpy reshape to perform 3rd rank tensor unfold operation. 
[[**link**](https://stackoverflow.com/questions/49970141/using-numpy-reshape-to-perform-3rd-rank-tensor-unfold-operation)] ``` def ten2mat(tensor, mode): return np.reshape(np.moveaxis(tensor, mode, 0), (tensor.shape[mode], -1), order = 'F') X = np.array([[[1, 2, 3, 4], [3, 4, 5, 6]], [[5, 6, 7, 8], [7, 8, 9, 10]], [[9, 10, 11, 12], [11, 12, 13, 14]]]) print('tensor size:') print(X.shape) print('original tensor:') print(X) print() print('(1) mode-1 tensor unfolding:') print(ten2mat(X, 0)) print() print('(2) mode-2 tensor unfolding:') print(ten2mat(X, 1)) print() print('(3) mode-3 tensor unfolding:') print(ten2mat(X, 2)) ``` # Part 2: Tensor CP Factorization using ALS (TF-ALS) Regarding CP factorization as a machine learning problem, we could perform a learning task by minimizing the loss function over factor matrices, that is, $$\min _{U, V, X} \sum_{(i, j, t) \in \Omega}\left(y_{i j t}-\sum_{r=1}^{R}u_{ir}v_{jr}x_{tr}\right)^{2}.$$ Within this optimization problem, multiplication among three factor matrices (acted as parameters) makes this problem difficult. Alternatively, we apply the ALS algorithm for CP factorization. In particular, the optimization problem for each row $\boldsymbol{u}_{i}\in\mathbb{R}^{R},\forall i\in\left\{1,2,...,M\right\}$ of factor matrix $U\in\mathbb{R}^{M\times R}$ is given by $$\min _{\boldsymbol{u}_{i}} \sum_{j,t:(i, j, t) \in \Omega}\left[y_{i j t}-\boldsymbol{u}_{i}^\top\left(\boldsymbol{x}_{t}\odot\boldsymbol{v}_{j}\right)\right]\left[y_{i j t}-\boldsymbol{u}_{i}^\top\left(\boldsymbol{x}_{t}\odot\boldsymbol{v}_{j}\right)\right]^\top.$$ The least square for this optimization is $$u_{i} \Leftarrow\left(\sum_{j, t, i, j, t ) \in \Omega} \left(x_{t} \odot v_{j}\right)\left(x_{t} \odot v_{j}\right)^{\top}\right)^{-1}\left(\sum_{j, t :(i, j, t) \in \Omega} y_{i j t} \left(x_{t} \odot v_{j}\right)\right), \forall i \in\{1,2, \ldots, M\}.$$ The alternating least squares for $V\in\mathbb{R}^{N\times R}$ and $X\in\mathbb{R}^{T\times R}$ are $$\boldsymbol{v}_{j}\Leftarrow\left(\sum_{i,t:(i,j,t)\in\Omega}\left(\boldsymbol{x}_{t}\odot\boldsymbol{u}_{i}\right)\left(\boldsymbol{x}_{t}\odot\boldsymbol{u}_{i}\right)^\top\right)^{-1}\left(\sum_{i,t:(i,j,t)\in\Omega}y_{ijt}\left(\boldsymbol{x}_{t}\odot\boldsymbol{u}_{i}\right)\right),\forall j\in\left\{1,2,...,N\right\},$$ $$\boldsymbol{x}_{t}\Leftarrow\left(\sum_{i,j:(i,j,t)\in\Omega}\left(\boldsymbol{v}_{j}\odot\boldsymbol{u}_{i}\right)\left(\boldsymbol{v}_{j}\odot\boldsymbol{u}_{i}\right)^\top\right)^{-1}\left(\sum_{i,j:(i,j,t)\in\Omega}y_{ijt}\left(\boldsymbol{v}_{j}\odot\boldsymbol{u}_{i}\right)\right),\forall t\in\left\{1,2,...,T\right\}.$$ ``` def CP_ALS(sparse_tensor, rank, maxiter): dim1, dim2, dim3 = sparse_tensor.shape dim = np.array([dim1, dim2, dim3]) U = 0.1 * np.random.rand(dim1, rank) V = 0.1 * np.random.rand(dim2, rank) X = 0.1 * np.random.rand(dim3, rank) pos = np.where(sparse_tensor != 0) binary_tensor = np.zeros((dim1, dim2, dim3)) binary_tensor[pos] = 1 tensor_hat = np.zeros((dim1, dim2, dim3)) for iters in range(maxiter): for order in range(dim.shape[0]): if order == 0: var1 = kr_prod(X, V).T elif order == 1: var1 = kr_prod(X, U).T else: var1 = kr_prod(V, U).T var2 = kr_prod(var1, var1) var3 = np.matmul(var2, ten2mat(binary_tensor, order).T).reshape([rank, rank, dim[order]]) var4 = np.matmul(var1, ten2mat(sparse_tensor, order).T) for i in range(dim[order]): var_Lambda = var3[ :, :, i] inv_var_Lambda = inv((var_Lambda + var_Lambda.T)/2 + 10e-12 * np.eye(rank)) vec = np.matmul(inv_var_Lambda, var4[:, i]) if 
order == 0: U[i, :] = vec.copy() elif order == 1: V[i, :] = vec.copy() else: X[i, :] = vec.copy() tensor_hat = cp_combine(U, V, X) mape = np.sum(np.abs(sparse_tensor[pos] - tensor_hat[pos])/sparse_tensor[pos])/sparse_tensor[pos].shape[0] rmse = np.sqrt(np.sum((sparse_tensor[pos] - tensor_hat[pos]) ** 2)/sparse_tensor[pos].shape[0]) if (iters + 1) % 100 == 0: print('Iter: {}'.format(iters + 1)) print('Training MAPE: {:.6}'.format(mape)) print('Training RMSE: {:.6}'.format(rmse)) print() return tensor_hat, U, V, X ``` # Part 3: Data Organization ## 1) Matrix Structure We consider a dataset of $m$ discrete time series $\boldsymbol{y}_{i}\in\mathbb{R}^{f},i\in\left\{1,2,...,m\right\}$. The time series may have missing elements. We express spatio-temporal dataset as a matrix $Y\in\mathbb{R}^{m\times f}$ with $m$ rows (e.g., locations) and $f$ columns (e.g., discrete time intervals), $$Y=\left[ \begin{array}{cccc} y_{11} & y_{12} & \cdots & y_{1f} \\ y_{21} & y_{22} & \cdots & y_{2f} \\ \vdots & \vdots & \ddots & \vdots \\ y_{m1} & y_{m2} & \cdots & y_{mf} \\ \end{array} \right]\in\mathbb{R}^{m\times f}.$$ ## 2) Tensor Structure We consider a dataset of $m$ discrete time series $\boldsymbol{y}_{i}\in\mathbb{R}^{nf},i\in\left\{1,2,...,m\right\}$. The time series may have missing elements. We partition each time series into intervals of predifined length $f$. We express each partitioned time series as a matrix $Y_{i}$ with $n$ rows (e.g., days) and $f$ columns (e.g., discrete time intervals per day), $$Y_{i}=\left[ \begin{array}{cccc} y_{11} & y_{12} & \cdots & y_{1f} \\ y_{21} & y_{22} & \cdots & y_{2f} \\ \vdots & \vdots & \ddots & \vdots \\ y_{n1} & y_{n2} & \cdots & y_{nf} \\ \end{array} \right]\in\mathbb{R}^{n\times f},i=1,2,...,m,$$ therefore, the resulting structure is a tensor $\mathcal{Y}\in\mathbb{R}^{m\times n\times f}$. **How to transform a data set into something we can use for time series imputation?** # Part 4: Experiments on Guangzhou Data Set ``` import scipy.io tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/tensor.mat') dense_tensor = tensor['tensor'] random_matrix = scipy.io.loadmat('../datasets/Guangzhou-data-set/random_matrix.mat') random_matrix = random_matrix['random_matrix'] random_tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/random_tensor.mat') random_tensor = random_tensor['random_tensor'] missing_rate = 0.2 # ============================================================================= ### Random missing (RM) scenario: binary_tensor = np.round(random_tensor + 0.5 - missing_rate) # ============================================================================= # ============================================================================= ### Non-random missing (NM) scenario: # binary_tensor = np.zeros(dense_tensor.shape) # for i1 in range(dense_tensor.shape[0]): # for i2 in range(dense_tensor.shape[1]): # binary_tensor[i1, i2, :] = np.round(random_matrix[i1, i2] + 0.5 - missing_rate) # ============================================================================= sparse_tensor = np.multiply(dense_tensor, binary_tensor) ``` **Question**: Given only the partially observed data $\mathcal{Y}\in\mathbb{R}^{m\times n\times f}$, how can we impute the unknown missing values? The main influential factors for such imputation model are: - `rank`. - `maxiter`. 
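Before running the experiments, here is a small sketch that addresses the data-organization question raised in Part 3: how to turn a spatio-temporal matrix of shape (m, n*f) into the (m, n, f) tensor the factorization expects. The numbers below are toy values; the Seattle experiments in Part 8 apply exactly this kind of reshape (`dense_mat.reshape([dense_mat.shape[0], 28, 288])`).
```
import numpy as np

# Toy example: m = 2 locations, n = 3 days, f = 4 time intervals per day.
m, n, f = 2, 3, 4
mat = np.arange(m * n * f).reshape(m, n * f)  # spatio-temporal matrix, shape (m, n*f)

# Partition each row (one long time series) into n days of f intervals each.
tensor = mat.reshape(m, n, f)
print(mat.shape, '->', tensor.shape)          # (2, 12) -> (2, 3, 4)

# The inverse operation recovers the original matrix.
assert np.array_equal(tensor.reshape(m, n * f), mat)
```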
``` import time start = time.time() rank = 80 maxiter = 1000 tensor_hat, U, V, X = CP_ALS(sparse_tensor, rank, maxiter) pos = np.where((dense_tensor != 0) & (sparse_tensor == 0)) final_mape = np.sum(np.abs(dense_tensor[pos] - tensor_hat[pos])/dense_tensor[pos])/dense_tensor[pos].shape[0] final_rmse = np.sqrt(np.sum((dense_tensor[pos] - tensor_hat[pos]) ** 2)/dense_tensor[pos].shape[0]) print('Final Imputation MAPE: {:.6}'.format(final_mape)) print('Final Imputation RMSE: {:.6}'.format(final_rmse)) print() end = time.time() print('Running time: %d seconds'%(end - start)) ``` **Experiment results** of missing data imputation using TF-ALS: | scenario |`rank`| `maxiter`| mape | rmse | |:----------|-----:|---------:|-----------:|----------:| |**20%, RM**| 80 | 1000 | **0.0833** | **3.5928**| |**40%, RM**| 80 | 1000 | **0.0837** | **3.6190**| |**20%, NM**| 10 | 1000 | **0.1027** | **4.2960**| |**40%, NM**| 10 | 1000 | **0.1028** | **4.3274**| # Part 5: Experiments on Birmingham Data Set ``` import scipy.io tensor = scipy.io.loadmat('../datasets/Birmingham-data-set/tensor.mat') dense_tensor = tensor['tensor'] random_matrix = scipy.io.loadmat('../datasets/Birmingham-data-set/random_matrix.mat') random_matrix = random_matrix['random_matrix'] random_tensor = scipy.io.loadmat('../datasets/Birmingham-data-set/random_tensor.mat') random_tensor = random_tensor['random_tensor'] missing_rate = 0.3 # ============================================================================= ### Random missing (RM) scenario: binary_tensor = np.round(random_tensor + 0.5 - missing_rate) # ============================================================================= # ============================================================================= ### Non-random missing (NM) scenario: # binary_tensor = np.zeros(dense_tensor.shape) # for i1 in range(dense_tensor.shape[0]): # for i2 in range(dense_tensor.shape[1]): # binary_tensor[i1, i2, :] = np.round(random_matrix[i1,i2] + 0.5 - missing_rate) # ============================================================================= sparse_tensor = np.multiply(dense_tensor, binary_tensor) import time start = time.time() rank = 30 maxiter = 1000 tensor_hat, U, V, X = CP_ALS(sparse_tensor, rank, maxiter) pos = np.where((dense_tensor != 0) & (sparse_tensor == 0)) final_mape = np.sum(np.abs(dense_tensor[pos] - tensor_hat[pos])/dense_tensor[pos])/dense_tensor[pos].shape[0] final_rmse = np.sqrt(np.sum((dense_tensor[pos] - tensor_hat[pos]) ** 2)/dense_tensor[pos].shape[0]) print('Final Imputation MAPE: {:.6}'.format(final_mape)) print('Final Imputation RMSE: {:.6}'.format(final_rmse)) print() end = time.time() print('Running time: %d seconds'%(end - start)) ``` **Experiment results** of missing data imputation using TF-ALS: | scenario |`rank`| `maxiter`| mape | rmse | |:----------|-----:|---------:|-----------:|-----------:| |**10%, RM**| 30 | 1000 | **0.0615** | **18.5005**| |**30%, RM**| 30 | 1000 | **0.0583** | **18.9148**| |**10%, NM**| 10 | 1000 | **0.1447** | **41.6710**| |**30%, NM**| 10 | 1000 | **0.1765** | **63.8465**| # Part 6: Experiments on Hangzhou Data Set ``` import scipy.io tensor = scipy.io.loadmat('../datasets/Hangzhou-data-set/tensor.mat') dense_tensor = tensor['tensor'] random_matrix = scipy.io.loadmat('../datasets/Hangzhou-data-set/random_matrix.mat') random_matrix = random_matrix['random_matrix'] random_tensor = scipy.io.loadmat('../datasets/Hangzhou-data-set/random_tensor.mat') random_tensor = random_tensor['random_tensor'] missing_rate = 0.4 # 
============================================================================= ### Random missing (RM) scenario: binary_tensor = np.round(random_tensor + 0.5 - missing_rate) # ============================================================================= # ============================================================================= ### Non-random missing (NM) scenario: # binary_tensor = np.zeros(dense_tensor.shape) # for i1 in range(dense_tensor.shape[0]): # for i2 in range(dense_tensor.shape[1]): # binary_tensor[i1, i2, :] = np.round(random_matrix[i1, i2] + 0.5 - missing_rate) # ============================================================================= sparse_tensor = np.multiply(dense_tensor, binary_tensor) import time start = time.time() rank = 50 maxiter = 1000 tensor_hat, U, V, X = CP_ALS(sparse_tensor, rank, maxiter) pos = np.where((dense_tensor != 0) & (sparse_tensor == 0)) final_mape = np.sum(np.abs(dense_tensor[pos] - tensor_hat[pos])/dense_tensor[pos])/dense_tensor[pos].shape[0] final_rmse = np.sqrt(np.sum((dense_tensor[pos] - tensor_hat[pos]) ** 2)/dense_tensor[pos].shape[0]) print('Final Imputation MAPE: {:.6}'.format(final_mape)) print('Final Imputation RMSE: {:.6}'.format(final_rmse)) print() end = time.time() print('Running time: %d seconds'%(end - start)) ``` **Experiment results** of missing data imputation using TF-ALS: | scenario |`rank`| `maxiter`| mape | rmse | |:----------|-----:|---------:|-----------:|----------:| |**20%, RM**| 50 | 1000 | **0.1991** |**111.303**| |**40%, RM**| 50 | 1000 | **0.2098** |**100.315**| |**20%, NM**| 5 | 1000 | **0.2837** |**42.6136**| |**40%, NM**| 5 | 1000 | **0.2811** |**38.4201**| # Part 7: Experiments on New York Data Set ``` import scipy.io tensor = scipy.io.loadmat('../datasets/NYC-data-set/tensor.mat') dense_tensor = tensor['tensor'] rm_tensor = scipy.io.loadmat('../datasets/NYC-data-set/rm_tensor.mat') rm_tensor = rm_tensor['rm_tensor'] nm_tensor = scipy.io.loadmat('../datasets/NYC-data-set/nm_tensor.mat') nm_tensor = nm_tensor['nm_tensor'] missing_rate = 0.1 # ============================================================================= ### Random missing (RM) scenario ### Set the RM scenario by: # binary_tensor = np.round(rm_tensor + 0.5 - missing_rate) # ============================================================================= # ============================================================================= ### Non-random missing (NM) scenario ### Set the NM scenario by: binary_tensor = np.zeros(dense_tensor.shape) for i1 in range(dense_tensor.shape[0]): for i2 in range(dense_tensor.shape[1]): for i3 in range(61): binary_tensor[i1, i2, i3 * 24 : (i3 + 1) * 24] = np.round(nm_tensor[i1, i2, i3] + 0.5 - missing_rate) # ============================================================================= sparse_tensor = np.multiply(dense_tensor, binary_tensor) import time start = time.time() rank = 30 maxiter = 1000 tensor_hat, U, V, X = CP_ALS(sparse_tensor, rank, maxiter) pos = np.where((dense_tensor != 0) & (sparse_tensor == 0)) final_mape = np.sum(np.abs(dense_tensor[pos] - tensor_hat[pos])/dense_tensor[pos])/dense_tensor[pos].shape[0] final_rmse = np.sqrt(np.sum((dense_tensor[pos] - tensor_hat[pos]) ** 2)/dense_tensor[pos].shape[0]) print('Final Imputation MAPE: {:.6}'.format(final_mape)) print('Final Imputation RMSE: {:.6}'.format(final_rmse)) print() end = time.time() print('Running time: %d seconds'%(end - start)) ``` **Experiment results** of missing data imputation using TF-ALS: | scenario |`rank`| `maxiter`| mape | rmse | 
|:----------|-----:|---------:|-----------:|----------:| |**10%, RM**| 30 | 1000 | **0.5262** | **6.2444**| |**30%, RM**| 30 | 1000 | **0.5488** | **6.8968**| |**10%, NM**| 30 | 1000 | **0.5170** | **5.9863**| |**30%, NM**| 30 | 100 | **-** | **-**| # Part 8: Experiments on Seattle Data Set ``` import pandas as pd dense_mat = pd.read_csv('../datasets/Seattle-data-set/mat.csv', index_col = 0) RM_mat = pd.read_csv('../datasets/Seattle-data-set/RM_mat.csv', index_col = 0) dense_mat = dense_mat.values RM_mat = RM_mat.values dense_tensor = dense_mat.reshape([dense_mat.shape[0], 28, 288]) RM_tensor = RM_mat.reshape([RM_mat.shape[0], 28, 288]) missing_rate = 0.2 # ============================================================================= ### Random missing (RM) scenario ### Set the RM scenario by: binary_tensor = np.round(RM_tensor + 0.5 - missing_rate) # ============================================================================= sparse_tensor = np.multiply(dense_tensor, binary_tensor) import time start = time.time() rank = 50 maxiter = 1000 tensor_hat, U, V, X = CP_ALS(sparse_tensor, rank, maxiter) pos = np.where((dense_tensor != 0) & (sparse_tensor == 0)) final_mape = np.sum(np.abs(dense_tensor[pos] - tensor_hat[pos])/dense_tensor[pos])/dense_tensor[pos].shape[0] final_rmse = np.sqrt(np.sum((dense_tensor[pos] - tensor_hat[pos]) ** 2)/dense_tensor[pos].shape[0]) print('Final Imputation MAPE: {:.6}'.format(final_mape)) print('Final Imputation RMSE: {:.6}'.format(final_rmse)) print() end = time.time() print('Running time: %d seconds'%(end - start)) import pandas as pd dense_mat = pd.read_csv('../datasets/Seattle-data-set/mat.csv', index_col = 0) RM_mat = pd.read_csv('../datasets/Seattle-data-set/RM_mat.csv', index_col = 0) dense_mat = dense_mat.values RM_mat = RM_mat.values dense_tensor = dense_mat.reshape([dense_mat.shape[0], 28, 288]) RM_tensor = RM_mat.reshape([RM_mat.shape[0], 28, 288]) missing_rate = 0.4 # ============================================================================= ### Random missing (RM) scenario ### Set the RM scenario by: binary_tensor = np.round(RM_tensor + 0.5 - missing_rate) # ============================================================================= sparse_tensor = np.multiply(dense_tensor, binary_tensor) import time start = time.time() rank = 50 maxiter = 1000 tensor_hat, U, V, X = CP_ALS(sparse_tensor, rank, maxiter) pos = np.where((dense_tensor != 0) & (sparse_tensor == 0)) final_mape = np.sum(np.abs(dense_tensor[pos] - tensor_hat[pos])/dense_tensor[pos])/dense_tensor[pos].shape[0] final_rmse = np.sqrt(np.sum((dense_tensor[pos] - tensor_hat[pos]) ** 2)/dense_tensor[pos].shape[0]) print('Final Imputation MAPE: {:.6}'.format(final_mape)) print('Final Imputation RMSE: {:.6}'.format(final_rmse)) print() end = time.time() print('Running time: %d seconds'%(end - start)) import pandas as pd dense_mat = pd.read_csv('../datasets/Seattle-data-set/mat.csv', index_col = 0) NM_mat = pd.read_csv('../datasets/Seattle-data-set/NM_mat.csv', index_col = 0) dense_mat = dense_mat.values NM_mat = NM_mat.values dense_tensor = dense_mat.reshape([dense_mat.shape[0], 28, 288]) missing_rate = 0.2 # ============================================================================= ### Non-random missing (NM) scenario ### Set the NM scenario by: binary_tensor = np.zeros((dense_mat.shape[0], 28, 288)) for i1 in range(binary_tensor.shape[0]): for i2 in range(binary_tensor.shape[1]): binary_tensor[i1, i2, :] = np.round(NM_mat[i1, i2] + 0.5 - missing_rate) # 
============================================================================= sparse_tensor = np.multiply(dense_tensor, binary_tensor) import time start = time.time() rank = 10 maxiter = 1000 tensor_hat, U, V, X = CP_ALS(sparse_tensor, rank, maxiter) pos = np.where((dense_tensor != 0) & (sparse_tensor == 0)) final_mape = np.sum(np.abs(dense_tensor[pos] - tensor_hat[pos])/dense_tensor[pos])/dense_tensor[pos].shape[0] final_rmse = np.sqrt(np.sum((dense_tensor[pos] - tensor_hat[pos]) ** 2)/dense_tensor[pos].shape[0]) print('Final Imputation MAPE: {:.6}'.format(final_mape)) print('Final Imputation RMSE: {:.6}'.format(final_rmse)) print() end = time.time() print('Running time: %d seconds'%(end - start)) import pandas as pd dense_mat = pd.read_csv('../datasets/Seattle-data-set/mat.csv', index_col = 0) NM_mat = pd.read_csv('../datasets/Seattle-data-set/NM_mat.csv', index_col = 0) dense_mat = dense_mat.values NM_mat = NM_mat.values dense_tensor = dense_mat.reshape([dense_mat.shape[0], 28, 288]) missing_rate = 0.4 # ============================================================================= ### Non-random missing (NM) scenario ### Set the NM scenario by: binary_tensor = np.zeros((dense_mat.shape[0], 28, 288)) for i1 in range(binary_tensor.shape[0]): for i2 in range(binary_tensor.shape[1]): binary_tensor[i1, i2, :] = np.round(NM_mat[i1, i2] + 0.5 - missing_rate) # ============================================================================= sparse_tensor = np.multiply(dense_tensor, binary_tensor) import time start = time.time() rank = 10 maxiter = 1000 tensor_hat, U, V, X = CP_ALS(sparse_tensor, rank, maxiter) pos = np.where((dense_tensor != 0) & (sparse_tensor == 0)) final_mape = np.sum(np.abs(dense_tensor[pos] - tensor_hat[pos])/dense_tensor[pos])/dense_tensor[pos].shape[0] final_rmse = np.sqrt(np.sum((dense_tensor[pos] - tensor_hat[pos]) ** 2)/dense_tensor[pos].shape[0]) print('Final Imputation MAPE: {:.6}'.format(final_mape)) print('Final Imputation RMSE: {:.6}'.format(final_rmse)) print() end = time.time() print('Running time: %d seconds'%(end - start)) ``` **Experiment results** of missing data imputation using TF-ALS: | scenario |`rank`| `maxiter`| mape | rmse | |:----------|-----:|---------:|-----------:|----------:| |**20%, RM**| 50 | 1000 | **0.0742** |**4.4929**| |**40%, RM**| 50 | 1000 | **0.0758** |**4.5574**| |**20%, NM**| 10 | 1000 | **0.0995** |**5.6331**| |**40%, NM**| 10 | 1000 | **0.1004** |**5.7034**|
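The masking and evaluation logic is repeated verbatim for every data set above, so it can be convenient to factor it into two small helpers. This is only a sketch that mirrors the notebook's conventions: random-missing masks built with `np.round(random + 0.5 - missing_rate)`, and MAPE/RMSE measured on entries that are observed in the dense tensor but hidden in the sparse one.
```
import numpy as np

def rm_mask(random_tensor, missing_rate):
    """Random-missing binary mask: 1 keeps an entry, 0 hides it."""
    return np.round(random_tensor + 0.5 - missing_rate)

def imputation_scores(dense_tensor, sparse_tensor, tensor_hat):
    """MAPE and RMSE over entries observed in dense_tensor but masked in sparse_tensor."""
    pos = np.where((dense_tensor != 0) & (sparse_tensor == 0))
    err = dense_tensor[pos] - tensor_hat[pos]
    mape = np.mean(np.abs(err) / dense_tensor[pos])
    rmse = np.sqrt(np.mean(err ** 2))
    return mape, rmse

# Toy usage with stand-in tensors (a real run would pass the loaded data and the CP_ALS output):
rng = np.random.default_rng(0)
dense = rng.uniform(1, 10, size=(4, 5, 6))
sparse = dense * rm_mask(rng.uniform(0, 1, size=dense.shape), missing_rate=0.2)
mape, rmse = imputation_scores(dense, sparse, tensor_hat=np.full_like(dense, dense.mean()))
print('MAPE: {:.4f}, RMSE: {:.4f}'.format(mape, rmse))
```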
# Communication in Crisis ## Acquire Data: [Los Angeles Parking Citations](https://www.kaggle.com/cityofLA/los-angeles-parking-citations)<br> Load the dataset and filter for: - Citations issued from 2017-01-01 to 2021-04-12. - Street Sweeping violations - `Violation Description` == __"NO PARK/STREET CLEAN"__ Let's acquire the parking citations data from our file. 1. Import libraries. 1. Load the dataset. 1. Display the shape and first/last 2 rows. 1. Display general infomation about the dataset - w/ the # of unique values in each column. 1. Display the number of missing values in each column. 1. Descriptive statistics for all numeric features. ``` # Import libraries import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns from scipy import stats import sys import time import folium.plugins as plugins from IPython.display import HTML import json import datetime import calplot import folium import math sns.set() from tqdm.notebook import tqdm import src # Filter warnings from warnings import filterwarnings filterwarnings('ignore') # Load the data df = src.get_sweep_data(prepared=False) # Display the shape and dtypes of each column print(df.shape) df.info() # Display the first two citations df.head(2) # Display the last two citations df.tail(2) # Display descriptive statistics of numeric columns df.describe() df.hist(figsize=(16, 8), bins=15) plt.tight_layout(); ``` __Initial findings__ - `Issue time` and `Marked Time` are quasi-normally distributed. Note: Poisson Distribution - It's interesting to see the distribution of our activity on earth follows a normal distribution. - Agencies 50+ write the most parking citations. - Most fine amounts are less than $100.00 - There are a few null or invalid license plates. # Prepare - Remove spaces + capitalization from each column name. - Cast `Plate Expiry Date` to datetime data type. - Cast `Issue Date` and `Issue Time` to datetime data types. - Drop columns missing >=74.42\% of their values. - Drop missing values. - Transform Latitude and Longitude columns from NAD1983StatePlaneCaliforniaVFIPS0405 feet projection to EPSG:4326 World Geodetic System 1984: used in GPS [Standard] - Filter data for street sweeping citations only. ``` # Prepare the data using a function stored in prepare.py df_citations = src.get_sweep_data(prepared=True) # Display the first two rows df_citations.head(2) # Check the column data types and non-null counts. df_citations.info() ``` # Exploration ## How much daily revenue is generated from street sweeper citations? ### Daily Revenue from Street Sweeper Citations Daily street sweeper citations increased in 2020. 
``` # Daily street sweeping citation revenue daily_revenue = df_citations.groupby('issue_date').fine_amount.sum() daily_revenue.index = pd.to_datetime(daily_revenue.index) df_sweep = src.street_sweep(data=df_citations) df_d = src.resample_period(data=df_sweep) df_m = src.resample_period(data=df_sweep, period='M') df_d.head() sns.set_context('talk') # Plot daily revenue from street sweeping citations df_d.revenue.plot(figsize=(14, 7), label='Revenue', color='DodgerBlue') plt.axhline(df_d.revenue.mean(skipna=True), color='black', label='Average Revenue') plt.title("Daily Revenue from Street Sweeping Citations") plt.xlabel('') plt.ylabel("Revenue (in thousand's)") plt.xticks(rotation=0, horizontalalignment='center', fontsize=13) plt.yticks(range(0, 1_000_000, 200_000), ['$0', '$200', '$400', '$600', '$800',]) plt.ylim(0, 1_000_000) plt.legend(loc=2, framealpha=.8); ``` > __Anomaly__: Between March 2020 and October 2020 a Local Emergency was Declared by the Mayor of Los Angeles in response to COVID-19. Street Sweeping was halted to help Angelenos Shelter in Place. _Street Sweeping resumed on 10/15/2020_. ### Anomaly: Declaration of Local Emergency ``` sns.set_context('talk') # Plot daily revenue from street sweeping citations df_d.revenue.plot(figsize=(14, 7), label='Revenue', color='DodgerBlue') plt.axvspan('2020-03-16', '2020-10-14', color='grey', alpha=.25) plt.text('2020-03-29', 890_000, 'Declaration of\nLocal Emergency', fontsize=11) plt.title("Daily Revenue from Street Sweeping Citations") plt.xlabel('') plt.ylabel("Revenue (in thousand's)") plt.xticks(rotation=0, horizontalalignment='center', fontsize=13) plt.yticks(range(0, 1_000_000, 200_000), ['$0', '$200', '$400', '$600', '$800',]) plt.ylim(0, 1_000_000) plt.legend(loc=2, framealpha=.8); sns.set_context('talk') # Plot daily revenue from street sweeping citations df_d.revenue.plot(figsize=(14, 7), label='Revenue', color='DodgerBlue') plt.axhline(df_d.revenue.mean(skipna=True), color='black', label='Average Revenue') plt.axvline(datetime.datetime(2020, 10, 15), color='red', linestyle="--", label='October 15, 2020') plt.title("Daily Revenue from Street Sweeping Citations") plt.xlabel('') plt.ylabel("Revenue (in thousand's)") plt.xticks(rotation=0, horizontalalignment='center', fontsize=13) plt.yticks(range(0, 1_000_000, 200_000), ['$0', '$200K', '$400K', '$600K', '$800K',]) plt.ylim(0, 1_000_000) plt.legend(loc=2, framealpha=.8); ``` ## Hypothesis Test ### General Inquiry Is the daily citation revenue after 10/15/2020 significantly greater than average? ### Z-Score $H_0$: The daily citation revenue after 10/15/2020 is less than or equal to the average daily revenue. $H_a$: The daily citation revenue after 10/15/2020 is significantly greater than average. 
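As a quick illustration of the directional test used below (made-up numbers, not the notebook's data), a single day's revenue can be converted into a z-score and a one-sided p-value like this; the cells that follow do the same thing for every day in the series.
```
from scipy import stats

# Hypothetical values: pre-COVID daily mean/std and one post-10/15/2020 daily revenue.
mean_precovid, std_precovid = 450_000, 120_000
revenue_today = 900_000

z = (revenue_today - mean_precovid) / std_precovid
p_one_sided = stats.norm.sf(z)      # P(Z > z), i.e. 1 - cdf(z)
alpha = (1 - 0.997) / 2             # significance threshold, matching the convention below

print(f"z = {z:.2f}, one-sided p = {p_one_sided:.5f}, reject H0: {p_one_sided < alpha}")
```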
``` confidence_interval = .997 # Directional Test alpha = (1 - confidence_interval)/2 # Data to calculate z-scores using precovid values to calculate the mean and std daily_revenue_precovid = df_d.loc[df_d.index < '2020-03-16']['revenue'] mean_precovid, std_precovid = daily_revenue_precovid.agg(['mean', 'std']).values mean, std = df_d.agg(['mean', 'std']).values # Calculating Z-Scores using precovid mean and std z_scores_precovid = (df_d.revenue - mean_precovid)/std_precovid z_scores_precovid.index = pd.to_datetime(z_scores_precovid.index) sig_zscores_pre_covid = z_scores_precovid[z_scores_precovid>3] # Calculating Z-Scores using entire data z_scores = (df_d.revenue - mean)/std z_scores.index = pd.to_datetime(z_scores.index) sig_zscores = z_scores[z_scores>3] sns.set_context('talk') plt.figure(figsize=(12, 6)) sns.histplot(data=z_scores_precovid, bins=50, label='preCOVID z-scores') sns.histplot(data=z_scores, bins=50, color='orange', label='z-scores') plt.title('Daily citation revenue after 10/15/2020 is significantly greater than average', fontsize=16) plt.xlabel('Standard Deviations') plt.ylabel('# of Days') plt.axvline(3, color='Black', linestyle="--", label='3 Standard Deviations') plt.xticks(np.linspace(-1, 9, 11)) plt.legend(fontsize=13); a = stats.zscore(daily_revenue) fig, ax = plt.subplots(figsize=(8, 8)) stats.probplot(a, plot=ax) plt.xlabel("Quantile of Normal Distribution") plt.ylabel("z-score"); ``` ### p-values ``` p_values_precovid = z_scores_precovid.apply(stats.norm.cdf) p_values = z_scores_precovid.apply(stats.norm.cdf) significant_dates_precovid = p_values_precovid[(1-p_values_precovid) < alpha] significant_dates = p_values[(1-p_values) < alpha] # The chance of an outcome occuring by random chance print(f'{alpha:0.3%}') ``` ### Cohen's D ``` fractions = [.1, .2, .5, .7, .9] cohen_d = [] for percentage in fractions: cohen_d_trial = [] for i in range(10000): sim = daily_revenue.sample(frac=percentage) sim_mean = sim.mean() d = (sim_mean - mean) / (std/math.sqrt(int(len(daily_revenue)*percentage))) cohen_d_trial.append(d) cohen_d.append(np.mean(cohen_d_trial)) cohen_d fractions = [.1, .2, .5, .7, .9] cohen_d_precovid = [] for percentage in fractions: cohen_d_trial = [] for i in range(10000): sim = daily_revenue_precovid.sample(frac=percentage) sim_mean = sim.mean() d = (sim_mean - mean_precovid) / (std_precovid/math.sqrt(int(len(daily_revenue_precovid)*percentage))) cohen_d_trial.append(d) cohen_d_precovid.append(np.mean(cohen_d_trial)) cohen_d_precovid ``` ### Significant Dates with less than a 0.15% chance of occuring - All dates that are considered significant occur after 10/15/2020 - In the two weeks following 10/15/2020 significant events occured on __Tuesday's and Wednesday's__. ``` dates_precovid = set(list(sig_zscores_pre_covid.index)) dates = set(list(sig_zscores.index)) common_dates = list(dates.intersection(dates_precovid)) common_dates = pd.to_datetime(common_dates).sort_values() sig_zscores pd.Series(common_dates.day_name(), common_dates) np.random.seed(sum(map(ord, 'calplot'))) all_days = pd.date_range('1/1/2020', '12/22/2020', freq='D') significant_events = pd.Series(np.ones_like(len(common_dates)), index=common_dates) calplot.calplot(significant_events, figsize=(18, 12), cmap='coolwarm_r'); ``` ## Which parts of the city were impacted the most? 
``` df_outliers = df_citations.loc[df_citations.issue_date.isin(list(common_dates.astype('str')))] df_outliers.reset_index(drop=True, inplace=True) print(df_outliers.shape) df_outliers.head() m = folium.Map(location=[34.0522, -118.2437], min_zoom=8, max_bounds=True) mc = plugins.MarkerCluster() for index, row in df_outliers.iterrows(): mc.add_child( folium.Marker(location=[str(row['latitude']), str(row['longitude'])], popup='Cited {} {} at {}'.format(row['day_of_week'], row['issue_date'], row['issue_time'][:-3]), control_scale=True, clustered_marker=True ) ) m.add_child(mc) ``` Transfering map to Tablaeu # Conclusions # Appendix ## What time(s) are Street Sweeping citations issued? Most citations are issued during the hours of 8am, 10am, and 12pm. ### Citation Times ``` # Filter street sweeping data for citations issued between # 8 am and 2 pm, 8 and 14 respectively. df_citation_times = df_citations.loc[(df_citations.issue_hour >= 8)&(df_citations.issue_hour < 14)] sns.set_context('talk') # Issue Hour Plot df_citation_times.issue_hour.value_counts().sort_index().plot.bar(figsize=(8, 6)) # Axis labels plt.title('Most Street Sweeper Citations are Issued at 8am') plt.xlabel('Issue Hour (24HR)') plt.ylabel('# of Citations (in thousands)') # Chart Formatting plt.xticks(rotation=0) plt.yticks(range(100_000, 400_001,100_000), ['100', '200', '300', '400']) plt.show() sns.set_context('talk') # Issue Minute Plot df_citation_times.issue_minute.value_counts().sort_index().plot.bar(figsize=(20, 9)) # Axis labels plt.title('Most Street Sweeper Citations are Issued in the First 30 Minutes') plt.xlabel('Issue Minute') plt.ylabel('# of Citations (in thousands)') # plt.axvspan(0, 30, facecolor='grey', alpha=0.1) # Chart Formatting plt.xticks(rotation=0) plt.yticks(range(5_000, 40_001, 5_000), ['5', '10', '15', '20', '25', '30', '35', '40']) plt.tight_layout() plt.show() ``` ## Which state has the most Street Sweeping violators? ### License Plate Over 90% of all street sweeping citations are issued to California Residents. ``` sns.set_context('talk') fig = df_citations.rp_state_plate.value_counts(normalize=True).nlargest(3).plot.bar(figsize=(12, 6)) # Chart labels plt.title('California residents receive the most street sweeping citations', fontsize=16) plt.xlabel('State') plt.ylabel('% of all Citations') # Tick Formatting plt.xticks(rotation=0) plt.yticks(np.linspace(0, 1, 11), labels=[f'{i:0.0%}' for i in np.linspace(0, 1, 11)]) plt.grid(axis='x', alpha=.5) plt.tight_layout(); ``` ## Which street has the most Street Sweeping citations? The characteristics of the top 3 streets: 1. Vehicles are parked bumper to bumper leaving few parking spaces available 2. 
Parking spaces have a set time limit ``` df_citations['street_name'] = df_citations.location.str.replace('^[\d+]{2,}', '').str.strip() sns.set_context('talk') # Removing the street number and white space from the address df_citations.street_name.value_counts().nlargest(3).plot.barh(figsize=(16, 6)) # Chart formatting plt.title('Streets with the Most Street Sweeping Citations', fontsize=24) plt.xlabel('# of Citations'); ``` ### __Abbot Kinney Blvd: "Small Boutiques, No Parking"__ > [Abbot Kinney Blvd on Google Maps](https://www.google.com/maps/@33.9923689,-118.4731719,3a,75y,112.99h,91.67t/data=!3m6!1e1!3m4!1sKD3cG40eGmdWxhwqLD1BvA!2e0!7i16384!8i8192) <img src="./visuals/abbot.png" alt="Abbot" style="width: 450px;" align="left"/> - Near Venice Beach - Small businesses and name brand stores line both sides of the street - Little to no parking in this area - Residential area inland - Multiplex style dwellings with available parking spaces - Weekly Street Sweeping on Monday from 7:30 am - 9:30 am ### __Clinton Street: "Packed Street"__ > [Clinton Street on Google Maps](https://www.google.com/maps/@34.0816611,-118.3306842,3a,75y,70.72h,57.92t/data=!3m9!1e1!3m7!1sdozFgC7Ms3EvaOF4-CeNAg!2e0!7i16384!8i8192!9m2!1b1!2i37) <img src="./visuals/clinton.png" alt="Clinton" style="width: 600px;" align="Left"/> - All parking spaces on the street are filled - Residential Area - Weekly Street Sweeping on Friday from 8:00 am - 11:00 am ### __Kelton Ave: "2 Hour Time Limit"__ > [Kelton Ave on Google Maps](https://www.google.com/maps/place/Kelton+Ave,+Los+Angeles,+CA/@34.0475262,-118.437594,3a,49.9y,183.92h,85.26t/data=!3m9!1e1!3m7!1s5VICHNYMVEk9utaV5egFYg!2e0!7i16384!8i8192!9m2!1b1!2i25!4m5!3m4!1s0x80c2bb7efb3a05eb:0xe155071f3fe49df3!8m2!3d34.0542999!4d-118.4434919) <img src="./visuals/kelton.png" width="600" height="600" align="left"/> - Most parking spaces on this street are available. This is due to the strict 2 hour time limit for parked vehicles without the proper exception permit. - Multiplex, Residential Area - Weekly Street Sweeping on Thursday from 10:00 am - 1:00 pm - Weekly Street Sweeping on Friday from 8:00 am - 10:00 am ## Which street has the most Street Sweeping citations, given the day of the week? - __Abbot Kinney Blvd__ is the most cited street on __Monday and Tuesday__ - __4th Street East__ is the most cited street on __Saturday and Sunday__ ``` # Group by the day of the week and street name df_day_street = df_citations.groupby(by=['day_of_week', 'street_name'])\ .size()\ .sort_values()\ .groupby(level=0)\ .tail(1)\ .reset_index()\ .rename(columns={0:'count'}) # Create a new column to sort the values by the day of the # week starting with Monday df_day_street['order'] = [5, 6, 4, 3, 0, 2, 1] # Display the street with the most street sweeping citations # given the day of the week. df_day_street.sort_values('order').set_index('order') ``` ## Which Agencies issue the most street sweeping citations? The Department of Transportation's __Western, Hollywood, and Valley__ subdivisions issue the most street sweeping citations. 
``` sns.set_context('talk') df_citations.agency.value_counts().nlargest(5).plot.barh(figsize=(12, 6)); # plt.axhspan(2.5, 5, facecolor='0.5', alpha=.8) plt.title('Agencies With the Most Street Sweeper Citations') plt.xlabel('# of Citations (in thousands)') plt.xticks(np.arange(0, 400_001, 100_000), list(np.arange(0, 401, 100))) plt.yticks([0, 1, 2, 3, 4], labels=['DOT-WESTERN', 'DOT-HOLLYWOOD', 'DOT-VALLEY', 'DOT-SOUTHERN', 'DOT-CENTRAL']); ``` When taking routes into consideration, __"Western"__ Subdivision, route 00500, has issued the most street sweeping citations. - Is route 00500 larger than other street sweeping routes? ``` top_3_routes = df_citations.groupby(['agency', 'route'])\ .size()\ .nlargest(3)\ .sort_index()\ .rename('num_citations')\ .reset_index()\ .sort_values(by='num_citations', ascending=False) top_3_routes.agency = ["DOT-WESTERN", "DOT-SOUTHERN", "DOT-CENTRAL"] data = top_3_routes.set_index(['agency', 'route']) data.plot(kind='barh', stacked=True, figsize=(12, 6), legend=None) plt.title("Agency-Route ID's with the most Street Sweeping Citations") plt.ylabel('') plt.xlabel('# of Citations (in thousands)') plt.xticks(np.arange(0, 70_001, 10_000), [str(i) for i in np.arange(0, 71, 10)]); df_citations['issue_time_num'] = df_citations.issue_time.str.replace(":00", '') df_citations['issue_time_num'] = df_citations.issue_time_num.str.replace(':', '').astype(np.int) ``` ## What is the weekly distibution of citation times? ``` sns.set_context('talk') plt.figure(figsize=(13, 12)) sns.boxplot(data=df_citations, x="day_of_week", y="issue_time_num", order=["Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday"], whis=3); plt.title("Distribution Citation Issue Times Throughout the Week") plt.xlabel('') plt.ylabel('Issue Time (24HR)') plt.yticks(np.arange(0, 2401, 200), [str(i) + ":00" for i in range(0, 25, 2)]); ```
##### Copyright 2018 The TensorFlow Authors. Licensed under the Apache License, Version 2.0 (the "License"). # Neural Machine Translation with Attention <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r2/tutorials/sequences/_nmt.ipynb"> <img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/r2/tutorials/sequences/_nmt.ipynb"> <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a> </td> </table> # This notebook is still under construction! Please come back later. This notebook trains a sequence to sequence (seq2seq) model for Spanish to English translation using TF 2.0 APIs. This is an advanced example that assumes some knowledge of sequence to sequence models. After training the model in this notebook, you will be able to input a Spanish sentence, such as *"¿todavia estan en casa?"*, and return the English translation: *"are you still at home?"* The translation quality is reasonable for a toy example, but the generated attention plot is perhaps more interesting. This shows which parts of the input sentence has the model's attention while translating: <img src="https://tensorflow.org/images/spanish-english.png" alt="spanish-english attention plot"> Note: This example takes approximately 10 mintues to run on a single P100 GPU. ``` import collections import io import itertools import os import random import re import time import unicodedata import numpy as np import tensorflow as tf assert tf.__version__.startswith('2') import matplotlib.pyplot as plt print(tf.__version__) ``` ## Download and prepare the dataset We'll use a language dataset provided by http://www.manythings.org/anki/. This dataset contains language translation pairs in the format: ``` May I borrow this book? ¿Puedo tomar prestado este libro? ``` There are a variety of languages available, but we'll use the English-Spanish dataset. For convenience, we've hosted a copy of this dataset on Google Cloud, but you can also download your own copy. After downloading the dataset, here are the steps we'll take to prepare the data: 1. Clean the sentences by removing special characters. 1. Add a *start* and *end* token to each sentence. 1. Create a word index and reverse word index (dictionaries mapping from word → id and id → word). 1. Pad each sentence to a maximum length. ``` # TODO(brianklee): This preprocessing should ideally be implemented in TF # because preprocessing should be exported as part of the SavedModel. # Converts the unicode file to ascii # https://stackoverflow.com/a/518232/2809427 def unicode_to_ascii(s): return ''.join(c for c in unicodedata.normalize('NFD', s) if unicodedata.category(c) != 'Mn') START_TOKEN = u'<start>' END_TOKEN = u'<end>' def preprocess_sentence(w): # remove accents; lowercase everything w = unicode_to_ascii(w.strip()).lower() # creating a space between a word and the punctuation following it # eg: "he is a boy." => "he is a boy ." # https://stackoverflow.com/a/3645931/3645946 w = re.sub(r'([?.!,¿])', r' \1 ', w) # replacing everything with space except (a-z, '.', '?', '!', ',') w = re.sub(r'[^a-z?.!,¿]+', ' ', w) # adding a start and an end token to the sentence # so that the model know when to start and stop predicting. w = '<start> ' + w + ' <end>' return w en_sentence = u"May I borrow this book?" 
sp_sentence = u"¿Puedo tomar prestado este libro?" print(preprocess_sentence(en_sentence)) print(preprocess_sentence(sp_sentence)) ``` Training on the complete dataset of >100,000 sentences will take a long time. To train faster, we can limit the size of the dataset (of course, translation quality degrades with less data). ``` def load_anki_data(num_examples=None): # Download the file path_to_zip = tf.keras.utils.get_file( 'spa-eng.zip', origin='http://download.tensorflow.org/data/spa-eng.zip', extract=True) path_to_file = os.path.dirname(path_to_zip) + '/spa-eng/spa.txt' with io.open(path_to_file, 'rb') as f: lines = f.read().decode('utf8').strip().split('\n') # Data comes as tab-separated strings; one per line. eng_spa_pairs = [[preprocess_sentence(w) for w in line.split('\t')] for line in lines] # The translations file is ordered from shortest to longest, so slicing from # the front will select the shorter examples. This also speeds up training. if num_examples is not None: eng_spa_pairs = eng_spa_pairs[:num_examples] eng_sentences, spa_sentences = zip(*eng_spa_pairs) eng_tokenizer = tf.keras.preprocessing.text.Tokenizer(filters='') spa_tokenizer = tf.keras.preprocessing.text.Tokenizer(filters='') eng_tokenizer.fit_on_texts(eng_sentences) spa_tokenizer.fit_on_texts(spa_sentences) return (eng_spa_pairs, eng_tokenizer, spa_tokenizer) NUM_EXAMPLES = 30000 sentence_pairs, english_tokenizer, spanish_tokenizer = load_anki_data(NUM_EXAMPLES) # Turn our english/spanish pairs into TF Datasets by mapping words -> integers. def make_dataset(eng_spa_pairs, eng_tokenizer, spa_tokenizer): eng_sentences, spa_sentences = zip(*eng_spa_pairs) eng_ints = eng_tokenizer.texts_to_sequences(eng_sentences) spa_ints = spa_tokenizer.texts_to_sequences(spa_sentences) padded_eng_ints = tf.keras.preprocessing.sequence.pad_sequences( eng_ints, padding='post') padded_spa_ints = tf.keras.preprocessing.sequence.pad_sequences( spa_ints, padding='post') dataset = tf.data.Dataset.from_tensor_slices((padded_eng_ints, padded_spa_ints)) return dataset # Train/test split train_size = int(len(sentence_pairs) * 0.8) random.shuffle(sentence_pairs) train_sentence_pairs, test_sentence_pairs = sentence_pairs[:train_size], sentence_pairs[train_size:] # Show length len(train_sentence_pairs), len(test_sentence_pairs) _english, _spanish = train_sentence_pairs[0] _eng_ints, _spa_ints = english_tokenizer.texts_to_sequences([_english])[0], spanish_tokenizer.texts_to_sequences([_spanish])[0] print("Source language: ") print('\n'.join('{:4d} ----> {}'.format(i, word) for i, word in zip(_eng_ints, _english.split()))) print("Target language: ") print('\n'.join('{:4d} ----> {}'.format(i, word) for i, word in zip(_spa_ints, _spanish.split()))) # Set up datasets BATCH_SIZE = 64 train_ds = make_dataset(train_sentence_pairs, english_tokenizer, spanish_tokenizer) test_ds = make_dataset(test_sentence_pairs, english_tokenizer, spanish_tokenizer) train_ds = train_ds.shuffle(len(train_sentence_pairs)).batch(BATCH_SIZE, drop_remainder=True) test_ds = test_ds.batch(BATCH_SIZE, drop_remainder=True) print("Dataset outputs elements with shape ({}, {})".format( *train_ds.output_shapes)) ``` ## Write the encoder and decoder model Here, we'll implement an encoder-decoder model with attention. The following diagram shows that each input word is assigned a weight by the attention mechanism which is then used by the decoder to predict the next word in the sentence. 
<img src="https://www.tensorflow.org/images/seq2seq/attention_mechanism.jpg" width="500" alt="attention mechanism"> The input is put through an encoder model which gives us the encoder output of shape *(batch_size, max_length, hidden_size)* and the encoder hidden state of shape *(batch_size, hidden_size)*. ``` ENCODER_SIZE = DECODER_SIZE = 1024 EMBEDDING_DIM = 256 MAX_OUTPUT_LENGTH = train_ds.output_shapes[1][1] def gru(units): return tf.keras.layers.GRU(units, return_sequences=True, return_state=True, recurrent_activation='sigmoid', recurrent_initializer='glorot_uniform') class Encoder(tf.keras.Model): def __init__(self, vocab_size, embedding_dim, encoder_size): super(Encoder, self).__init__() self.embedding_dim = embedding_dim self.encoder_size = encoder_size self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim) self.gru = gru(encoder_size) def call(self, x, hidden): x = self.embedding(x) output, state = self.gru(x, initial_state=hidden) return output, state def initial_hidden_state(self, batch_size): return tf.zeros((batch_size, self.encoder_size)) ``` For the decoder, we're using *Bahdanau attention*. Here are the equations that are implemented: <img src="https://www.tensorflow.org/images/seq2seq/attention_equation_0.jpg" alt="attention equation 0" width="800"> <img src="https://www.tensorflow.org/images/seq2seq/attention_equation_1.jpg" alt="attention equation 1" width="800"> Lets decide on notation before writing the simplified form: * FC = Fully connected (dense) layer * EO = Encoder output * H = hidden state * X = input to the decoder And the pseudo-code: * `score = FC(tanh(FC(EO) + FC(H)))` * `attention weights = softmax(score, axis = 1)`. Softmax by default is applied on the last axis but here we want to apply it on the *1st axis*, since the shape of score is *(batch_size, max_length, hidden_size)*. `Max_length` is the length of our input. Since we are trying to assign a weight to each input, softmax should be applied on that axis. * `context vector = sum(attention weights * EO, axis = 1)`. Same reason as above for choosing axis as 1. * `embedding output` = The input to the decoder X is passed through an embedding layer. 
* `merged vector = concat(embedding output, context vector)` * This merged vector is then given to the GRU The shapes of all the vectors at each step have been specified in the comments in the code: ``` class BahdanauAttention(tf.keras.Model): def __init__(self, units): super(BahdanauAttention, self).__init__() self.W1 = tf.keras.layers.Dense(units) self.W2 = tf.keras.layers.Dense(units) self.V = tf.keras.layers.Dense(1) def call(self, hidden_state, enc_output): # enc_output shape = (batch_size, max_length, hidden_size) # (batch_size, hidden_size) -> (batch_size, 1, hidden_size) hidden_with_time = tf.expand_dims(hidden_state, 1) # score shape == (batch_size, max_length, 1) score = self.V(tf.nn.tanh(self.W1(enc_output) + self.W2(hidden_with_time))) # attention_weights shape == (batch_size, max_length, 1) attention_weights = tf.nn.softmax(score, axis=1) # context_vector shape after sum = (batch_size, hidden_size) context_vector = attention_weights * enc_output context_vector = tf.reduce_sum(context_vector, axis=1) return context_vector, attention_weights class Decoder(tf.keras.Model): def __init__(self, vocab_size, embedding_dim, decoder_size): super(Decoder, self).__init__() self.vocab_size = vocab_size self.embedding_dim = embedding_dim self.decoder_size = decoder_size self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim) self.gru = gru(decoder_size) self.fc = tf.keras.layers.Dense(vocab_size) self.attention = BahdanauAttention(decoder_size) def call(self, x, hidden, enc_output): context_vector, attention_weights = self.attention(hidden, enc_output) # x shape after passing through embedding == (batch_size, 1, embedding_dim) x = self.embedding(x) # x shape after concatenation == (batch_size, 1, embedding_dim + hidden_size) x = tf.concat([tf.expand_dims(context_vector, 1), x], axis=-1) # passing the concatenated vector to the GRU output, state = self.gru(x) # output shape == (batch_size, hidden_size) output = tf.reshape(output, (-1, output.shape[2])) # output shape == (batch_size, vocab) x = self.fc(output) return x, state, attention_weights ``` ## Define a translate function Now, let's put the encoder and decoder halves together. The encoder step is fairly straightforward; we'll just reuse Keras's dynamic unroll. For the decoder, we have to make some choices about how to feed the decoder RNN. Overall the process goes as follows: 1. Pass the *input* through the *encoder* which return *encoder output* and the *encoder hidden state*. 2. The encoder output, encoder hidden state and the &lt;START&gt; token is passed to the decoder. 3. The decoder returns the *predictions* and the *decoder hidden state*. 4. The encoder output, hidden state and next token is then fed back into the decoder repeatedly. This has two different behaviors under training and inference: - during training, we use *teacher forcing*, where the correct next token is fed into the decoder, regardless of what the decoder emitted. - during inference, we use `tf.argmax(predictions)` to select the most likely continuation and feed it back into the decoder. Another strategy that yields more robust results is called *beam search*. 5. Repeat step 4 until either the decoder emits an &lt;END&gt; token, indicating that it's done translating, or we run into a hardcoded length limit. 
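As a minimal, self-contained sketch of the two feeding strategies in step 4 (the tensors here are hypothetical stand-ins; the `NmtTranslator` class below is the actual implementation):

```
import tensorflow as tf

# Stand-in shapes/tensors, only so the snippet runs on its own:
# `predictions` plays the role of the decoder logits at step i,
# `target` the ground-truth token ids for the batch.
batch_size, vocab_size, max_len = 4, 100, 10
predictions = tf.random.uniform((batch_size, vocab_size))
target = tf.random.uniform((batch_size, max_len), minval=0, maxval=vocab_size, dtype=tf.int32)
i = 0  # current decoding step

training = True  # teacher forcing during training, greedy decoding at inference
if training:
    # Teacher forcing: feed the *correct* next token, regardless of the prediction.
    dec_input = target[:, i + 1]
else:
    # Greedy decoding: feed back the decoder's most likely token.
    dec_input = tf.argmax(predictions, axis=1, output_type=tf.int32)
```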
``` class NmtTranslator(tf.keras.Model): def __init__(self, encoder, decoder, start_token_id, end_token_id): super(NmtTranslator, self).__init__() self.encoder = encoder self.decoder = decoder # (The token_id should match the decoder's language.) # Uses start_token_id to initialize the decoder. self.start_token_id = tf.constant(start_token_id) # Check for sequence completion using this token_id self.end_token_id = tf.constant(end_token_id) @tf.function def call(self, inp, target=None, max_output_length=MAX_OUTPUT_LENGTH): '''Translate an input. If target is provided, teacher forcing is used to generate the translation. ''' batch_size = inp.shape[0] hidden = self.encoder.initial_hidden_state(batch_size) enc_output, enc_hidden = self.encoder(inp, hidden) dec_hidden = enc_hidden if target is not None: output_length = target.shape[1] else: output_length = max_output_length predictions_array = tf.TensorArray(tf.float32, size=output_length - 1) attention_array = tf.TensorArray(tf.float32, size=output_length - 1) # Feed <START> token to start decoder. dec_input = tf.cast([self.start_token_id] * batch_size, tf.int32) # Keep track of which sequences have emitted an <END> token is_done = tf.zeros([batch_size], dtype=tf.bool) for i in tf.range(output_length - 1): dec_input = tf.expand_dims(dec_input, 1) predictions, dec_hidden, attention_weights = self.decoder(dec_input, dec_hidden, enc_output) predictions = tf.where(is_done, tf.zeros_like(predictions), predictions) # Write predictions/attention for later visualization. predictions_array = predictions_array.write(i, predictions) attention_array = attention_array.write(i, attention_weights) # Decide what to pass into the next iteration of the decoder. if target is not None: # if target is known, use teacher forcing dec_input = target[:, i + 1] else: # Otherwise, pick the most likely continuation dec_input = tf.argmax(predictions, axis=1, output_type=tf.int32) # Figure out which sentences just completed. is_done = tf.logical_or(is_done, tf.equal(dec_input, self.end_token_id)) # Exit early if all our sentences are done. if tf.reduce_all(is_done): break # [time, batch, predictions] -> [batch, time, predictions] return tf.transpose(predictions_array.stack(), [1, 0, 2]), tf.transpose(attention_array.stack(), [1, 0, 2, 3]) ``` ## Define the loss function Our loss function is a word-for-word comparison between true answer and model prediction. real = [<start>, 'This', 'is', 'the', 'correct', 'answer', '.', '<end>', '<oov>'] pred = ['This', 'is', 'what', 'the', 'model', 'emitted', '.', '<end>'] results in comparing This/This, is/is, the/what, correct/the, answer/model, ./emitted, <end>/. and ignoring the rest of the prediction. ``` def loss_fn(real, pred): # The prediction doesn't include the <start> token. real = real[:, 1:] # Cut down the prediction to the correct shape (We ignore extra words). pred = pred[:, :real.shape[1]] # If real == <OOV>, then mask out the loss. mask = 1 - np.equal(real, 0) loss_ = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=real, logits=pred) * mask # Sum loss over the time dimension, but average it over the batch dimension. return tf.reduce_mean(tf.reduce_sum(loss_, axis=1)) ``` ## Configure model directory We'll use one directory to save all of our relevant artifacts (summary logs, checkpoints, SavedModel exports, etc.) ``` # Where to save checkpoints, tensorboard summaries, etc. 
MODEL_DIR = '/tmp/tensorflow/nmt_attention' def apply_clean(): if tf.io.gfile.exists(MODEL_DIR): print('Removing existing model dir: {}'.format(MODEL_DIR)) tf.io.gfile.rmtree(MODEL_DIR) # Optional: remove existing data apply_clean() # Summary writers train_summary_writer = tf.summary.create_file_writer( os.path.join(MODEL_DIR, 'summaries', 'train'), flush_millis=10000) test_summary_writer = tf.summary.create_file_writer( os.path.join(MODEL_DIR, 'summaries', 'eval'), flush_millis=10000, name='test') # Set up all stateful objects encoder = Encoder(len(english_tokenizer.word_index) + 1, EMBEDDING_DIM, ENCODER_SIZE) decoder = Decoder(len(spanish_tokenizer.word_index) + 1, EMBEDDING_DIM, DECODER_SIZE) start_token_id = spanish_tokenizer.word_index[START_TOKEN] end_token_id = spanish_tokenizer.word_index[END_TOKEN] model = NmtTranslator(encoder, decoder, start_token_id, end_token_id) # TODO(brianklee): Investigate whether Adam defaults have changed and whether it affects training. optimizer = tf.keras.optimizers.Adam(epsilon=1e-8)# tf.keras.optimizers.SGD(learning_rate=0.01)#Adam() # Checkpoints checkpoint_dir = os.path.join(MODEL_DIR, 'checkpoints') checkpoint_prefix = os.path.join(checkpoint_dir, 'ckpt') checkpoint = tf.train.Checkpoint( encoder=encoder, decoder=decoder, optimizer=optimizer) # Restore variables on creation if a checkpoint exists. checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir)) # SavedModel exports export_path = os.path.join(MODEL_DIR, 'export') ``` # Visualize the model's output Let's visualize our model's output. (It hasn't been trained yet, so it will output gibberish.) We'll use this visualization to check on the model's progress. ``` def plot_attention(attention, sentence, predicted_sentence): fig = plt.figure(figsize=(10,10)) ax = fig.add_subplot(1, 1, 1) ax.matshow(attention, cmap='viridis') fontdict = {'fontsize': 14} ax.set_xticklabels([''] + sentence.split(), fontdict=fontdict, rotation=90) ax.set_yticklabels([''] + predicted_sentence.split(), fontdict=fontdict) ax.xaxis.set_major_locator(ticker.MultipleLocator(1)) ax.yaxis.set_major_locator(ticker.MultipleLocator(1)) plt.show() def ints_to_words(tokenizer, ints): return ' '.join(tokenizer.index_word[int(i)] if int(i) != 0 else '<OOV>' for i in ints) def sentence_to_ints(tokenizer, sentence): sentence = preprocess_sentence(sentence) return tf.constant(tokenizer.texts_to_sequences([sentence])[0]) def translate_and_plot_ints(model, english_tokenizer, spanish_tokenizer, ints, target_ints=None): """Run translation on a sentence and plot an attention matrix. Sentence should be passed in as list of integers. 
""" ints = tf.expand_dims(ints, 0) predictions, attention = model(ints) prediction_ids = tf.squeeze(tf.argmax(predictions, axis=-1)) attention = tf.squeeze(attention) sentence = ints_to_words(english_tokenizer, ints[0]) predicted_sentence = ints_to_words(spanish_tokenizer, prediction_ids) print(u'Input: {}'.format(sentence)) print(u'Predicted translation: {}'.format(predicted_sentence)) if target_ints is not None: print(u'Correct translation: {}'.format(ints_to_words(spanish_tokenizer, target_ints))) plot_attention(attention, sentence, predicted_sentence) def translate_and_plot_words(model, english_tokenizer, spanish_tokenizer, sentence, target_sentence=None): """Same as translate_and_plot_ints, but pass in a sentence as a string.""" english_ints = sentence_to_ints(english_tokenizer, sentence) spanish_ints = sentence_to_ints(spanish_tokenizer, target_sentence) if target_sentence is not None else None translate_and_plot_ints(model, english_tokenizer, spanish_tokenizer, english_ints, target_ints=spanish_ints) translate_and_plot_words(model, english_tokenizer, spanish_tokenizer, u"it's really cold here", u'hace mucho frio aqui') ``` # Train the model ``` def train(model, optimizer, dataset): """Trains model on `dataset` using `optimizer`.""" start = time.time() avg_loss = tf.keras.metrics.Mean('loss', dtype=tf.float32) for inp, target in dataset: with tf.GradientTape() as tape: predictions, _ = model(inp, target=target) loss = loss_fn(target, predictions) avg_loss(loss) gradients = tape.gradient(loss, model.trainable_variables) optimizer.apply_gradients(zip(gradients, model.trainable_variables)) if tf.equal(optimizer.iterations % 10, 0): tf.summary.scalar('loss', avg_loss.result(), step=optimizer.iterations) avg_loss.reset_states() rate = 10 / (time.time() - start) print('Step #%d\tLoss: %.6f (%.2f steps/sec)' % (optimizer.iterations, loss, rate)) start = time.time() if tf.equal(optimizer.iterations % 100, 0): # translate_and_plot_words(model, english_index, spanish_index, u"it's really cold here.", u'hace mucho frio aqui.') translate_and_plot_ints(model, english_tokenizer, spanish_tokenizer, inp[0], target[0]) def test(model, dataset, step_num): """Perform an evaluation of `model` on the examples from `dataset`.""" avg_loss = tf.keras.metrics.Mean('loss', dtype=tf.float32) for inp, target in dataset: predictions, _ = model(inp) loss = loss_fn(target, predictions) avg_loss(loss) print('Model test set loss: {:0.4f}'.format(avg_loss.result())) tf.summary.scalar('loss', avg_loss.result(), step=step_num) NUM_TRAIN_EPOCHS = 10 for i in range(NUM_TRAIN_EPOCHS): start = time.time() with train_summary_writer.as_default(): train(model, optimizer, train_ds) end = time.time() print('\nTrain time for epoch #{} ({} total steps): {}'.format( i + 1, optimizer.iterations, end - start)) with test_summary_writer.as_default(): test(model, test_ds, optimizer.iterations) checkpoint.save(checkpoint_prefix) # TODO(brianklee): This seems to be complaining about input shapes not being set? # tf.saved_model.save(model, export_path) ``` ## Next steps * [Download a different dataset](http://www.manythings.org/anki/) to experiment with translations, for example, English to German, or English to French. * Experiment with training on a larger dataset, or using more epochs ``` ```
# Implementing TF-IDF ------------------------------------ Here we implement TF-IDF, (Text Frequency - Inverse Document Frequency) for the spam-ham text data. We will use a hybrid approach of encoding the texts with sci-kit learn's TFIDF vectorizer. Then we will use the regular TensorFlow logistic algorithm outline. Creating the TF-IDF vectors requires us to load all the text into memory and count the occurrences of each word before we can start training our model. Because of this, it is not implemented fully in Tensorflow, so we will use Scikit-learn for creating our TF-IDF embedding, but use Tensorflow to fit the logistic model. We start by loading the necessary libraries. ``` import tensorflow as tf import matplotlib.pyplot as plt import csv import numpy as np import os import string import requests import io import nltk from zipfile import ZipFile from sklearn.feature_extraction.text import TfidfVectorizer from tensorflow.python.framework import ops ops.reset_default_graph() ``` Start a computational graph session. ``` sess = tf.Session() ``` We set two parameters, `batch_size` and `max_features`. `batch_size` is the size of the batch we will train our logistic model on, and `max_features` is the maximum number of tf-idf textual words we will use in our logistic regression. ``` batch_size = 200 max_features = 1000 ``` Check if data was downloaded, otherwise download it and save for future use ``` save_file_name = 'temp_spam_data.csv' if os.path.isfile(save_file_name): text_data = [] with open(save_file_name, 'r') as temp_output_file: reader = csv.reader(temp_output_file) for row in reader: text_data.append(row) else: zip_url = 'http://archive.ics.uci.edu/ml/machine-learning-databases/00228/smsspamcollection.zip' r = requests.get(zip_url) z = ZipFile(io.BytesIO(r.content)) file = z.read('SMSSpamCollection') # Format Data text_data = file.decode() text_data = text_data.encode('ascii',errors='ignore') text_data = text_data.decode().split('\n') text_data = [x.split('\t') for x in text_data if len(x)>=1] # And write to csv with open(save_file_name, 'w') as temp_output_file: writer = csv.writer(temp_output_file) writer.writerows(text_data) ``` We now clean our texts. This will decrease our vocabulary size by converting everything to lower case, removing punctuation and getting rid of numbers. ``` texts = [x[1] for x in text_data] target = [x[0] for x in text_data] # Relabel 'spam' as 1, 'ham' as 0 target = [1. if x=='spam' else 0. for x in target] # Normalize text # Lower case texts = [x.lower() for x in texts] # Remove punctuation texts = [''.join(c for c in x if c not in string.punctuation) for x in texts] # Remove numbers texts = [''.join(c for c in x if c not in '0123456789') for x in texts] # Trim extra whitespace texts = [' '.join(x.split()) for x in texts] ``` Define tokenizer function and create the TF-IDF vectors with SciKit-Learn. ``` import nltk nltk.download('punkt') def tokenizer(text): words = nltk.word_tokenize(text) return words # Create TF-IDF of texts tfidf = TfidfVectorizer(tokenizer=tokenizer, stop_words='english', max_features=max_features) sparse_tfidf_texts = tfidf.fit_transform(texts) ``` Split up data set into train/test. 
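Before the split, it can help to sanity-check what the vectorizer produced; a minimal sketch, assuming the `tfidf` and `sparse_tfidf_texts` objects created above (the exact numbers and tokens depend on the downloaded data):

```
# Quick look at the fitted TF-IDF representation
print(sparse_tfidf_texts.shape)        # (number of texts, max_features)
print(len(tfidf.vocabulary_))          # number of retained terms (<= max_features)
print(sorted(tfidf.vocabulary_)[:10])  # a few of the retained tokens
```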
``` train_indices = np.random.choice(sparse_tfidf_texts.shape[0], round(0.8*sparse_tfidf_texts.shape[0]), replace=False) test_indices = np.array(list(set(range(sparse_tfidf_texts.shape[0])) - set(train_indices))) texts_train = sparse_tfidf_texts[train_indices] texts_test = sparse_tfidf_texts[test_indices] target_train = np.array([x for ix, x in enumerate(target) if ix in train_indices]) target_test = np.array([x for ix, x in enumerate(target) if ix in test_indices]) ``` Now we create the variables and placeholders necessary for logistic regression. After which, we declare our logistic regression operation. Remember that the sigmoid part of the logistic regression will be in the loss function. ``` # Create variables for logistic regression A = tf.Variable(tf.random_normal(shape=[max_features,1])) b = tf.Variable(tf.random_normal(shape=[1,1])) # Initialize placeholders x_data = tf.placeholder(shape=[None, max_features], dtype=tf.float32) y_target = tf.placeholder(shape=[None, 1], dtype=tf.float32) # Declare logistic model (sigmoid in loss function) model_output = tf.add(tf.matmul(x_data, A), b) ``` Next, we declare the loss function (which has the sigmoid in it), and the prediction function. The prediction function will have to have a sigmoid inside of it because it is not in the model output. ``` # Declare loss function (Cross Entropy loss) loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=model_output, labels=y_target)) # Prediction prediction = tf.round(tf.sigmoid(model_output)) predictions_correct = tf.cast(tf.equal(prediction, y_target), tf.float32) accuracy = tf.reduce_mean(predictions_correct) ``` Now we create the optimization function and initialize the model variables. ``` # Declare optimizer my_opt = tf.train.GradientDescentOptimizer(0.0025) train_step = my_opt.minimize(loss) # Intitialize Variables init = tf.global_variables_initializer() sess.run(init) ``` Finally, we perform our logisitic regression on the 1000 TF-IDF features. ``` train_loss = [] test_loss = [] train_acc = [] test_acc = [] i_data = [] for i in range(10000): rand_index = np.random.choice(texts_train.shape[0], size=batch_size) rand_x = texts_train[rand_index].todense() rand_y = np.transpose([target_train[rand_index]]) sess.run(train_step, feed_dict={x_data: rand_x, y_target: rand_y}) # Only record loss and accuracy every 100 generations if (i+1)%100==0: i_data.append(i+1) train_loss_temp = sess.run(loss, feed_dict={x_data: rand_x, y_target: rand_y}) train_loss.append(train_loss_temp) test_loss_temp = sess.run(loss, feed_dict={x_data: texts_test.todense(), y_target: np.transpose([target_test])}) test_loss.append(test_loss_temp) train_acc_temp = sess.run(accuracy, feed_dict={x_data: rand_x, y_target: rand_y}) train_acc.append(train_acc_temp) test_acc_temp = sess.run(accuracy, feed_dict={x_data: texts_test.todense(), y_target: np.transpose([target_test])}) test_acc.append(test_acc_temp) if (i+1)%500==0: acc_and_loss = [i+1, train_loss_temp, test_loss_temp, train_acc_temp, test_acc_temp] acc_and_loss = [np.round(x,2) for x in acc_and_loss] print('Generation # {}. Train Loss (Test Loss): {:.2f} ({:.2f}). Train Acc (Test Acc): {:.2f} ({:.2f})'.format(*acc_and_loss)) ``` Here is matplotlib code to plot the loss and accuracies. 
```
# Plot loss over time
plt.plot(i_data, train_loss, 'k-', label='Train Loss')
plt.plot(i_data, test_loss, 'r--', label='Test Loss', linewidth=4)
plt.title('Cross Entropy Loss per Generation')
plt.xlabel('Generation')
plt.ylabel('Cross Entropy Loss')
plt.legend(loc='upper right')
plt.show()

# Plot train and test accuracy
plt.plot(i_data, train_acc, 'k-', label='Train Set Accuracy')
plt.plot(i_data, test_acc, 'r--', label='Test Set Accuracy', linewidth=4)
plt.title('Train and Test Accuracy')
plt.xlabel('Generation')
plt.ylabel('Accuracy')
plt.legend(loc='lower right')
plt.show()
```
# Mixture Density Networks with Edward, Keras and TensorFlow This notebook explains how to implement Mixture Density Networks (MDN) with Edward, Keras and TensorFlow. Keep in mind that if you want to use Keras and TensorFlow, like we do in this notebook, you need to set the backend of Keras to TensorFlow, [here](http://keras.io/backend/) it is explained how to do that. In you are not familiar with MDNs have a look at the [following blog post](http://cbonnett.github.io/MDN.html) or at orginal [paper](http://research.microsoft.com/en-us/um/people/cmbishop/downloads/Bishop-NCRG-94-004.pdf) by Bishop. Edward implements many probability distribution functions that are TensorFlow compatible, this makes it attractive to use Edward for MDNs. Here are all the distributions that are currently implemented in Edward, there are more to come: 1. [Bernoulli](https://github.com/blei-lab/edward/blob/master/edward/stats/distributions.py#L49) 2. [Beta](https://github.com/blei-lab/edward/blob/master/edward/stats/distributions.py#L58) 3. [Binomial](https://github.com/blei-lab/edward/blob/master/edward/stats/distributions.py#L68) 4. [Chi Squared](https://github.com/blei-lab/edward/blob/master/edward/stats/distributions.py#L79) 5. [Dirichlet](https://github.com/blei-lab/edward/blob/master/edward/stats/distributions.py#L89) 6. [Exponential](https://github.com/blei-lab/edward/blob/master/edward/stats/distributions.py#L109) 7. [Gamma](https://github.com/blei-lab/edward/blob/master/edward/stats/distributions.py#L118) 8. [Geometric](https://github.com/blei-lab/edward/blob/master/edward/stats/distributions.py#L129) 9. [Inverse Gamma](https://github.com/blei-lab/edward/blob/master/edward/stats/distributions.py#L138) 10. [log Normal](https://github.com/blei-lab/edward/blob/master/edward/stats/distributions.py#L155) 11. [Multinomial](https://github.com/blei-lab/edward/blob/master/edward/stats/distributions.py#L165) 12. [Multivariate Normal](https://github.com/blei-lab/edward/blob/master/edward/stats/distributions.py#L194) 13. [Negative Binomial](https://github.com/blei-lab/edward/blob/master/edward/stats/distributions.py#L283) 14. [Normal](https://github.com/blei-lab/edward/blob/master/edward/stats/distributions.py#L294) 15. [Poisson](https://github.com/blei-lab/edward/blob/master/edward/stats/distributions.py#L310) 16. [Student-t](https://github.com/blei-lab/edward/blob/master/edward/stats/distributions.py#L319) 17. [Truncated Normal](https://github.com/blei-lab/edward/blob/master/edward/stats/distributions.py#L333) 18. [Uniform](https://github.com/blei-lab/edward/blob/master/edward/stats/distributions.py#L352) Let's start with the necessary imports. ``` # imports %matplotlib inline import matplotlib.pyplot as plt import seaborn as sns import edward as ed import numpy as np import tensorflow as tf from edward.stats import norm # Normal distribution from Edward. from keras import backend as K from keras.layers import Dense from sklearn.cross_validation import train_test_split ``` We will need some functions to plot the results later on, these are defined in the next code block. 
``` from scipy.stats import norm as normal def plot_normal_mix(pis, mus, sigmas, ax, label='', comp=True): """ Plots the mixture of Normal models to axis=ax comp=True plots all components of mixture model """ x = np.linspace(-10.5, 10.5, 250) final = np.zeros_like(x) for i, (weight_mix, mu_mix, sigma_mix) in enumerate(zip(pis, mus, sigmas)): temp = normal.pdf(x, mu_mix, sigma_mix) * weight_mix final = final + temp if comp: ax.plot(x, temp, label='Normal ' + str(i)) ax.plot(x, final, label='Mixture of Normals ' + label) ax.legend(fontsize=13) def sample_from_mixture(x, pred_weights, pred_means, pred_std, amount): """ Draws samples from mixture model. Returns 2 d array with input X and sample from prediction of Mixture Model """ samples = np.zeros((amount, 2)) n_mix = len(pred_weights[0]) to_choose_from = np.arange(n_mix) for j,(weights, means, std_devs) in enumerate(zip(pred_weights, pred_means, pred_std)): index = np.random.choice(to_choose_from, p=weights) samples[j,1]= normal.rvs(means[index], std_devs[index], size=1) samples[j,0]= x[j] if j == amount -1: break return samples ``` ## Making some toy-data to play with. This is the same toy-data problem set as used in the [blog post](http://blog.otoro.net/2015/11/24/mixture-density-networks-with-tensorflow/) by Otoro where he explains MDNs. This is an inverse problem as you can see, for every ```X``` there are multiple ```y``` solutions. ``` def build_toy_dataset(nsample=40000): y_data = np.float32(np.random.uniform(-10.5, 10.5, (1, nsample))).T r_data = np.float32(np.random.normal(size=(nsample, 1))) # random noise x_data = np.float32(np.sin(0.75 * y_data) * 7.0 + y_data * 0.5 + r_data * 1.0) return train_test_split(x_data, y_data, random_state=42, train_size=0.1) X_train, X_test, y_train, y_test = build_toy_dataset() print("Size of features in training data: {:s}".format(X_train.shape)) print("Size of output in training data: {:s}".format(y_train.shape)) print("Size of features in test data: {:s}".format(X_test.shape)) print("Size of output in test data: {:s}".format(y_test.shape)) sns.regplot(X_train, y_train, fit_reg=False) ``` ### Building a MDN using Edward, Keras and TF We will define a class that can be used to construct MDNs. In this notebook we will be using a mixture of Normal Distributions. The advantage of defining a class is that we can easily reuse this to build other MDNs with different amount of mixture components. Furthermore, this makes it play nicely with Edward. ``` class MixtureDensityNetwork: """ Mixture density network for outputs y on inputs x. p((x,y), (z,theta)) = sum_{k=1}^K pi_k(x; theta) Normal(y; mu_k(x; theta), sigma_k(x; theta)) where pi, mu, sigma are the output of a neural network taking x as input and with parameters theta. There are no latent variables z, which are hidden variables we aim to be Bayesian about. """ def __init__(self, K): self.K = K # here K is the amount of Mixtures def mapping(self, X): """pi, mu, sigma = NN(x; theta)""" hidden1 = Dense(15, activation='relu')(X) # fully-connected layer with 15 hidden units hidden2 = Dense(15, activation='relu')(hidden1) self.mus = Dense(self.K)(hidden2) # the means self.sigmas = Dense(self.K, activation=K.exp)(hidden2) # the variance self.pi = Dense(self.K, activation=K.softmax)(hidden2) # the mixture components def log_prob(self, xs, zs=None): """log p((xs,ys), (z,theta)) = sum_{n=1}^N log p((xs[n,:],ys[n]), theta)""" # Note there are no parameters we're being Bayesian about. The # parameters are baked into how we specify the neural networks. 
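        # The lines below evaluate the mixture likelihood
        #   sum_k pi_k(x) * Normal(y; mu_k(x), sigma_k(x))
        # for each data point, take its log, and sum over all points,
        # giving the total log-likelihood.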
X, y = xs self.mapping(X) result = tf.exp(norm.logpdf(y, self.mus, self.sigmas)) result = tf.mul(result, self.pi) result = tf.reduce_sum(result, 1) result = tf.log(result) return tf.reduce_sum(result) ``` We can set a seed in Edward so we can reproduce all the random components. The following line: ```ed.set_seed(42)``` sets the seed in Numpy and TensorFlow under the [hood](https://github.com/blei-lab/edward/blob/master/edward/util.py#L191). We use the class we defined above to initiate the MDN with 20 mixtures, this now can be used as an Edward model. ``` ed.set_seed(42) model = MixtureDensityNetwork(20) ``` In the following code cell we define the TensorFlow placeholders that are then used to define the Edward data model. The following line passes the ```model``` and ```data``` to ```MAP``` from Edward which is then used to initialise the TensorFlow variables. ```inference = ed.MAP(model, data)``` MAP is a Bayesian concept and stands for Maximum A Posteriori, it tries to find the set of parameters which maximizes the posterior distribution. In the example here we don't have a prior, in a Bayesian context this means we have a flat prior. For a flat prior MAP is equivalent to Maximum Likelihood Estimation. Edward is designed to be Bayesian about its statistical inference. The cool thing about MDN's with Edward is that we could easily include priors! ``` X = tf.placeholder(tf.float32, shape=(None, 1)) y = tf.placeholder(tf.float32, shape=(None, 1)) data = ed.Data([X, y]) # Make Edward Data model inference = ed.MAP(model, data) # Make the inference model sess = tf.Session() # Start TF session K.set_session(sess) # Pass session info to Keras inference.initialize(sess=sess) # Initialize all TF variables using the Edward interface ``` Having done that we can train the MDN in TensorFlow just like we normally would, and we can get out the predictions we are interested in from ```model```, in this case: * ```model.pi``` the mixture components, * ```model.mus``` the means, * ```model.sigmas``` the standard deviations. This is done in the last line of the code cell : ``` pred_weights, pred_means, pred_std = sess.run([model.pi, model.mus, model.sigmas], feed_dict={X: X_test}) ``` The default minimisation technique used is ADAM with a decaying scale factor. This can be seen [here](https://github.com/blei-lab/edward/blob/master/edward/inferences.py#L94) in the code base of Edward. Having a decaying scale factor is not the standard way of using ADAM, this is inspired by the Automatic Differentiation Variational Inference [(ADVI)](http://arxiv.org/abs/1603.00788) work where it was used in the RMSPROP minimizer. The loss that is minimised in the ```MAP``` model from Edward is the negative log-likelihood, this calculation uses the ```log_prob``` method in the ```MixtureDensityNetwork``` class we defined above. The ```build_loss``` method in the ```MAP``` class can be found [here](https://github.com/blei-lab/edward/blob/master/edward/inferences.py#L396). However the method ```inference.loss``` used below, returns the log-likelihood, so we expect this quantity to be maximized. 
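To make the flat-prior remark above concrete, the relationship can be written as

$$\hat{\theta}_{\mathrm{MAP}} = \arg\max_{\theta}\, p(\theta \mid \mathcal{D}) = \arg\max_{\theta}\, p(\mathcal{D} \mid \theta)\,p(\theta),$$

and with a flat prior $p(\theta) \propto 1$ this reduces to $\arg\max_{\theta} p(\mathcal{D} \mid \theta)$, i.e. the maximum likelihood estimate. That is why minimizing the negative log-likelihood here, with no prior specified, amounts to plain maximum likelihood estimation.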
```
NEPOCH = 1000
train_loss = np.zeros(NEPOCH)
test_loss = np.zeros(NEPOCH)
for i in range(NEPOCH):
    _, train_loss[i] = sess.run([inference.train, inference.loss],
                                feed_dict={X: X_train, y: y_train})
    test_loss[i] = sess.run(inference.loss, feed_dict={X: X_test, y: y_test})

pred_weights, pred_means, pred_std = sess.run(
    [model.pi, model.mus, model.sigmas], feed_dict={X: X_test})
```

We can plot the log-likelihood of the training and test sample as a function of training epoch. Keep in mind that ```inference.loss``` returns the total log-likelihood, not the loss per data point, so in the plotting routine we divide by the size of the train and test data respectively. We see that it converges after about 400 training steps.

```
fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(16, 3.5))
plt.plot(np.arange(NEPOCH), test_loss/len(X_test), label='Test')
plt.plot(np.arange(NEPOCH), train_loss/len(X_train), label='Train')
plt.legend(fontsize=20)
plt.xlabel('Epoch', fontsize=15)
plt.ylabel('Log-likelihood', fontsize=15)
```

Next we can have a look at how some individual examples perform. Keep in mind this is an inverse problem, so we cannot expect a single correct answer; we can only hope that the truth lies in a region where the model assigns high probability. In the next plot the truth is the vertical grey line while the blue line is the prediction of the mixture density network. As you can see, we didn't do too badly.

```
obj = [0, 4, 6]
fig, axes = plt.subplots(nrows=3, ncols=1, figsize=(16, 6))

plot_normal_mix(pred_weights[obj][0], pred_means[obj][0], pred_std[obj][0], axes[0], comp=False)
axes[0].axvline(x=y_test[obj][0], color='black', alpha=0.5)

plot_normal_mix(pred_weights[obj][2], pred_means[obj][2], pred_std[obj][2], axes[1], comp=False)
axes[1].axvline(x=y_test[obj][2], color='black', alpha=0.5)

plot_normal_mix(pred_weights[obj][1], pred_means[obj][1], pred_std[obj][1], axes[2], comp=False)
axes[2].axvline(x=y_test[obj][1], color='black', alpha=0.5)
```

We can check the ensemble by drawing samples from the predicted mixtures and plotting their density. It seems the MDN learned what it needed to.

```
a = sample_from_mixture(X_test, pred_weights, pred_means, pred_std, amount=len(X_test))
sns.jointplot(a[:,0], a[:,1], kind="hex", color="#4CB391",
              ylim=(-10,10), xlim=(-14,14))
```
<a href="https://colab.research.google.com/github/cseveriano/spatio-temporal-forecasting/blob/master/notebooks/thesis_experiments/20200924_eMVFTS_Wind_Energy_Raw.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ## Forecasting experiments for GEFCOM 2012 Wind Dataset ## Install Libs ``` !pip3 install -U git+https://github.com/PYFTS/pyFTS !pip3 install -U git+https://github.com/cseveriano/spatio-temporal-forecasting !pip3 install -U git+https://github.com/cseveriano/evolving_clustering !pip3 install -U git+https://github.com/cseveriano/fts2image !pip3 install -U hyperopt !pip3 install -U pyts import pandas as pd import numpy as np from hyperopt import hp from spatiotemporal.util import parameter_tuning, sampling from spatiotemporal.util import experiments as ex from sklearn.metrics import mean_squared_error from google.colab import files import matplotlib.pyplot as plt import pickle import math from pyFTS.benchmarks import Measures from pyts.decomposition import SingularSpectrumAnalysis from google.colab import files import warnings warnings.filterwarnings("ignore", category=DeprecationWarning) import datetime ``` ## Aux Functions ``` def normalize(df): mindf = df.min() maxdf = df.max() return (df-mindf)/(maxdf-mindf) def denormalize(norm, _min, _max): return [(n * (_max-_min)) + _min for n in norm] def getRollingWindow(index): pivot = index train_start = pivot.strftime('%Y-%m-%d') pivot = pivot + datetime.timedelta(days=20) train_end = pivot.strftime('%Y-%m-%d') pivot = pivot + datetime.timedelta(days=1) test_start = pivot.strftime('%Y-%m-%d') pivot = pivot + datetime.timedelta(days=6) test_end = pivot.strftime('%Y-%m-%d') return train_start, train_end, test_start, test_end def calculate_rolling_error(cv_name, df, forecasts, order_list): cv_results = pd.DataFrame(columns=['Split', 'RMSE', 'SMAPE']) limit = df.index[-1].strftime('%Y-%m-%d') test_end = "" index = df.index[0] for i in np.arange(len(forecasts)): train_start, train_end, test_start, test_end = getRollingWindow(index) test = df[test_start : test_end] yhat = forecasts[i] order = order_list[i] rmse = Measures.rmse(test.iloc[order:], yhat[:-1]) smape = Measures.smape(test.iloc[order:], yhat[:-1]) res = {'Split' : index.strftime('%Y-%m-%d') ,'RMSE' : rmse, 'SMAPE' : smape} cv_results = cv_results.append(res, ignore_index=True) cv_results.to_csv(cv_name+".csv") index = index + datetime.timedelta(days=7) return cv_results def get_final_forecast(norm_forecasts): forecasts_final = [] for i in np.arange(len(norm_forecasts)): f_raw = denormalize(norm_forecasts[i], min_raw, max_raw) forecasts_final.append(f_raw) return forecasts_final from spatiotemporal.test import methods_space_oahu as ms from spatiotemporal.util import parameter_tuning, sampling from spatiotemporal.util import experiments as ex from sklearn.metrics import mean_squared_error import numpy as np from hyperopt import fmin, tpe, hp, STATUS_OK, Trials from hyperopt import space_eval import traceback from . 
import sampling import pickle def calculate_error(loss_function, test_df, forecast, offset): error = loss_function(test_df.iloc[(offset):], forecast) print("Error : "+str(error)) return error def method_optimize(experiment, forecast_method, train_df, test_df, space, loss_function, max_evals): def objective(params): print(params) try: _output = list(params['output']) forecast = forecast_method(train_df, test_df, params) _step = params.get('step', 1) offset = params['order'] + _step - 1 error = calculate_error(loss_function, test_df[_output], forecast, offset) except Exception: traceback.print_exc() error = 1000 return {'loss': error, 'status': STATUS_OK} print("Running experiment: " + experiment) trials = Trials() best = fmin(objective, space, algo=tpe.suggest, max_evals=max_evals, trials=trials) print('best parameters: ') print(space_eval(space, best)) pickle.dump(best, open("best_" + experiment + ".pkl", "wb")) pickle.dump(trials, open("trials_" + experiment + ".pkl", "wb")) def run_search(methods, data, train, loss_function, max_evals=100, resample=None): if resample: data = sampling.resample_data(data, resample) train_df, test_df = sampling.train_test_split(data, train) for experiment, method, space in methods: method_optimize(experiment, method, train_df, test_df, space, loss_function, max_evals) ``` ## Load Dataset ``` import pandas as pd import matplotlib.pyplot as plt import numpy as np import math from sklearn.metrics import mean_squared_error #columns names wind_farms = ['wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'] # read raw dataset import pandas as pd df = pd.read_csv('https://query.data.world/s/3zx2jusk4z6zvlg2dafqgshqp3oao6', parse_dates=['date'], index_col=0) df.index = pd.to_datetime(df.index, format="%Y%m%d%H") interval = ((df.index >= '2009-07') & (df.index <= '2010-08')) df = df.loc[interval] #Normalize Data # Save Min-Max for Denorm min_raw = df.min() max_raw = df.max() # Perform Normalization norm_df = normalize(df) # Tuning split tuning_df = norm_df["2009-07-01":"2009-07-31"] norm_df = norm_df["2009-08-01":"2010-08-30"] df = df["2009-08-01":"2010-08-30"] ``` ## Forecasting Methods ### Persistence ``` def persistence_forecast(train, test, step): predictions = [] for t in np.arange(0,len(test), step): yhat = [test.iloc[t]] * step predictions.extend(yhat) return predictions def rolling_cv_persistence(df, step): forecasts = [] lags_list = [] limit = df.index[-1].strftime('%Y-%m-%d') test_end = "" index = df.index[0] while test_end < limit : print("Index: ", index.strftime('%Y-%m-%d')) train_start, train_end, test_start, test_end = getRollingWindow(index) index = index + datetime.timedelta(days=7) train = df[train_start : train_end] test = df[test_start : test_end] yhat = persistence_forecast(train, test, step) lags_list.append(1) forecasts.append(yhat) return forecasts, lags_list forecasts_raw, order_list = rolling_cv_persistence(norm_df, 1) forecasts_final = get_final_forecast(forecasts_raw) calculate_rolling_error("rolling_cv_wind_raw_persistence", norm_df, forecasts_final, order_list) files.download('rolling_cv_wind_raw_persistence.csv') ``` ### VAR ``` from statsmodels.tsa.api import VAR, DynamicVAR def evaluate_VAR_models(test_name, train, validation,target, maxlags_list): var_results = pd.DataFrame(columns=['Order','RMSE']) best_score, best_cfg, best_model = float("inf"), None, None for lgs in maxlags_list: model = VAR(train) results = model.fit(maxlags=lgs, ic='aic') order = results.k_ar forecast = [] for i in range(len(validation)-order) : 
forecast.extend(results.forecast(validation.values[i:i+order],1)) forecast_df = pd.DataFrame(columns=validation.columns, data=forecast) rmse = Measures.rmse(validation[target].iloc[order:], forecast_df[target].values) if rmse < best_score: best_score, best_cfg, best_model = rmse, order, results res = {'Order' : str(order) ,'RMSE' : rmse} print('VAR (%s) RMSE=%.3f' % (str(order),rmse)) var_results = var_results.append(res, ignore_index=True) var_results.to_csv(test_name+".csv") print('Best VAR(%s) RMSE=%.3f' % (best_cfg, best_score)) return best_model def var_forecast(train, test, params): order = params['order'] step = params['step'] model = VAR(train.values) results = model.fit(maxlags=order) lag_order = results.k_ar print("Lag order:" + str(lag_order)) forecast = [] for i in np.arange(0,len(test)-lag_order+1,step) : forecast.extend(results.forecast(test.values[i:i+lag_order],step)) forecast_df = pd.DataFrame(columns=test.columns, data=forecast) return forecast_df.values, lag_order def rolling_cv_var(df, params): forecasts = [] order_list = [] limit = df.index[-1].strftime('%Y-%m-%d') test_end = "" index = df.index[0] while test_end < limit : print("Index: ", index.strftime('%Y-%m-%d')) train_start, train_end, test_start, test_end = getRollingWindow(index) index = index + datetime.timedelta(days=7) train = df[train_start : train_end] test = df[test_start : test_end] # Concat train & validation for test yhat, lag_order = var_forecast(train, test, params) forecasts.append(yhat) order_list.append(lag_order) return forecasts, order_list params_raw = {'order': 4, 'step': 1} forecasts_raw, order_list = rolling_cv_var(norm_df, params_raw) forecasts_final = get_final_forecast(forecasts_raw) calculate_rolling_error("rolling_cv_wind_raw_var", df, forecasts_final, order_list) files.download('rolling_cv_wind_raw_var.csv') ``` ### e-MVFTS ``` from spatiotemporal.models.clusteredmvfts.fts import evolvingclusterfts def evolvingfts_forecast(train_df, test_df, params, train_model=True): _variance_limit = params['variance_limit'] _defuzzy = params['defuzzy'] _t_norm = params['t_norm'] _membership_threshold = params['membership_threshold'] _order = params['order'] _step = params['step'] model = evolvingclusterfts.EvolvingClusterFTS(variance_limit=_variance_limit, defuzzy=_defuzzy, t_norm=_t_norm, membership_threshold=_membership_threshold) model.fit(train_df.values, order=_order, verbose=False) forecast = model.predict(test_df.values, steps_ahead=_step) forecast_df = pd.DataFrame(data=forecast, columns=test_df.columns) return forecast_df.values def rolling_cv_evolving(df, params): forecasts = [] order_list = [] limit = df.index[-1].strftime('%Y-%m-%d') test_end = "" index = df.index[0] first_time = True while test_end < limit : print("Index: ", index.strftime('%Y-%m-%d')) train_start, train_end, test_start, test_end = getRollingWindow(index) index = index + datetime.timedelta(days=7) train = df[train_start : train_end] test = df[test_start : test_end] # Concat train & validation for test yhat = list(evolvingfts_forecast(train, test, params, train_model=first_time)) #yhat.append(yhat[-1]) #para manter o formato do vetor de metricas forecasts.append(yhat) order_list.append(params['order']) first_time = False return forecasts, order_list params_raw = {'variance_limit': 0.001, 'order': 2, 'defuzzy': 'weighted', 't_norm': 'threshold', 'membership_threshold': 0.6, 'step':1} forecasts_raw, order_list = rolling_cv_evolving(norm_df, params_raw) forecasts_final = get_final_forecast(forecasts_raw) 
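# get_final_forecast denormalizes each rolling-window forecast back to the
# original scale, so the RMSE/SMAPE computed below are in the units of the raw data.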
calculate_rolling_error("rolling_cv_wind_raw_emvfts", df, forecasts_final, order_list) files.download('rolling_cv_wind_raw_emvfts.csv') ``` ### MLP ``` from keras.models import Sequential from keras.layers import Dense from keras.layers import LSTM from keras.layers import Dropout from keras.constraints import maxnorm from keras.models import Sequential from keras.layers.core import Dense, Dropout, Activation from keras.layers.normalization import BatchNormalization # convert series to supervised learning def series_to_supervised(data, n_in=1, n_out=1, dropnan=True): n_vars = 1 if type(data) is list else data.shape[1] df = pd.DataFrame(data) cols, names = list(), list() # input sequence (t-n, ... t-1) for i in range(n_in, 0, -1): cols.append(df.shift(i)) names += [('var%d(t-%d)' % (j+1, i)) for j in range(n_vars)] # forecast sequence (t, t+1, ... t+n) for i in range(0, n_out): cols.append(df.shift(-i)) if i == 0: names += [('var%d(t)' % (j+1)) for j in range(n_vars)] else: names += [('var%d(t+%d)' % (j+1, i)) for j in range(n_vars)] # put it all together agg = pd.concat(cols, axis=1) agg.columns = names # drop rows with NaN values if dropnan: agg.dropna(inplace=True) return agg ``` #### MLP Parameter Tuning ``` from spatiotemporal.util import parameter_tuning, sampling from spatiotemporal.util import experiments as ex from sklearn.metrics import mean_squared_error from hyperopt import hp import numpy as np mlp_space = {'choice': hp.choice('num_layers', [ {'layers': 'two', }, {'layers': 'three', 'units3': hp.choice('units3', [8, 16, 64, 128, 256, 512]), 'dropout3': hp.choice('dropout3', [0, 0.25, 0.5, 0.75]) } ]), 'units1': hp.choice('units1', [8, 16, 64, 128, 256, 512]), 'units2': hp.choice('units2', [8, 16, 64, 128, 256, 512]), 'dropout1': hp.choice('dropout1', [0, 0.25, 0.5, 0.75]), 'dropout2': hp.choice('dropout2', [0, 0.25, 0.5, 0.75]), 'batch_size': hp.choice('batch_size', [28, 64, 128, 256, 512]), 'order': hp.choice('order', [1, 2, 3]), 'input': hp.choice('input', [wind_farms]), 'output': hp.choice('output', [wind_farms]), 'epochs': hp.choice('epochs', [100, 200, 300])} def mlp_tuning(train_df, test_df, params): _input = list(params['input']) _nlags = params['order'] _epochs = params['epochs'] _batch_size = params['batch_size'] nfeat = len(train_df.columns) nsteps = params.get('step',1) nobs = _nlags * nfeat output_index = -nfeat*nsteps train_reshaped_df = series_to_supervised(train_df[_input], n_in=_nlags, n_out=nsteps) train_X, train_Y = train_reshaped_df.iloc[:, :nobs].values, train_reshaped_df.iloc[:, output_index:].values test_reshaped_df = series_to_supervised(test_df[_input], n_in=_nlags, n_out=nsteps) test_X, test_Y = test_reshaped_df.iloc[:, :nobs].values, test_reshaped_df.iloc[:, output_index:].values # design network model = Sequential() model.add(Dense(params['units1'], input_dim=train_X.shape[1], activation='relu')) model.add(Dropout(params['dropout1'])) model.add(BatchNormalization()) model.add(Dense(params['units2'], activation='relu')) model.add(Dropout(params['dropout2'])) model.add(BatchNormalization()) if params['choice']['layers'] == 'three': model.add(Dense(params['choice']['units3'], activation='relu')) model.add(Dropout(params['choice']['dropout3'])) model.add(BatchNormalization()) model.add(Dense(train_Y.shape[1], activation='sigmoid')) model.compile(loss='mse', optimizer='adam') # includes the call back object model.fit(train_X, train_Y, epochs=_epochs, batch_size=_batch_size, verbose=False, shuffle=False) # predict the test set forecast = 
model.predict(test_X, verbose=False) return forecast methods = [] methods.append(("EXP_OAHU_MLP", mlp_tuning, mlp_space)) train_split = 0.6 run_search(methods, tuning_df, train_split, Measures.rmse, max_evals=30, resample=None) ``` #### MLP Forecasting ``` def mlp_multi_forecast(train_df, test_df, params): nfeat = len(train_df.columns) nlags = params['order'] nsteps = params.get('step',1) nobs = nlags * nfeat output_index = -nfeat*nsteps train_reshaped_df = series_to_supervised(train_df, n_in=nlags, n_out=nsteps) train_X, train_Y = train_reshaped_df.iloc[:, :nobs].values, train_reshaped_df.iloc[:, output_index:].values test_reshaped_df = series_to_supervised(test_df, n_in=nlags, n_out=nsteps) test_X, test_Y = test_reshaped_df.iloc[:, :nobs].values, test_reshaped_df.iloc[:, output_index:].values # design network model = designMLPNetwork(train_X.shape[1], train_Y.shape[1], params) # fit network model.fit(train_X, train_Y, epochs=500, batch_size=1000, verbose=False, shuffle=False) forecast = model.predict(test_X) # fcst = [f[0] for f in forecast] fcst = forecast return fcst def designMLPNetwork(input_shape, output_shape, params): model = Sequential() model.add(Dense(params['units1'], input_dim=input_shape, activation='relu')) model.add(Dropout(params['dropout1'])) model.add(BatchNormalization()) model.add(Dense(params['units2'], activation='relu')) model.add(Dropout(params['dropout2'])) model.add(BatchNormalization()) if params['choice']['layers'] == 'three': model.add(Dense(params['choice']['units3'], activation='relu')) model.add(Dropout(params['choice']['dropout3'])) model.add(BatchNormalization()) model.add(Dense(output_shape, activation='sigmoid')) model.compile(loss='mse', optimizer='adam') return model def rolling_cv_mlp(df, params): forecasts = [] order_list = [] limit = df.index[-1].strftime('%Y-%m-%d') test_end = "" index = df.index[0] while test_end < limit : print("Index: ", index.strftime('%Y-%m-%d')) train_start, train_end, test_start, test_end = getRollingWindow(index) index = index + datetime.timedelta(days=7) train = df[train_start : train_end] test = df[test_start : test_end] # Perform forecast yhat = list(mlp_multi_forecast(train, test, params)) yhat.append(yhat[-1]) #para manter o formato do vetor de metricas forecasts.append(yhat) order_list.append(params['order']) return forecasts, order_list # Enter best params params_raw = {'batch_size': 64, 'choice': {'layers': 'two'}, 'dropout1': 0.25, 'dropout2': 0.5, 'epochs': 200, 'input': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'order': 2, 'output': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'units1': 128, 'units2': 128} forecasts_raw, order_list = rolling_cv_mlp(norm_df, params_raw) forecasts_final = get_final_forecast(forecasts_raw) calculate_rolling_error("rolling_cv_wind_raw_mlp_multi", df, forecasts_final, order_list) files.download('rolling_cv_wind_raw_mlp_multi.csv') ``` ### Granular FTS ``` from pyFTS.models.multivariate import granular from pyFTS.partitioners import Grid, Entropy from pyFTS.models.multivariate import variable from pyFTS.common import Membership from pyFTS.partitioners import Grid, Entropy ``` #### Granular Parameter Tuning ``` granular_space = { 'npartitions': hp.choice('npartitions', [100, 150, 200]), 'order': hp.choice('order', [1, 2]), 'knn': hp.choice('knn', [1, 2, 3, 4, 5]), 'alpha_cut': hp.choice('alpha_cut', [0, 0.1, 0.2, 0.3]), 'input': hp.choice('input', [['wp1', 'wp2', 'wp3']]), 'output': hp.choice('output', [['wp1', 'wp2', 'wp3']])} def granular_tuning(train_df, test_df, 
params): _input = list(params['input']) _output = list(params['output']) _npartitions = params['npartitions'] _order = params['order'] _knn = params['knn'] _alpha_cut = params['alpha_cut'] _step = params.get('step',1) ## create explanatory variables exp_variables = [] for vc in _input: exp_variables.append(variable.Variable(vc, data_label=vc, alias=vc, npart=_npartitions, func=Membership.trimf, data=train_df, alpha_cut=_alpha_cut)) model = granular.GranularWMVFTS(explanatory_variables=exp_variables, target_variable=exp_variables[0], order=_order, knn=_knn) model.fit(train_df[_input], num_batches=1) if _step > 1: forecast = pd.DataFrame(columns=test_df.columns) length = len(test_df.index) for k in range(0,(length -(_order + _step - 1))): fcst = model.predict(test_df[_input], type='multivariate', start_at=k, steps_ahead=_step) forecast = forecast.append(fcst.tail(1)) else: forecast = model.predict(test_df[_input], type='multivariate') return forecast[_output].values methods = [] methods.append(("EXP_WIND_GRANULAR", granular_tuning, granular_space)) train_split = 0.6 run_search(methods, tuning_df, train_split, Measures.rmse, max_evals=10, resample=None) ``` #### Granular Forecasting ``` def granular_forecast(train_df, test_df, params): _input = list(params['input']) _output = list(params['output']) _npartitions = params['npartitions'] _knn = params['knn'] _alpha_cut = params['alpha_cut'] _order = params['order'] _step = params.get('step',1) ## create explanatory variables exp_variables = [] for vc in _input: exp_variables.append(variable.Variable(vc, data_label=vc, alias=vc, npart=_npartitions, func=Membership.trimf, data=train_df, alpha_cut=_alpha_cut)) model = granular.GranularWMVFTS(explanatory_variables=exp_variables, target_variable=exp_variables[0], order=_order, knn=_knn) model.fit(train_df[_input], num_batches=1) if _step > 1: forecast = pd.DataFrame(columns=test_df.columns) length = len(test_df.index) for k in range(0,(length -(_order + _step - 1))): fcst = model.predict(test_df[_input], type='multivariate', start_at=k, steps_ahead=_step) forecast = forecast.append(fcst.tail(1)) else: forecast = model.predict(test_df[_input], type='multivariate') return forecast[_output].values def rolling_cv_granular(df, params): forecasts = [] order_list = [] limit = df.index[-1].strftime('%Y-%m-%d') test_end = "" index = df.index[0] while test_end < limit : print("Index: ", index.strftime('%Y-%m-%d')) train_start, train_end, test_start, test_end = getRollingWindow(index) index = index + datetime.timedelta(days=7) train = df[train_start : train_end] test = df[test_start : test_end] # Perform forecast yhat = list(granular_forecast(train, test, params)) yhat.append(yhat[-1]) #para manter o formato do vetor de metricas forecasts.append(yhat) order_list.append(params['order']) return forecasts, order_list def granular_get_final_forecast(forecasts_raw, input): forecasts_final = [] l_min = df[input].min() l_max = df[input].max() for i in np.arange(len(forecasts_raw)): f_raw = denormalize(forecasts_raw[i], l_min, l_max) forecasts_final.append(f_raw) return forecasts_final # Enter best params params_raw = {'alpha_cut': 0.3, 'input': ('wp1', 'wp2', 'wp3'), 'knn': 5, 'npartitions': 200, 'order': 2, 'output': ('wp1', 'wp2', 'wp3')} forecasts_raw, order_list = rolling_cv_granular(norm_df, params_raw) forecasts_final = granular_get_final_forecast(forecasts_raw, list(params_raw['input'])) calculate_rolling_error("rolling_cv_wind_raw_granular", df[list(params_raw['input'])], forecasts_final, order_list) 
files.download('rolling_cv_wind_raw_granular.csv') ``` ## Result Analysis ``` import pandas as pd from google.colab import files files.upload() def createBoxplot(filename, data, xticklabels, ylabel): # Create a figure instance fig = plt.figure(1, figsize=(9, 6)) # Create an axes instance ax = fig.add_subplot(111) # Create the boxplot bp = ax.boxplot(data, patch_artist=True) ## change outline color, fill color and linewidth of the boxes for box in bp['boxes']: # change outline color box.set( color='#7570b3', linewidth=2) # change fill color box.set( facecolor = '#AACCFF' ) ## change color and linewidth of the whiskers for whisker in bp['whiskers']: whisker.set(color='#7570b3', linewidth=2) ## change color and linewidth of the caps for cap in bp['caps']: cap.set(color='#7570b3', linewidth=2) ## change color and linewidth of the medians for median in bp['medians']: median.set(color='#FFE680', linewidth=2) ## change the style of fliers and their fill for flier in bp['fliers']: flier.set(marker='o', color='#e7298a', alpha=0.5) ## Custom x-axis labels ax.set_xticklabels(xticklabels) ax.set_ylabel(ylabel) plt.show() fig.savefig(filename, bbox_inches='tight') var_results = pd.read_csv("rolling_cv_wind_raw_var.csv") evolving_results = pd.read_csv("rolling_cv_wind_raw_emvfts.csv") mlp_results = pd.read_csv("rolling_cv_wind_raw_mlp_multi.csv") granular_results = pd.read_csv("rolling_cv_wind_raw_granular.csv") metric = 'RMSE' results_data = [evolving_results[metric],var_results[metric], mlp_results[metric], granular_results[metric]] xticks = ['e-MVFTS','VAR','MLP','FIG-FTS'] ylab = 'RMSE' createBoxplot("e-mvfts_boxplot_rmse_solar", results_data, xticks, ylab) pd.options.display.float_format = '{:.2f}'.format metric = 'RMSE' rmse_df = pd.DataFrame(columns=['e-MVFTS','VAR','MLP','FIG-FTS']) rmse_df["e-MVFTS"] = evolving_results[metric] rmse_df["VAR"] = var_results[metric] rmse_df["MLP"] = mlp_results[metric] rmse_df["FIG-FTS"] = granular_results[metric] rmse_df.std() metric = 'SMAPE' results_data = [evolving_results[metric],var_results[metric], mlp_results[metric], granular_results[metric]] xticks = ['e-MVFTS','VAR','MLP','FIG-FTS'] ylab = 'SMAPE' createBoxplot("e-mvfts_boxplot_smape_solar", results_data, xticks, ylab) metric = 'SMAPE' smape_df = pd.DataFrame(columns=['e-MVFTS','VAR','MLP','FIG-FTS']) smape_df["e-MVFTS"] = evolving_results[metric] smape_df["VAR"] = var_results[metric] smape_df["MLP"] = mlp_results[metric] smape_df["FIG-FTS"] = granular_results[metric] smape_df.std() metric = "RMSE" data = pd.DataFrame(columns=["VAR", "Evolving", "MLP", "Granular"]) data["VAR"] = var_results[metric] data["Evolving"] = evolving_results[metric] data["MLP"] = mlp_results[metric] data["Granular"] = granular_results[metric] ax = data.plot(figsize=(18,6)) ax.set(xlabel='Window', ylabel=metric) fig = ax.get_figure() #fig.savefig(path_images + exp_id + "_prequential.png") x = np.arange(len(data.columns.values)) names = data.columns.values values = data.mean().values plt.figure(figsize=(5,6)) plt.bar(x, values, align='center', alpha=0.5, width=0.9) plt.xticks(x, names) #plt.yticks(np.arange(0, 1.1, 0.1)) plt.ylabel(metric) #plt.savefig(path_images + exp_id + "_bars.png") metric = "SMAPE" data = pd.DataFrame(columns=["VAR", "Evolving", "MLP", "Granular"]) data["VAR"] = var_results[metric] data["Evolving"] = evolving_results[metric] data["MLP"] = mlp_results[metric] data["Granular"] = granular_results[metric] ax = data.plot(figsize=(18,6)) ax.set(xlabel='Window', ylabel=metric) fig = ax.get_figure() 
#fig.savefig(path_images + exp_id + "_prequential.png")

x = np.arange(len(data.columns.values))
names = data.columns.values
values = data.mean().values

plt.figure(figsize=(5,6))
plt.bar(x, values, align='center', alpha=0.5, width=0.9)
plt.xticks(x, names)
#plt.yticks(np.arange(0, 1.1, 0.1))
plt.ylabel(metric)
#plt.savefig(path_images + exp_id + "_bars.png")
```
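To make the comparison above easier to read at a glance, the snippet below collects the mean and standard deviation of each method's errors into a single table. This is a minimal sketch, not part of the original experiment; it assumes the four result DataFrames loaded earlier in this section (`evolving_results`, `var_results`, `mlp_results`, `granular_results`) are still in scope.

```
# Hedged sketch: one summary table of mean +/- std per method and metric,
# reusing the result frames already loaded above.
def summarize_results(results, metrics=('RMSE', 'SMAPE')):
    rows = {}
    for name, frame in results.items():
        rows[name] = {m: f"{frame[m].mean():.3f} +/- {frame[m].std():.3f}"
                      for m in metrics}
    return pd.DataFrame(rows).T

summary = summarize_results({'e-MVFTS': evolving_results,
                             'VAR': var_results,
                             'MLP': mlp_results,
                             'FIG-FTS': granular_results})
summary
```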
true
code
0.44059
null
null
null
null
# Trade-off between classification accuracy and reconstruction error during dimensionality reduction - Low-dimensional LSTM representations are excellent at dimensionality reduction, but are poor at reconstructing the original data - On the other hand, PCs are excellent at reconstructing the original data but these high-variance components do not preserve class information ``` import numpy as np import pandas as pd import scipy as sp import pickle import os import random import sys # visualizations from _plotly_future_ import v4_subplots import plotly.offline as py py.init_notebook_mode(connected=True) import plotly.graph_objs as go import plotly.subplots as tls import plotly.figure_factory as ff import plotly.io as pio import plotly.express as px pio.templates.default = 'plotly_white' pio.orca.config.executable = '/home/joyneelm/fire/bin/orca' colors = px.colors.qualitative.Plotly class ARGS(): roi = 300 net = 7 subnet = 'wb' train_size = 100 batch_size = 32 num_epochs = 50 zscore = 1 #gru k_hidden = 32 k_layers = 1 dims = [3, 4, 5, 10] args = ARGS() def _get_results(k_dim): RES_DIR = 'results/clip_gru_recon' load_path = (RES_DIR + '/roi_%d_net_%d' %(args.roi, args.net) + '_trainsize_%d' %(args.train_size) + '_k_hidden_%d' %(args.k_hidden) + '_kdim_%d' %(k_dim) + '_k_layers_%d' %(args.k_layers) + '_batch_size_%d' %(args.batch_size) + '_num_epochs_45' + '_z_%d.pkl' %(args.zscore)) with open(load_path, 'rb') as f: results = pickle.load(f) # print(results.keys()) return results r = {} for k_dim in args.dims: r[k_dim] = _get_results(k_dim) def _plot_fig(ss): title_text = ss if ss=='var': ss = 'mse' invert = True else: invert = False subplot_titles = ['train', 'test'] fig = tls.make_subplots(rows=1, cols=2, subplot_titles=subplot_titles, print_grid=False) for ii, x in enumerate(['train', 'test']): gru_score = {'mean':[], 'ste':[]} pca_score = {'mean':[], 'ste':[]} for k_dim in args.dims: a = r[k_dim] # gru decoder y = np.mean(a['%s_%s'%(x, ss)]) gru_score['mean'].append(y) # pca decoder y = np.mean(a['%s_pca_%s'%(x, ss)]) pca_score['mean'].append(y) x = np.arange(len(args.dims)) if invert: y = 1 - np.array(gru_score['mean']) else: y = gru_score['mean'] error_y = gru_score['ste'] trace = go.Bar(x=x, y=y, name='lstm decoder', marker_color=colors[0]) fig.add_trace(trace, 1, ii+1) if invert: y = 1 - np.array(pca_score['mean']) else: y = pca_score['mean'] error_y = pca_score['ste'] trace = go.Bar(x=x, y=y, name='pca recon', marker_color=colors[1]) fig.add_trace(trace, 1, ii+1) fig.update_xaxes(tickvals=np.arange(len(args.dims)), ticktext=args.dims) fig.update_layout(height=350, width=700, title_text=title_text) return fig ``` ## Mean-squared error vs number of dimensions ``` ''' mse ''' ss = 'mse' fig = _plot_fig(ss) fig.show() ``` ## Variance captured vs number of dimensions ``` ''' variance ''' ss = 'var' fig = _plot_fig(ss) fig.show() ``` ## R-squared vs number of dimensions ``` ''' r2 ''' ss = 'r2' fig = _plot_fig(ss) fig.show() results = r[10] # variance not captured by pca recon pca_not = 1 - np.sum(results['pca_var']) print('percent variance captured by pca components = %0.3f' %(1 - pca_not)) # this is proportional to pca mse pca_mse = results['test_pca_mse'] # variance not captured by lstm decoder? 
lstm_mse = results['test_mse'] lstm_not = lstm_mse*(pca_not/pca_mse) print('percent variance captured by lstm recon = %0.3f' %(1 - lstm_not)) def _plot_fig_ext(ss): title_text = ss if ss=='var': ss = 'mse' invert = True else: invert = False subplot_titles = ['train', 'test'] fig = go.Figure() x = 'test' lstm_score = {'mean':[], 'ste':[]} pca_score = {'mean':[], 'ste':[]} lstm_acc = {'mean':[], 'ste':[]} pc_acc = {'mean':[], 'ste':[]} for k_dim in args.dims: a = r[k_dim] # lstm encoder k_sub = len(a['test']) y = np.mean(a['test']) error_y = 3/np.sqrt(k_sub)*np.std(a['test']) lstm_acc['mean'].append(y) lstm_acc['ste'].append(error_y) # lstm decoder y = np.mean(a['%s_%s'%(x, ss)]) lstm_score['mean'].append(y) lstm_score['ste'].append(error_y) # pca encoder b = r_pc[k_dim] y = np.mean(b['test']) error_y = 3/np.sqrt(k_sub)*np.std(b['test']) pc_acc['mean'].append(y) pc_acc['ste'].append(error_y) # pca decoder y = np.mean(a['%s_pca_%s'%(x, ss)]) pca_score['mean'].append(y) pca_score['ste'].append(error_y) x = np.arange(len(args.dims)) y = lstm_acc['mean'] error_y = lstm_acc['ste'] trace = go.Bar(x=x, y=y, name='GRU Accuracy', error_y=dict(type='data', array=error_y), marker_color=colors[3]) fig.add_trace(trace) y = pc_acc['mean'] error_y = pc_acc['ste'] trace = go.Bar(x=x, y=y, name='PCA Accuracy', error_y=dict(type='data', array=error_y), marker_color=colors[4]) fig.add_trace(trace) if invert: y = 1 - np.array(lstm_score['mean']) else: y = lstm_score['mean'] error_y = lstm_score['ste'] trace = go.Bar(x=x, y=y, name='GRU Reconstruction', error_y=dict(type='data', array=error_y), marker_color=colors[5]) fig.add_trace(trace) if invert: y = 1 - np.array(pca_score['mean']) else: y = pca_score['mean'] error_y = pca_score['ste'] trace = go.Bar(x=x, y=y, name='PCA Reconstruction', error_y=dict(type='data', array=error_y), marker_color=colors[2]) fig.add_trace(trace) fig.update_yaxes(title=dict(text='Accuracy or % variance', font_size=20), gridwidth=1, gridcolor='#bfbfbf', tickfont=dict(size=20)) fig.update_xaxes(title=dict(text='Number of dimensions', font_size=20), tickvals=np.arange(len(args.dims)), ticktext=args.dims, tickfont=dict(size=20)) fig.update_layout(height=470, width=570, font_color='black', legend_orientation='h', legend_font_size=20, legend_x=-0.1, legend_y=-0.3) return fig def _get_pc_results(PC_DIR, k_dim): load_path = (PC_DIR + '/roi_%d_net_%d' %(args.roi, args.net) + '_nw_%s' %(args.subnet) + '_trainsize_%d' %(args.train_size) + '_kdim_%d_batch_size_%d' %(k_dim, args.batch_size) + '_num_epochs_%d_z_%d.pkl' %(args.num_epochs, args.zscore)) with open(load_path, 'rb') as f: results = pickle.load(f) print(results.keys()) return results ``` ## Comparison of LSTM and PCA: classification accuracy and variance captured ``` ''' variance ''' r_pc = {} PC_DIR = 'results/clip_pca' for k_dim in args.dims: r_pc[k_dim] = _get_pc_results(PC_DIR, k_dim) colors = px.colors.qualitative.Set3 #colors = ["#D55E00", "#009E73", "#56B4E9", "#E69F00"] ss = 'var' fig = _plot_fig_ext(ss) fig.show() fig.write_image('figures/fig3c.png') ```
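The `pca_mse` and `pca_var` quantities read from the results pickle above come from a standard PCA reconstruction. The cell below is an illustrative sketch on synthetic data (not the fMRI features used in this notebook) showing how reconstruction error and captured variance are typically computed with scikit-learn, which may help when interpreting the bars above.

```
# Illustrative only: synthetic stand-in for a (samples x features) matrix.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 300))

for k_dim in [3, 4, 5, 10]:
    pca = PCA(n_components=k_dim).fit(X)
    X_rec = pca.inverse_transform(pca.transform(X))   # low-rank reconstruction
    mse = np.mean((X - X_rec) ** 2)
    var_captured = pca.explained_variance_ratio_.sum()
    print(f"k={k_dim:2d}  recon MSE={mse:.4f}  variance captured={var_captured:.3f}")
```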
true
code
0.569853
null
null
null
null
# Controlling Flow with Conditional Statements Now that you've learned how to create conditional statements, let's learn how to use them to control the flow of our programs. This is done with `if`, `elif`, and `else` statements. ## The `if` Statement What if we wanted to check if a number was divisible by 2 and if so then print that number out. Let's diagram that out. ![image.png](attachment:image.png) - Check to see if A is even - If yes, then print our message: "A is even" This use case can be translated into a "if" statement. I'm going to write this out in pseudocode which looks very similar to Python. ```text if A is even: print "A is even" ``` ``` # Let's translate this into Python code def check_evenness(A): if A % 2 == 0: print(f"A ({A:02}) is even!") for i in range(1, 11): check_evenness(i) # You can do multiple if statements and they're executed sequentially A = 10 if A > 0: print('A is positive') if A % 2 == 0: print('A is even!') ``` ## The `else` Statement But what if we wanted to know if the number was even OR odd? Let's diagram that out: ![image.png](attachment:image.png) Again, translating this to pseudocode, we're going to use the 'else' statement: ```text if A is even: print "A is even" else: print "A is odd" ``` ``` # Let's translate this into Python code def check_evenness(A): if A % 2 == 0: print(f"A ({A:02}) is even!") else: print(f'A ({A:02}) is odd!') for i in range(1, 11): check_evenness(i) ``` # The 'else if' or `elif` Statement What if we wanted to check if A is divisible by 2 or 3? Let's diagram that out: ![image.png](attachment:image.png) Again, translating this into psuedocode, we're going to use the 'else if' statement. ```text if A is divisible by 2: print "2 divides A" else if A is divisible by 3: print "3 divides A" else print "2 and 3 don't divide A" ``` ``` # Let's translate this into Python code def check_divisible_by_2_and_3(A): if A % 2 == 0: print(f"2 divides A ({A:02})!") # else if in Python is elif elif A % 3 == 0: print(f'3 divides A ({A:02})!') else: print(f'A ({A:02}) is not divisible by 2 or 3)') for i in range(1, 11): check_divisible_by_2_and_3(i) ``` ## Order Matters When chaining conditionals, you need to be careful how you order them. For example, what if we wanted te check if a number is divisible by 2, 3, or both: ![image.png](attachment:image.png) ``` # Let's translate this into Python code def check_divisible_by_2_and_3(A): if A % 2 == 0: print(f"2 divides A ({A:02})!") elif A % 3 == 0: print(f'3 divides A ({A:02})!') elif A % 2 == 0 and A % 3 == 0: print(f'2 and 3 divides A ({A:02})!') else: print(f"2 or 3 doesn't divide A ({A:02})") for i in range(1, 11): check_divisible_by_2_and_3(i) ``` Wait! we would expect that 6, which is divisible by both 2 and 3 to show that! Looking back at the graphic, we can see that the flow is checking for 2 first, and since that's true we follow that path first. Let's make a correction to our diagram to fix this: ![image.png](attachment:image.png) ``` # Let's translate this into Python code def check_divisible_by_2_and_3(A): if A % 2 == 0 and A % 3 == 0: print(f'2 and 3 divides A ({A:02})!') elif A % 3 == 0: print(f'3 divides A ({A:02})!') elif A % 2 == 0: print(f"2 divides A ({A:02})!") else: print(f"2 or 3 doesn't divide A ({A:02})") for i in range(1, 11): check_divisible_by_2_and_3(i) ``` **NOTE:** Always put your most restrictive conditional at the top of your if statements and then work your way down to the least restrictive. 
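Here is one more (hypothetical) example of the same principle, because it comes up constantly with overlapping numeric ranges: the narrowest condition has to be tested first, or it will never be reached.

```
# Another example of ordering: a score >= 90 also satisfies >= 80 and >= 70,
# so the most restrictive check must come first.
def letter_grade(score):
    if score >= 90:
        return 'A'
    elif score >= 80:
        return 'B'
    elif score >= 70:
        return 'C'
    else:
        return 'F'

for score in [95, 83, 71, 40]:
    print(score, letter_grade(score))
```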
![image.png](attachment:image.png)

## In-Class Assignments

- Create a function that takes two input variables `A` and `divisor`. Check if `divisor` divides into `A`. If it does, print `"<value of A> is divided by <value of divisor>"`.
- Create a function that takes an input variable `A` which is a string. Check if `A` has the substring `apple`, `peach`, or `blueberry` in it. Print out which of these are found within the string. Don't forget about the `in` operator that checks if a substring is in another string. Note: you could do this using just if/elif/else statements, but is there a better way using lists, for loops, and if/elif/else statements?

## Solutions

```
def is_divisible(A, divisor):
    if A % divisor == 0:
        print(f'{A} is divided by {divisor}')

A = 37
# this is actually a crude way to find out whether the number is prime
for i in range(2, int(A / 2)):
    is_divisible(A, i)

# notice that nothing was printed? That's because 37 is prime
B = 27
for i in range(2, int(B / 2)):
    is_divisible(B, i)

# this is ONE solution. There are others out there, and probably better ones too
def check_for_fruit(A):
    found_fruit = []
    if 'apple' in A:
        found_fruit.append('apple')
    if 'peach' in A:
        found_fruit.append('peach')
    if 'blueberry' in A:
        found_fruit.append('blueberry')

    found_fruit_str = ''
    for fruit in found_fruit:
        found_fruit_str += fruit
        found_fruit_str += ', '

    if len(found_fruit) > 0:
        # strip the trailing ", " before printing
        print(found_fruit_str.rstrip(', ') + ' is found within the string')
    else:
        print('No fruit found in the string')

check_for_fruit('there are apples and peaches in this pie')
```
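As one possible answer to the "is there a better way?" question above, the sketch below keeps the fruit names in a list and loops over them, so adding a new fruit only means adding one list element. This alternative is not from the original material.

```
# List + for loop version of the fruit checker
def check_for_fruit(A):
    fruits = ['apple', 'peach', 'blueberry']

    found_fruit = []
    for fruit in fruits:
        if fruit in A:
            found_fruit.append(fruit)

    if len(found_fruit) > 0:
        print(', '.join(found_fruit) + ' is found within the string')
    else:
        print('No fruit found in the string')

check_for_fruit('there are apples and peaches in this pie')
```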
true
code
0.238772
null
null
null
null
# BERT finetuning on AG_news-4 ## Librairy ``` # !pip install transformers==4.8.2 # !pip install datasets==1.7.0 import os import time import pickle import numpy as np import torch from sklearn.model_selection import train_test_split from sklearn.metrics import accuracy_score, recall_score, precision_score, f1_score from transformers import BertTokenizer, BertTokenizerFast from transformers import BertForSequenceClassification, AdamW from transformers import Trainer, TrainingArguments from transformers import EarlyStoppingCallback from transformers.data.data_collator import DataCollatorWithPadding from datasets import load_dataset, Dataset, concatenate_datasets # print(torch.__version__) # print(torch.cuda.device_count()) # print(torch.cuda.is_available()) # print(torch.cuda.get_device_name(0)) device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu') # if torch.cuda.is_available(): # torch.set_default_tensor_type('torch.cuda.FloatTensor') device ``` ## Global variables ``` BATCH_SIZE = 24 NB_EPOCHS = 4 RESULTS_FILE = '~/Results/BERT_finetune/ag_news-4_BERT_finetune_b'+str(BATCH_SIZE)+'_results.pkl' RESULTS_PATH = '~/Results/BERT_finetune/ag_news-4_b'+str(BATCH_SIZE)+'/' CACHE_DIR = '~/Data/huggignface/' # path of your folder ``` ## Dataset ``` # download dataset raw_datasets = load_dataset('ag_news', cache_dir=CACHE_DIR) # tokenize tokenizer = BertTokenizer.from_pretrained("bert-base-uncased") def tokenize_function(examples): return tokenizer(examples["text"], padding=True, truncation=True) tokenized_datasets = raw_datasets.map(tokenize_function, batched=True) tokenized_datasets.set_format(type='torch', columns=['input_ids', 'attention_mask', 'label']) train_dataset = tokenized_datasets["train"].shuffle(seed=42) train_val_datasets = train_dataset.train_test_split(train_size=0.8) train_dataset = train_val_datasets['train'].rename_column('label', 'labels') val_dataset = train_val_datasets['test'].rename_column('label', 'labels') test_dataset = tokenized_datasets["test"].shuffle(seed=42).rename_column('label', 'labels') # get number of labels num_labels = len(set(train_dataset['labels'].tolist())) num_labels ``` ## Model #### Model ``` model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=num_labels) model.to(device) ``` #### Training ``` training_args = TrainingArguments( # output output_dir=RESULTS_PATH, # params num_train_epochs=NB_EPOCHS, # nb of epochs per_device_train_batch_size=BATCH_SIZE, # batch size per device during training per_device_eval_batch_size=BATCH_SIZE, # cf. paper Sun et al. learning_rate=2e-5, # cf. paper Sun et al. # warmup_steps=500, # number of warmup steps for learning rate scheduler warmup_ratio=0.1, # cf. paper Sun et al. weight_decay=0.01, # strength of weight decay # # eval evaluation_strategy="steps", eval_steps=50, # evaluation_strategy='no', # no more evaluation, takes time # log logging_dir=RESULTS_PATH+'logs', logging_strategy='steps', logging_steps=50, # save # save_strategy='epoch', # save_strategy='steps', # load_best_model_at_end=False load_best_model_at_end=True # cf. paper Sun et al. 
) def compute_metrics(p): pred, labels = p pred = np.argmax(pred, axis=1) accuracy = accuracy_score(y_true=labels, y_pred=pred) return {"val_accuracy": accuracy} trainer = Trainer( model=model, args=training_args, tokenizer=tokenizer, train_dataset=train_dataset, eval_dataset=val_dataset, # compute_metrics=compute_metrics, # callbacks=[EarlyStoppingCallback(early_stopping_patience=5)] ) results = trainer.train() training_time = results.metrics["train_runtime"] training_time_per_epoch = training_time / training_args.num_train_epochs training_time_per_epoch trainer.save_model(os.path.join(RESULTS_PATH, 'best_model-0')) ``` ## Results ``` results_d = {} epoch = 1 ordered_files = sorted( [f for f in os.listdir(RESULTS_PATH) if (not f.endswith("logs")) and (f.startswith("best")) # best model eval only ], key=lambda x: int(x.split('-')[1]) ) for filename in ordered_files: print(filename) # load model model_file = os.path.join(RESULTS_PATH, filename) finetuned_model = BertForSequenceClassification.from_pretrained(model_file, num_labels=num_labels) finetuned_model.to(device) finetuned_model.eval() # compute test acc test_trainer = Trainer(finetuned_model, data_collator=DataCollatorWithPadding(tokenizer)) raw_preds, labels, _ = test_trainer.predict(test_dataset) preds = np.argmax(raw_preds, axis=1) test_acc = accuracy_score(y_true=labels, y_pred=preds) # results_d[filename] = (test_acc, training_time_per_epoch*epoch) results_d[filename] = test_acc # best model evaluation only print((test_acc, training_time_per_epoch*epoch)) epoch += 1 results_d['training_time'] = training_time # save results with open(RESULTS_FILE, 'wb') as fh: pickle.dump(results_d, fh) # load results with open(RESULTS_FILE, 'rb') as fh: results_d = pickle.load(fh) results_d ```
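The evaluation loop above reports accuracy only, although precision, recall, and F1 are already imported at the top of the notebook. As an optional addition (not in the original), the `labels` and `preds` arrays left over from the last evaluated model can be reused to report those metrics as well:

```
# Optional: macro-averaged precision / recall / F1 for the last evaluated model,
# reusing `labels` and `preds` from the evaluation loop above.
precision = precision_score(y_true=labels, y_pred=preds, average='macro')
recall = recall_score(y_true=labels, y_pred=preds, average='macro')
f1 = f1_score(y_true=labels, y_pred=preds, average='macro')
print(f"precision={precision:.4f}  recall={recall:.4f}  macro-F1={f1:.4f}")
```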
true
code
0.617657
null
null
null
null
# Graphs from the presentation ``` import matplotlib.pyplot as plt %matplotlib notebook # create a new figure plt.figure() # create x and y coordinates via lists x = [99, 19, 88, 12, 95, 47, 81, 64, 83, 76] y = [43, 18, 11, 4, 78, 47, 77, 70, 21, 24] # scatter the points onto the figure plt.scatter(x, y) # create a new figure plt.figure() # create x and y values via lists x = [1, 2, 3, 4, 5, 6, 7, 8] y = [1, 4, 9, 16, 25, 36, 49, 64] # plot the line plt.plot(x, y) # create a new figure plt.figure() # create a list of observations observations = [5.24, 3.82, 3.73, 5.3 , 3.93, 5.32, 6.43, 4.4 , 5.79, 4.05, 5.34, 5.62, 6.02, 6.08, 6.39, 5.03, 5.34, 4.98, 3.84, 4.91, 6.62, 4.66, 5.06, 2.37, 5. , 3.7 , 5.22, 5.86, 3.88, 4.68, 4.88, 5.01, 3.09, 5.38, 4.78, 6.26, 6.29, 5.77, 4.33, 5.96, 4.74, 4.54, 7.99, 5. , 4.85, 5.68, 3.73, 4.42, 4.99, 4.47, 6.06, 5.88, 4.56, 5.37, 6.39, 4.15] # create a histogram with 15 intervals plt.hist(observations, bins=15) # create a new figure plt.figure() # plot a red line with a transparancy of 40%. Label this 'line 1' plt.plot(x, y, color='red', alpha=0.4, label='line 1') # make a key appear on the plot plt.legend() # import pandas import pandas as pd # read in data from a csv data = pd.read_csv('data/weather.csv', parse_dates=['Date']) # create a new matplotlib figure plt.figure() # plot the temperature over time plt.plot(data['Date'], data['Temp (C)']) # add a ylabel plt.ylabel('Temperature (C)') plt.figure() # create inputs x = ['UK', 'France', 'Germany', 'Spain', 'Italy'] y = [67.5, 65.1, 83.5, 46.7, 60.6] # plot the chart plt.bar(x, y) plt.ylabel('Population (M)') plt.figure() # create inputs x = ['UK', 'France', 'Germany', 'Spain', 'Italy'] y = [67.5, 65.1, 83.5, 46.7, 60.6] # create a list of colours colour = ['red', 'green', 'blue', 'orange', 'purple'] # plot the chart with the colors and transparancy plt.bar(x, y, color=colour, alpha=0.5) plt.ylabel('Population (M)') plt.figure() x = [1, 2, 3, 4, 5, 6, 7, 8, 9] y1 = [2, 4, 6, 8, 10, 12, 14, 16, 18] y2 = [4, 8, 12, 16, 20, 24, 28, 32, 36] plt.scatter(x, y1, color='cyan', s=5) plt.scatter(x, y2, color='violet', s=15) plt.figure() x = [1, 2, 3, 4, 5, 6, 7, 8, 9] y1 = [2, 4, 6, 8, 10, 12, 14, 16, 18] y2 = [4, 8, 12, 16, 20, 24, 28, 32, 36] size1 = [10, 20, 30, 40, 50, 60, 70, 80, 90] size2 = [90, 80, 70, 60, 50, 40, 30, 20, 10] plt.scatter(x, y1, color='cyan', s=size1) plt.scatter(x, y2, color='violet', s=size2) co2_file = '../5. Examples of Visual Analytics in Python/data/national/co2_emissions_tonnes_per_person.csv' gdp_file = '../5. Examples of Visual Analytics in Python/data/national/gdppercapita_us_inflation_adjusted.csv' pop_file = '../5. 
Examples of Visual Analytics in Python/data/national/population.csv' co2_per_cap = pd.read_csv(co2_file, index_col=0, parse_dates=True) gdp_per_cap = pd.read_csv(gdp_file, index_col=0, parse_dates=True) population = pd.read_csv(pop_file, index_col=0, parse_dates=True) plt.figure() x = gdp_per_cap.loc['2017'] # gdp in 2017 y = co2_per_cap.loc['2017'] # co2 emmissions in 2017 # population in 2017 will give size of points (divide pop by 1M) size = population.loc['2017'] / 1e6 # scatter points with vector size and some transparancy plt.scatter(x, y, s=size, alpha=0.5) # set a log-scale plt.xscale('log') plt.yscale('log') plt.xlabel('GDP per capita, $US') plt.ylabel('CO2 emissions per person per year, tonnes') plt.figure() # create grid of numbers grid = [[1, 2, 3], [4, 5, 6], [7, 8, 9]] # plot the grid with 'autumn' color map plt.imshow(grid, cmap='autumn') # add a colour key plt.colorbar() import pandas as pd data = pd.read_csv("../5. Examples of Visual Analytics in Python/data/stocks/FTSE_stock_prices.csv", index_col=0) correlation_matrix = data.pct_change().corr() # create a new figure plt.figure() # imshow the grid of correlation plt.imshow(correlation_matrix, cmap='terrain') # add a color bar plt.colorbar() # remove cluttering x and y ticks plt.xticks([]) plt.yticks([]) elevation = pd.read_csv('data/UK_elevation.csv', index_col=0) # create figure plt.figure() # imshow data plt.imshow(elevation, # grid data vmin=-50, # minimum for colour bar vmax=500, # maximum for colour bar cmap='terrain', # terrain style colour map extent=[-11, 3, 50, 60]) # [x1, x2, y1, y2] plot boundaries # add axis labels and a title plt.xlabel('Longitude') plt.ylabel('Latitude') plt.title('UK Elevation Profile') # add a colourbar plt.colorbar() ```
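All of the charts above use the implicit `pyplot` state machine (`plt.figure()` followed by `plt.plot(...)`). Below is a short sketch of the equivalent object-oriented style for the temperature chart; it assumes the same `data/weather.csv` file read earlier, and it makes saving the figure or combining several axes a little more explicit.

```
# Object-oriented version of the temperature plot (sketch, same CSV as above)
import pandas as pd
import matplotlib.pyplot as plt

data = pd.read_csv('data/weather.csv', parse_dates=['Date'])

fig, ax = plt.subplots(figsize=(8, 4))
ax.plot(data['Date'], data['Temp (C)'])
ax.set_xlabel('Date')
ax.set_ylabel('Temperature (C)')
fig.savefig('temperature.png', dpi=150, bbox_inches='tight')
```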
true
code
0.663723
null
null
null
null
# BLU15 - Model CSI ## Intro: It often happens that your data distribution changes with time. More than that, sometimes you don't know how a model was trained and what was the original training data. In this learning unit we're going to try to identify whether an existing model meets our expectations and redeploy it. ## Problem statement: As an example, we're going to use the same problem that you met in the last BLU. You're already familiar with the problem, but just as a reminder: > The police department has received lots of complaints about its stop and search policy. Every time a car is stopped, the police officers have to decide whether or not to search the car for contraband. According to critics, these searches have a bias against people of certain backgrounds. You got a model from your client, and **here is the model's description:** > It's a LightGBM model (LGBMClassifier) trained on the following features: > - Department Name > - InterventionLocationName > - InterventionReasonCode > - ReportingOfficerIdentificationID > - ResidentIndicator > - SearchAuthorizationCode > - StatuteReason > - SubjectAge > - SubjectEthnicityCode > - SubjectRaceCode > - SubjectSexCode > - TownResidentIndicator > All the categorical feature were one-hot encoded. The only numerical feature (SubjectAge) was not changed. The rows that contain rare categorical features (the ones that appear less than N times in the dataset) were removed. Check the original_model.ipynb notebook for more details. P.S., if you never heard about lightgbm, XGboost and other gradient boosting, I highly recommend you to read this [article](https://mlcourse.ai/articles/topic10-boosting/) or watch these videos: [part1](https://www.youtube.com/watch?v=g0ZOtzZqdqk), [part2](https://www.youtube.com/watch?v=V5158Oug4W8) It's not essential for this BLU, so you might leave this link as a desert after you go through the learning materials and solve the exercises, but these are very good models you can use later on, so I suggest reading about them. **Here are the requirements that the police department created:** > - A minimum 50% success rate for searches (when a car is searched, it should be at least 50% likely that contraband is found) > - No police sub-department should have a discrepancy bigger than 5% between the search success rate between protected classes (race, ethnicity, gender) > - The largest possible amount of contraband found, given the constraints above. **And here is the description of how the current model succeeds with the requirements:** - precision score = 50% - recall = 89.3% - roc_auc_score for the probability predictions = 82.7% The precision and recall above are met for probability predictions with a specified threshold equal to **0.21073452797732833** It's not said whether the second requirement is met, and as it was not met in the previous learning unit, let's ignore it for now. 
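To make the description above concrete, here is a minimal sketch of the preprocessing it implies: one-hot encoding of the categorical features, `SubjectAge` left unchanged, and rows with rare categorical values dropped. The column list comes from the model description; the minimum count used here is a placeholder for N, and the real details live in `original_model.ipynb`.

```
# Hedged sketch of the preprocessing described above (not the client's actual code)
import pandas as pd

categorical = ['Department Name', 'InterventionLocationName', 'InterventionReasonCode',
               'ReportingOfficerIdentificationID', 'ResidentIndicator',
               'SearchAuthorizationCode', 'StatuteReason', 'SubjectEthnicityCode',
               'SubjectRaceCode', 'SubjectSexCode', 'TownResidentIndicator']

def prepare_features(df, min_count=30):   # min_count is a placeholder for N
    out = df.copy()
    for col in categorical:
        counts = out[col].value_counts()
        out = out[out[col].isin(counts[counts >= min_count].index)]  # drop rare categories
    X = pd.get_dummies(out[categorical], columns=categorical)        # one-hot encode
    X['SubjectAge'] = out['SubjectAge']                              # only numerical feature
    return X
```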
## Model diagnosing:

Let's first compare this model to the ones we created in the previous BLU:

| Model | Baseline | Second iteration | New model | Best model |
|-------------------|---------|--------|--------|--------|
| Requirement 1 - success rate | 0.53 | 0.38 | 0.5 | 1 |
| Requirement 2 - global discrimination (race) | 0.105 | 0.11 | NaN | 1 |
| Requirement 2 - global discrimination (sex) | 0.012 | 0.014 | NaN | 1 |
| Requirement 2 - global discrimination (ethnicity) | 0.114 | 0.101 | NaN | 2 |
| Requirement 2 - # department discrimination (race) | 27 | 17 | NaN | 2 |
| Requirement 2 - # department discrimination (sex) | 19 | 23 | NaN | 1 |
| Requirement 2 - # department discrimination (ethnicity) | 24 | NaN | 23 | 2 |
| Requirement 3 - contraband found (Recall) | 0.65 | 0.76 | 0.893 | 3 |

As we can see, the new model has exactly the required success rate (Requirement 1) and a very good recall (Requirement 3). But relying on such a specific threshold is risky, since the success rate could drop below 0.5 very quickly. It might be a better idea to use a bigger threshold (e.g. 0.25), but let's see.

Let's imagine that the model was trained a long time ago, and now you're in the future trying to evaluate it, because things might have changed. Data distribution is not always the same, so something that worked even a year ago could be completely wrong today. Especially in 2020!

<img src="media/future_2020.jpg" width=400/>

First of all, let's start the server which is running this model. Open the shell:

```sh
python protected_server.py
```

And read a csv file with new observations from 2020:

```
import joblib
import pandas as pd
import json
import pickle
from sklearn.metrics import precision_score, recall_score, roc_auc_score
from sklearn.metrics import confusion_matrix
import requests
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from sklearn.metrics import precision_recall_curve

%matplotlib inline

df = pd.read_csv('./data/new_observations.csv')
df.head()
```

Let's start by sending all of those requests and comparing the model's predictions with the target values. The model is already prepared to convert our observations to the format it expects; the only thing we need to do is make the department and intervention location names lowercase, and then we can extract the fields from the dataframe and put them into the post request.
``` # lowercaes departments and location names df['Department Name'] = df['Department Name'].apply(lambda x: str(x).lower()) df['InterventionLocationName'] = df['InterventionLocationName'].apply(lambda x: str(x).lower()) url = "http://127.0.0.1:5000/predict" headers = {'Content-Type': 'application/json'} def send_request(index: int, obs: dict, url: str, headers: dict): observation = { "id": index, "observation": { "Department Name": obs["Department Name"], "InterventionLocationName": obs["InterventionLocationName"], "InterventionReasonCode": obs["InterventionReasonCode"], "ReportingOfficerIdentificationID": obs["ReportingOfficerIdentificationID"], "ResidentIndicator": obs["ResidentIndicator"], "SearchAuthorizationCode": obs["SearchAuthorizationCode"], "StatuteReason": obs["StatuteReason"], "SubjectAge": obs["SubjectAge"], "SubjectEthnicityCode": obs["SubjectEthnicityCode"], "SubjectRaceCode": obs["SubjectRaceCode"], "SubjectSexCode": obs["SubjectSexCode"], "TownResidentIndicator": obs["TownResidentIndicator"] } } r = requests.post(url, data=json.dumps(observation), headers=headers) result = json.loads(r.text) return result responses = [send_request(i, obs, url, headers) for i, obs in df.iterrows()] print(responses[0]) df['proba'] = [r['proba'] for r in responses] threshold = 0.21073452797732833 # we're going to use the threshold we got from the client df['prediction'] = [1 if p >= threshold else 0 for p in df['proba']] ``` **NOTE:** We could also load the model and make predictions locally (without using the api), but: 1. I wanted to show you how you might send requests in a similar situation 2. If you have a running API and some model file, you always need to understand how the API works (if it makes any kind of data preprocessing), which might sometimes be complicated, and if you're trying to analyze the model running in production, you still need to make sure that the local predictions you do are equal to the one that the production api does. ``` confusion_matrix(df['ContrabandIndicator'], df['prediction']) ``` If you're not familiar with confusion matrixes, **here is an explanation of the values:** <img src="./media/confusion_matrix.jpg" alt="drawing" width="500"/> These values don't seem to be good. Let's once again take a look on the client's requirements and see if we still meet them: > A minimum 50% success rate for searches (when a car is searched, it should be at least 50% likely that contraband is found) ``` def verify_success_rate_above(y_true, y_pred, min_success_rate=0.5): """ Verifies the success rate on a test set is above a provided minimum """ precision = precision_score(y_true, y_pred, pos_label=True) is_satisfied = (precision >= min_success_rate) return is_satisfied, precision verify_success_rate_above(df['ContrabandIndicator'], df['prediction'], 0.5) ``` ![No please](./media/no_please.jpg) > The largest possible amount of contraband found, given the constraints above. As the client says, their model recall was 0.893. And what now? ``` def verify_amount_found(y_true, y_pred): """ Verifies the amout of contraband found in the test dataset - a.k.a the recall in our test set """ recall = recall_score(y_true, y_pred, pos_label=True) return recall verify_amount_found(df['ContrabandIndicator'], df['prediction']) ``` <img src="./media/no_please_2.jpg" alt="drawing" width="500"/> Okay, relax, it happens. Let's start from checking different thresholds. Maybe the selected threshold was to specific and doesn't work anymore. What about 0.25? 
``` threshold = 0.25 df['prediction'] = [1 if p >= threshold else 0 for p in df['proba']] verify_success_rate_above(df['ContrabandIndicator'], df['prediction'], 0.5) verify_amount_found(df['ContrabandIndicator'], df['prediction']) ``` <img src="./media/poker.jpg" alt="drawing" width="200"/> Okay, let's try the same technique to identify the best threshold as they originally did. Maybe we find something good enough. It's not a good idea to verify such things on the test data, but we're going to use it just to confirm the model's performance, not to select the threshold. ``` precision, recall, thresholds = precision_recall_curve(df['ContrabandIndicator'], df['proba']) precision = precision[:-1] recall = recall[:-1] fig=plt.figure() ax1 = plt.subplot(211) ax2 = plt.subplot(212) ax1.hlines(y=0.5,xmin=0, xmax=1, colors='red') ax1.plot(thresholds,precision) ax2.plot(thresholds,recall) ax1.get_shared_x_axes().join(ax1, ax2) ax1.set_xticklabels([]) plt.xlabel('Threshold') ax1.set_title('Precision') ax2.set_title('Recall') plt.show() ``` So what do we see? There is some threshold value (around 0.6) that gives us precision >= 0.5. But the threshold is so big, that the recall at this point is really-really low. Let's calculate the exact values: ``` min_index = [i for i, prec in enumerate(precision) if prec >= 0.5][0] print(min_index) thresholds[min_index] precision[min_index] recall[min_index] ``` <img src="./media/incredible.jpg" alt="drawing" width="400"/> Before we move on, we need to understand why this happens, so that we can decide what kind of action to perform. Let's try to analyze the changes in data and discuss different things we might want to do. ``` old_df = pd.read_csv('./data/train_searched.csv') old_df.head() ``` We're going to apply the same changes to the dataset as in the original model notebook unit to understand what was the original data like and how the current dataset differs. ``` old_df = old_df[(old_df['VehicleSearchedIndicator']==True)] # lowercaes departments and location names old_df['Department Name'] = old_df['Department Name'].apply(lambda x: str(x).lower()) old_df['InterventionLocationName'] = old_df['InterventionLocationName'].apply(lambda x: str(x).lower()) train_features = old_df.columns.drop(['VehicleSearchedIndicator', 'ContrabandIndicator']) categorical_features = train_features.drop(['InterventionDateTime', 'SubjectAge']) numerical_features = ['SubjectAge'] target = 'ContrabandIndicator' # I'm going to remove less common features. # Let's create a dictionary with the minimum required number of appearences min_frequency = { "Department Name": 50, "InterventionLocationName": 50, "ReportingOfficerIdentificationID": 30, "StatuteReason": 10 } def filter_values(df: pd.DataFrame, column_name: str, threshold: int): value_counts = df[column_name].value_counts() to_keep = value_counts[value_counts > threshold].index filtered = df[df[column_name].isin(to_keep)] return filtered for feature, threshold in min_frequency.items(): old_df = filter_values(old_df, feature, threshold) old_df.shape old_df.head() old_df['ContrabandIndicator'].value_counts(normalize=True) df['ContrabandIndicator'].value_counts(normalize=True) ``` Looks like we got a bit more contraband now, and it's already a good sign: if the training data had a different target feature distribution than the test set, the model's predictions might have a different distribution as well. It's a good practice to have the same target feature distribution both in training and test sets. 
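One optional way to back up that observation (not in the original notebook) is to test whether the change in the `ContrabandIndicator` distribution is statistically significant rather than sampling noise, for example with a chi-square test on the old vs. new counts:

```
# Hedged sketch: chi-square test on the old vs. new target counts
from scipy.stats import chi2_contingency

contingency = pd.DataFrame({
    'old': old_df['ContrabandIndicator'].value_counts(),
    'new': df['ContrabandIndicator'].value_counts(),
})
chi2, p_value, dof, _ = chi2_contingency(contingency.values)
print(f"chi2={chi2:.2f}, p-value={p_value:.4f}")  # a small p-value suggests a real shift
```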
Let's investigate further ``` new_department_names = df['Department Name'].unique() old_department_names = old_df['Department Name'].unique() unknown_departments = [department for department in new_department_names if department not in old_department_names] len(unknown_departments) df[df['Department Name'].isin(unknown_departments)].shape ``` So we have 10 departments that the original model was not trained on, but they are only 23 rows from the test set. Let's repeat the same thing for the Intervention Location names ``` new_location_names = df['InterventionLocationName'].unique() old_location_names = old_df['InterventionLocationName'].unique() unknown_locations = [location for location in new_location_names if location not in old_location_names] len(unknown_locations) df[df['InterventionLocationName'].isin(unknown_locations)].shape[0] print('unknown locations: ', df[df['InterventionLocationName'].isin(unknown_locations)].shape[0] * 100 / df.shape[0], '%') ``` Alright, a bit more of unknown locations. We don't know if the feature was important for the model, so these 5.3% of unknown locations might be important or not. But it's worth keeping it in mind. **Here are a few ideas of what we could try to do:** 1. Reanalyze the filtered locations, e.g. filter more rare ones. 2. Create a new category for the rare locations 3. Analyze the unknown locations for containing typos Let's go further and take a look on the relation between department names and the number of contrabands they find. We're going to select the most common department names, and then see the percentage of contraband indicator in each one for the training and test sets ``` common_departments = df['Department Name'].value_counts().head(20).index departments_new = df[df['Department Name'].isin(common_departments)] departments_old = old_df[old_df['Department Name'].isin(common_departments)] pd.crosstab(departments_new['ContrabandIndicator'], departments_new['Department Name'], normalize="columns") pd.crosstab(departments_old['ContrabandIndicator'], departments_old['Department Name'], normalize="columns") ``` We can clearly see that some departments got a huge difference in the contraband indicator. E.g. Bridgeport used to have 93% of False contrabands, and now has only 62%. Similar situation with Danbury and New Haven. Why? Hard to say. There are really a lot of variables here. Maybe the departments got instructed on how to look for contraband. But we might need to retrain the model. Let's just finish reviewing other columns. ``` common_location = df['InterventionLocationName'].value_counts().head(20).index locations_new = df[df['InterventionLocationName'].isin(common_location)] locations_old = old_df[old_df['InterventionLocationName'].isin(common_location)] pd.crosstab(locations_new['ContrabandIndicator'], locations_new['InterventionLocationName'], normalize="columns") pd.crosstab(locations_old['ContrabandIndicator'], locations_old['InterventionLocationName'], normalize="columns") ``` What do we see? First of all, the InterventionLocationName and the Department Name are often same. It sounds pretty logic, as probably policeman's usually work in the area of their department. But we could try to create a feature saying whether InterventionLocationName is equal to the Department Name. Or maybe we could just get rid of one of them, if all the values are equal. What else? Well, There are similar changes in the Contraband distribution as in Department Name case. 
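To quantify that drift instead of eyeballing the crosstabs, a short sketch (not in the original notebook) can rank the common departments by how much their contraband rate changed, reusing the `departments_old` and `departments_new` frames created above:

```
# Contraband rate per department, old vs. new, and the difference between them
rate_old = departments_old.groupby('Department Name')['ContrabandIndicator'].mean()
rate_new = departments_new.groupby('Department Name')['ContrabandIndicator'].mean()

drift = (rate_new - rate_old).dropna().sort_values(ascending=False)
drift.head(10)   # departments whose contraband rate increased the most
```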
Let's move on: ``` pd.crosstab(df['ContrabandIndicator'], df['InterventionReasonCode'], normalize="columns") pd.crosstab(old_df['ContrabandIndicator'], old_df['InterventionReasonCode'], normalize="columns") ``` There are some small changes, but they don't seem to be significant. Especially that all the 3 values have around 33% of Contraband. Time for officers: ``` df['ReportingOfficerIdentificationID'].value_counts() filter_values(df, 'ReportingOfficerIdentificationID', 2)['ReportingOfficerIdentificationID'].nunique() ``` Well, looks like there are a lot of unique values for the officer id (1166 for 2000 records), and there are not so many common ones (only 206 officers have more than 2 rows in the dataset) so it doesn't make much sense to analyze it. Let's quickly go throw the rest of the columns: ``` df.columns rest = ['ResidentIndicator', 'SearchAuthorizationCode', 'StatuteReason', 'SubjectEthnicityCode', 'SubjectRaceCode', 'SubjectSexCode','TownResidentIndicator'] for col in rest: display(pd.crosstab(df['ContrabandIndicator'], df[col], normalize="columns")) display(pd.crosstab(old_df['ContrabandIndicator'], old_df[col], normalize="columns")) ``` We see that all the columns got changes, but they don't seem to be so significant as in the Departments cases. Anyway, it seems like we need to retrain the model. <img src="./media/retrain.jpg" alt="drawing" width="400"/> Retraining a model is always a decision we need to think about. Was this change in data constant, temporary or seasonal? In other words, do we expect the data distribution to stay as it is? To change back after Covid? To change from season to season? **Depending on that, we could retrain the model differently:** - **If it's a seasonality**, we might want to add features like season or month and train the same model to predict differently depending on the season. We could also investigate time-series classification algorithms. - **If it's something that is going to change back**, we might either train a new model for this particular period in case the current data distrubution changes were temporary. Otherwise, if we expect the data distribution change here and back from time to time (and we know these periods in advance), we could create a new feature that would help model understand which period it is. > E.g. if we had a task of predicting beer consumption and had a city that has a lot of football matches, we might add a feature like **football_championship** and make the model predict differently for this occasions. - **If the data distribution has simply changed and we know that it's never going to come back**, we can simply retrain the model. > But in some cases we have no idea why some changes appeared (e.g. in this case of departments having more contraband). - In this case it might be a good idea to train a new model on the new datast and create some monitoring for these features distribution, so we could react when things change. > So, in our case we don't know what was the reason of data distribution changes, so we'd like to train a model on the new dataset. > The only thing is the size of the dataset. Original dataset had around 50k rows, and our new set has only 2000. It's not enough to train a good model, so this time we're going to combine both the datasets and add a new feature helping model to distinguish between them. If we had more data, it would be probably better to train a completely new model. And we're done! <img src="./media/end.jpg" alt="drawing" width="400"/>
true
code
0.507141
null
null
null
null
# Profiling TensorFlow Multi GPU Multi Node Training Job with Amazon SageMaker Debugger This notebook will walk you through creating a TensorFlow training job with the SageMaker Debugger profiling feature enabled. It will create a multi GPU multi node training using Horovod. ### (Optional) Install SageMaker and SMDebug Python SDKs To use the new Debugger profiling features released in December 2020, ensure that you have the latest versions of SageMaker and SMDebug SDKs installed. Use the following cell to update the libraries and restarts the Jupyter kernel to apply the updates. ``` import sys import IPython install_needed = False # should only be True once if install_needed: print("installing deps and restarting kernel") !{sys.executable} -m pip install -U sagemaker smdebug IPython.Application.instance().kernel.do_shutdown(True) ``` ## 1. Create a Training Job with Profiling Enabled<a class="anchor" id="option-1"></a> You will use the standard [SageMaker Estimator API for Tensorflow](https://sagemaker.readthedocs.io/en/stable/frameworks/tensorflow/sagemaker.tensorflow.html#tensorflow-estimator) to create training jobs. To enable profiling, create a `ProfilerConfig` object and pass it to the `profiler_config` parameter of the `TensorFlow` estimator. ### Define parameters for distributed training This parameter tells SageMaker how to configure and run horovod. If you want to use more than 4 GPUs per node then change the process_per_host paramter accordingly. ``` distributions = { "mpi": { "enabled": True, "processes_per_host": 4, "custom_mpi_options": "-verbose -x HOROVOD_TIMELINE=./hvd_timeline.json -x NCCL_DEBUG=INFO -x OMPI_MCA_btl_vader_single_copy_mechanism=none", } } ``` ### Configure rules We specify the following rules: - loss_not_decreasing: checks if loss is decreasing and triggers if the loss has not decreased by a certain persentage in the last few iterations - LowGPUUtilization: checks if GPU is under-utilizated - ProfilerReport: runs the entire set of performance rules and create a final output report with further insights and recommendations. ``` from sagemaker.debugger import Rule, ProfilerRule, rule_configs rules = [ Rule.sagemaker(rule_configs.loss_not_decreasing()), ProfilerRule.sagemaker(rule_configs.LowGPUUtilization()), ProfilerRule.sagemaker(rule_configs.ProfilerReport()), ] ``` ### Specify a profiler configuration The following configuration will capture system metrics at 500 milliseconds. The system metrics include utilization per CPU, GPU, memory utilization per CPU, GPU as well I/O and network. Debugger will capture detailed profiling information from step 5 to step 15. This information includes Horovod metrics, dataloading, preprocessing, operators running on CPU and GPU. ``` from sagemaker.debugger import ProfilerConfig, FrameworkProfile profiler_config = ProfilerConfig( system_monitor_interval_millis=500, framework_profile_params=FrameworkProfile( local_path="/opt/ml/output/profiler/", start_step=5, num_steps=10 ), ) ``` ### Get the image URI The image that we will is dependent on the region that you are running this notebook in. ``` import boto3 session = boto3.session.Session() region = session.region_name image_uri = f"763104351884.dkr.ecr.{region}.amazonaws.com/tensorflow-training:2.3.1-gpu-py37-cu110-ubuntu18.04" ``` ### Define estimator To enable profiling, you need to pass the Debugger profiling configuration (`profiler_config`), a list of Debugger rules (`rules`), and the image URI (`image_uri`) to the estimator. 
Debugger enables monitoring and profiling while the SageMaker estimator requests a training job. ``` import sagemaker from sagemaker.tensorflow import TensorFlow estimator = TensorFlow( role=sagemaker.get_execution_role(), image_uri=image_uri, instance_count=2, instance_type="ml.p3.8xlarge", entry_point="tf-hvd-train.py", source_dir="entry_point", profiler_config=profiler_config, distribution=distributions, rules=rules, ) ``` ### Start training job The following `estimator.fit()` with `wait=False` argument initiates the training job in the background. You can proceed to run the dashboard or analysis notebooks. ``` estimator.fit(wait=False) ``` ## 2. Analyze Profiling Data Copy outputs of the following cell (`training_job_name` and `region`) to run the analysis notebooks `profiling_generic_dashboard.ipynb`, `analyze_performance_bottlenecks.ipynb`, and `profiling_interactive_analysis.ipynb`. ``` training_job_name = estimator.latest_training_job.name print(f"Training jobname: {training_job_name}") print(f"Region: {region}") ``` While the training is still in progress you can visualize the performance data in SageMaker Studio or in the notebook. Debugger provides utilities to plot system metrics in form of timeline charts or heatmaps. Checkout out the notebook [profiling_interactive_analysis.ipynb](analysis_tools/profiling_interactive_analysis.ipynb) for more details. In the following code cell we plot the total CPU and GPU utilization as timeseries charts. To visualize other metrics such as I/O, memory, network you simply need to extend the list passed to `select_dimension` and `select_events`. ### Install the SMDebug client library to use Debugger analysis tools ``` import pip def import_or_install(package): try: __import__(package) except ImportError: pip.main(["install", package]) import_or_install("smdebug") ``` ### Access the profiling data using the SMDebug `TrainingJob` utility class ``` from smdebug.profiler.analysis.notebook_utils.training_job import TrainingJob tj = TrainingJob(training_job_name, region) tj.wait_for_sys_profiling_data_to_be_available() ``` ### Plot time line charts The following code shows how to use the SMDebug `TrainingJob` object, refresh the object if new event files are available, and plot time line charts of CPU and GPU usage. ``` from smdebug.profiler.analysis.notebook_utils.timeline_charts import TimelineCharts system_metrics_reader = tj.get_systems_metrics_reader() system_metrics_reader.refresh_event_file_list() view_timeline_charts = TimelineCharts( system_metrics_reader, framework_metrics_reader=None, select_dimensions=["CPU", "GPU"], select_events=["total"], ) ``` ## 3. Download Debugger Profiling Report The `ProfilerReport()` rule creates an html report `profiler-report.html` with a summary of builtin rules and recommenades of next steps. You can find this report in your S3 bucket. ``` rule_output_path = estimator.output_path + estimator.latest_training_job.job_name + "/rule-output" print(f"You will find the profiler report in {rule_output_path}") ``` For more information about how to download and open the Debugger profiling report, see [SageMaker Debugger Profiling Report](https://docs.aws.amazon.com/sagemaker/latest/dg/debugger-profiling-report.html) in the SageMaker developer guide.
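As an optional final step (a sketch, not part of the original notebook), the report can be copied from S3 to the local notebook environment with the SageMaker SDK's `S3Downloader`; the exact subfolder layout of the rule output may vary, so check the downloaded directory for `profiler-report.html`.

```
# Hedged sketch: download the ProfilerReport rule output locally
from sagemaker.s3 import S3Downloader

S3Downloader.download(s3_uri=rule_output_path, local_path="profiler_report")
# The HTML summary is typically somewhere under:
# profiler_report/ProfilerReport/profiler-output/profiler-report.html
```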
true
code
0.353875
null
null
null
null
# 7.6 Transformerモデル(分類タスク用)の実装 - 本ファイルでは、クラス分類のTransformerモデルを実装します。 ※ 本章のファイルはすべてUbuntuでの動作を前提としています。Windowsなど文字コードが違う環境での動作にはご注意下さい。 # 7.6 学習目標 1. Transformerのモジュール構成を理解する 2. LSTMやRNNを使用せずCNNベースのTransformerで自然言語処理が可能な理由を理解する 3. Transformerを実装できるようになる # 事前準備 書籍の指示に従い、本章で使用するデータを用意します ``` import math import numpy as np import random import torch import torch.nn as nn import torch.nn.functional as F import torchtext # Setup seeds torch.manual_seed(1234) np.random.seed(1234) random.seed(1234) class Embedder(nn.Module): '''idで示されている単語をベクトルに変換します''' def __init__(self, text_embedding_vectors): super(Embedder, self).__init__() self.embeddings = nn.Embedding.from_pretrained( embeddings=text_embedding_vectors, freeze=True) # freeze=Trueによりバックプロパゲーションで更新されず変化しなくなります def forward(self, x): x_vec = self.embeddings(x) return x_vec # 動作確認 # 前節のDataLoaderなどを取得 from utils.dataloader import get_IMDb_DataLoaders_and_TEXT train_dl, val_dl, test_dl, TEXT = get_IMDb_DataLoaders_and_TEXT( max_length=256, batch_size=24) # ミニバッチの用意 batch = next(iter(train_dl)) # モデル構築 net1 = Embedder(TEXT.vocab.vectors) # 入出力 x = batch.Text[0] x1 = net1(x) # 単語をベクトルに print("入力のテンソルサイズ:", x.shape) print("出力のテンソルサイズ:", x1.shape) class PositionalEncoder(nn.Module): '''入力された単語の位置を示すベクトル情報を付加する''' def __init__(self, d_model=300, max_seq_len=256): super().__init__() self.d_model = d_model # 単語ベクトルの次元数 # 単語の順番(pos)と埋め込みベクトルの次元の位置(i)によって一意に定まる値の表をpeとして作成 pe = torch.zeros(max_seq_len, d_model) # GPUが使える場合はGPUへ送る、ここでは省略。実際に学習時には使用する # device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") # pe = pe.to(device) for pos in range(max_seq_len): for i in range(0, d_model, 2): pe[pos, i] = math.sin(pos / (10000 ** ((2 * i)/d_model))) pe[pos, i + 1] = math.cos(pos / (10000 ** ((2 * (i + 1))/d_model))) # 表peの先頭に、ミニバッチ次元となる次元を足す self.pe = pe.unsqueeze(0) # 勾配を計算しないようにする self.pe.requires_grad = False def forward(self, x): # 入力xとPositonal Encodingを足し算する # xがpeよりも小さいので、大きくする ret = math.sqrt(self.d_model)*x + self.pe return ret # 動作確認 # モデル構築 net1 = Embedder(TEXT.vocab.vectors) net2 = PositionalEncoder(d_model=300, max_seq_len=256) # 入出力 x = batch.Text[0] x1 = net1(x) # 単語をベクトルに x2 = net2(x1) print("入力のテンソルサイズ:", x1.shape) print("出力のテンソルサイズ:", x2.shape) class Attention(nn.Module): '''Transformerは本当はマルチヘッドAttentionですが、 分かりやすさを優先しシングルAttentionで実装します''' def __init__(self, d_model=300): super().__init__() # SAGANでは1dConvを使用したが、今回は全結合層で特徴量を変換する self.q_linear = nn.Linear(d_model, d_model) self.v_linear = nn.Linear(d_model, d_model) self.k_linear = nn.Linear(d_model, d_model) # 出力時に使用する全結合層 self.out = nn.Linear(d_model, d_model) # Attentionの大きさ調整の変数 self.d_k = d_model def forward(self, q, k, v, mask): # 全結合層で特徴量を変換 k = self.k_linear(k) q = self.q_linear(q) v = self.v_linear(v) # Attentionの値を計算する # 各値を足し算すると大きくなりすぎるので、root(d_k)で割って調整 weights = torch.matmul(q, k.transpose(1, 2)) / math.sqrt(self.d_k) # ここでmaskを計算 mask = mask.unsqueeze(1) weights = weights.masked_fill(mask == 0, -1e9) # softmaxで規格化をする normlized_weights = F.softmax(weights, dim=-1) # AttentionをValueとかけ算 output = torch.matmul(normlized_weights, v) # 全結合層で特徴量を変換 output = self.out(output) return output, normlized_weights class FeedForward(nn.Module): def __init__(self, d_model, d_ff=1024, dropout=0.1): '''Attention層から出力を単純に全結合層2つで特徴量を変換するだけのユニットです''' super().__init__() self.linear_1 = nn.Linear(d_model, d_ff) self.dropout = nn.Dropout(dropout) self.linear_2 = nn.Linear(d_ff, d_model) def forward(self, x): x = self.linear_1(x) x = self.dropout(F.relu(x)) x = self.linear_2(x) return x 
class TransformerBlock(nn.Module): def __init__(self, d_model, dropout=0.1): super().__init__() # LayerNormalization層 # https://pytorch.org/docs/stable/nn.html?highlight=layernorm self.norm_1 = nn.LayerNorm(d_model) self.norm_2 = nn.LayerNorm(d_model) # Attention層 self.attn = Attention(d_model) # Attentionのあとの全結合層2つ self.ff = FeedForward(d_model) # Dropout self.dropout_1 = nn.Dropout(dropout) self.dropout_2 = nn.Dropout(dropout) def forward(self, x, mask): # 正規化とAttention x_normlized = self.norm_1(x) output, normlized_weights = self.attn( x_normlized, x_normlized, x_normlized, mask) x2 = x + self.dropout_1(output) # 正規化と全結合層 x_normlized2 = self.norm_2(x2) output = x2 + self.dropout_2(self.ff(x_normlized2)) return output, normlized_weights # 動作確認 # モデル構築 net1 = Embedder(TEXT.vocab.vectors) net2 = PositionalEncoder(d_model=300, max_seq_len=256) net3 = TransformerBlock(d_model=300) # maskの作成 x = batch.Text[0] input_pad = 1 # 単語のIDにおいて、'<pad>': 1 なので input_mask = (x != input_pad) print(input_mask[0]) # 入出力 x1 = net1(x) # 単語をベクトルに x2 = net2(x1) # Positon情報を足し算 x3, normlized_weights = net3(x2, input_mask) # Self-Attentionで特徴量を変換 print("入力のテンソルサイズ:", x2.shape) print("出力のテンソルサイズ:", x3.shape) print("Attentionのサイズ:", normlized_weights.shape) class ClassificationHead(nn.Module): '''Transformer_Blockの出力を使用し、最後にクラス分類させる''' def __init__(self, d_model=300, output_dim=2): super().__init__() # 全結合層 self.linear = nn.Linear(d_model, output_dim) # output_dimはポジ・ネガの2つ # 重み初期化処理 nn.init.normal_(self.linear.weight, std=0.02) nn.init.normal_(self.linear.bias, 0) def forward(self, x): x0 = x[:, 0, :] # 各ミニバッチの各文の先頭の単語の特徴量(300次元)を取り出す out = self.linear(x0) return out # 動作確認 # ミニバッチの用意 batch = next(iter(train_dl)) # モデル構築 net1 = Embedder(TEXT.vocab.vectors) net2 = PositionalEncoder(d_model=300, max_seq_len=256) net3 = TransformerBlock(d_model=300) net4 = ClassificationHead(output_dim=2, d_model=300) # 入出力 x = batch.Text[0] x1 = net1(x) # 単語をベクトルに x2 = net2(x1) # Positon情報を足し算 x3, normlized_weights = net3(x2, input_mask) # Self-Attentionで特徴量を変換 x4 = net4(x3) # 最終出力の0単語目を使用して、分類0-1のスカラーを出力 print("入力のテンソルサイズ:", x3.shape) print("出力のテンソルサイズ:", x4.shape) # 最終的なTransformerモデルのクラス class TransformerClassification(nn.Module): '''Transformerでクラス分類させる''' def __init__(self, text_embedding_vectors, d_model=300, max_seq_len=256, output_dim=2): super().__init__() # モデル構築 self.net1 = Embedder(text_embedding_vectors) self.net2 = PositionalEncoder(d_model=d_model, max_seq_len=max_seq_len) self.net3_1 = TransformerBlock(d_model=d_model) self.net3_2 = TransformerBlock(d_model=d_model) self.net4 = ClassificationHead(output_dim=output_dim, d_model=d_model) def forward(self, x, mask): x1 = self.net1(x) # 単語をベクトルに x2 = self.net2(x1) # Positon情報を足し算 x3_1, normlized_weights_1 = self.net3_1( x2, mask) # Self-Attentionで特徴量を変換 x3_2, normlized_weights_2 = self.net3_2( x3_1, mask) # Self-Attentionで特徴量を変換 x4 = self.net4(x3_2) # 最終出力の0単語目を使用して、分類0-1のスカラーを出力 return x4, normlized_weights_1, normlized_weights_2 # 動作確認 # ミニバッチの用意 batch = next(iter(train_dl)) # モデル構築 net = TransformerClassification( text_embedding_vectors=TEXT.vocab.vectors, d_model=300, max_seq_len=256, output_dim=2) # 入出力 x = batch.Text[0] input_mask = (x != input_pad) out, normlized_weights_1, normlized_weights_2 = net(x, input_mask) print("出力のテンソルサイズ:", out.shape) print("出力テンソルのsigmoid:", F.softmax(out, dim=1)) ``` ここまでの内容をフォルダ「utils」のtransformer.pyに別途保存しておき、次節からはこちらから読み込むようにします 以上
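As an optional extra before saving (not part of the original text), the number of trainable versus frozen parameters of the `net` built above can be counted; the embedding table should appear in the frozen total because it was created with `freeze=True`.

```
# Optional check: trainable vs. frozen parameter counts of TransformerClassification
trainable = sum(p.numel() for p in net.parameters() if p.requires_grad)
frozen = sum(p.numel() for p in net.parameters() if not p.requires_grad)
print('trainable parameters:', trainable)
print('frozen parameters (embedding):', frozen)
```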
true
code
0.763043
null
null
null
null
# Zircon model training notebook; (extensively) modified from Detectron2 training tutorial This Colab Notebook will allow users to train new models to detect and segment detrital zircon from RL images using Detectron2 and the training dataset provided in the colab_zirc_dims repo. It is set up to train a Mask RCNN model (ResNet depth=101), but could be modified for other instance segmentation models provided that they are supported by Detectron2. The training dataset should be uploaded to the user's Google Drive before running this notebook. ## Install detectron2 ``` !pip install pyyaml==5.1 import torch TORCH_VERSION = ".".join(torch.__version__.split(".")[:2]) CUDA_VERSION = torch.__version__.split("+")[-1] print("torch: ", TORCH_VERSION, "; cuda: ", CUDA_VERSION) # Install detectron2 that matches the above pytorch version # See https://detectron2.readthedocs.io/tutorials/install.html for instructions !pip install detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/$CUDA_VERSION/torch$TORCH_VERSION/index.html exit(0) # Automatically restarts runtime after installation # Some basic setup: # Setup detectron2 logger import detectron2 from detectron2.utils.logger import setup_logger setup_logger() # import some common libraries import numpy as np import os, json, cv2, random from google.colab.patches import cv2_imshow import copy import time import datetime import logging import random import shutil import torch # import some common detectron2 utilities from detectron2.engine.hooks import HookBase from detectron2 import model_zoo from detectron2.evaluation import inference_context, COCOEvaluator from detectron2.engine import DefaultPredictor from detectron2.config import get_cfg from detectron2.utils.visualizer import Visualizer from detectron2.utils.logger import log_every_n_seconds from detectron2.data import MetadataCatalog, DatasetCatalog, build_detection_train_loader, DatasetMapper, build_detection_test_loader import detectron2.utils.comm as comm from detectron2.data import detection_utils as utils from detectron2.config import LazyConfig import detectron2.data.transforms as T ``` ## Define Augmentations The cell below defines augmentations used while training to ensure that models never see the same exact image twice during training. This mitigates overfitting and allows models to achieve substantially higher accuracy in their segmentations/measurements. 
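Before defining the full augmentation list in the next cell, the sketch below shows one hypothetical way to preview what Detectron2 augmentations do to a single image using `AugInput` and `AugmentationList`; the image path is a placeholder, and the two transforms are only a subset of the kinds used for training.

```
# Hypothetical preview of augmentations on one image (path is a placeholder)
import cv2
import detectron2.data.transforms as T
from google.colab.patches import cv2_imshow

preview_augs = T.AugmentationList([
    T.RandomFlip(prob=0.5, horizontal=True, vertical=False),
    T.RandomRotation([-30, 30], expand=False),
])

img = cv2.imread('/content/drive/MyDrive/training_dataset/train/example.png')  # replace with a real image
aug_input = T.AugInput(img)
_ = preview_augs(aug_input)   # applies the transforms; aug_input.image is updated in place
cv2_imshow(aug_input.image)
```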
``` custom_transform_list = [T.ResizeShortestEdge([800,800]), #resize shortest edge of image to 800 pixels T.RandomCrop('relative', (0.95, 0.95)), #randomly crop an area (95% size of original) from image T.RandomLighting(100), #minor lighting randomization T.RandomContrast(.85, 1.15), #minor contrast randomization T.RandomFlip(prob=.5, horizontal=False, vertical=True), #random vertical flipping T.RandomFlip(prob=.5, horizontal=True, vertical=False), #and horizontal flipping T.RandomApply(T.RandomRotation([-30, 30], False), prob=.8), #random (80% probability) rotation up to 30 degrees; \ # more rotation does not seem to improve results T.ResizeShortestEdge([800,800])] # resize img again for uniformity ``` ## Mount Google Drive, set paths to dataset, model saving directories ``` from google.colab import drive drive.mount('/content/drive') #@markdown ### Add path to training dataset directory dataset_dir = '/content/drive/MyDrive/training_dataset' #@param {type:"string"} #@markdown ### Add path to model saving directory (automatically created if it does not yet exist) model_save_dir = '/content/drive/MyDrive/NAME FOR MODEL SAVING FOLDER HERE' #@param {type:"string"} os.makedirs(model_save_dir, exist_ok=True) ``` ## Define dataset mapper, training, loss eval functions ``` from detectron2.engine import DefaultTrainer from detectron2.data import DatasetMapper from detectron2.structures import BoxMode # a function to convert Via image annotation .json dict format to Detectron2 \ # training input dict format def get_zircon_dicts(img_dir): json_file = os.path.join(img_dir, "via_region_data.json") with open(json_file) as f: imgs_anns = json.load(f)['_via_img_metadata'] dataset_dicts = [] for idx, v in enumerate(imgs_anns.values()): record = {} filename = os.path.join(img_dir, v["filename"]) height, width = cv2.imread(filename).shape[:2] record["file_name"] = filename record["image_id"] = idx record["height"] = height record["width"] = width #annos = v["regions"] annos = {} for n, eachitem in enumerate(v['regions']): annos[str(n)] = eachitem objs = [] for _, anno in annos.items(): #assert not anno["region_attributes"] anno = anno["shape_attributes"] px = anno["all_points_x"] py = anno["all_points_y"] poly = [(x + 0.5, y + 0.5) for x, y in zip(px, py)] poly = [p for x in poly for p in x] obj = { "bbox": [np.min(px), np.min(py), np.max(px), np.max(py)], "bbox_mode": BoxMode.XYXY_ABS, "segmentation": [poly], "category_id": 0, } objs.append(obj) record["annotations"] = objs dataset_dicts.append(record) return dataset_dicts # loss eval hook for getting vaidation loss, copying to metrics.json; \ # from https://gist.github.com/ortegatron/c0dad15e49c2b74de8bb09a5615d9f6b class LossEvalHook(HookBase): def __init__(self, eval_period, model, data_loader): self._model = model self._period = eval_period self._data_loader = data_loader def _do_loss_eval(self): # Copying inference_on_dataset from evaluator.py total = len(self._data_loader) num_warmup = min(5, total - 1) start_time = time.perf_counter() total_compute_time = 0 losses = [] for idx, inputs in enumerate(self._data_loader): if idx == num_warmup: start_time = time.perf_counter() total_compute_time = 0 start_compute_time = time.perf_counter() if torch.cuda.is_available(): torch.cuda.synchronize() total_compute_time += time.perf_counter() - start_compute_time iters_after_start = idx + 1 - num_warmup * int(idx >= num_warmup) seconds_per_img = total_compute_time / iters_after_start if idx >= num_warmup * 2 or seconds_per_img > 5: total_seconds_per_img = 
(time.perf_counter() - start_time) / iters_after_start eta = datetime.timedelta(seconds=int(total_seconds_per_img * (total - idx - 1))) log_every_n_seconds( logging.INFO, "Loss on Validation done {}/{}. {:.4f} s / img. ETA={}".format( idx + 1, total, seconds_per_img, str(eta) ), n=5, ) loss_batch = self._get_loss(inputs) losses.append(loss_batch) mean_loss = np.mean(losses) self.trainer.storage.put_scalar('validation_loss', mean_loss) comm.synchronize() return losses def _get_loss(self, data): # How loss is calculated on train_loop metrics_dict = self._model(data) metrics_dict = { k: v.detach().cpu().item() if isinstance(v, torch.Tensor) else float(v) for k, v in metrics_dict.items() } total_losses_reduced = sum(loss for loss in metrics_dict.values()) return total_losses_reduced def after_step(self): next_iter = self.trainer.iter + 1 is_final = next_iter == self.trainer.max_iter if is_final or (self._period > 0 and next_iter % self._period == 0): self._do_loss_eval() #trainer for zircons which incorporates augmentation, hooks for eval class ZirconTrainer(DefaultTrainer): @classmethod def build_train_loader(cls, cfg): #return a custom train loader with augmentations; recompute_boxes \ # is important given cropping, rotation augs return build_detection_train_loader(cfg, mapper= DatasetMapper(cfg, is_train=True, recompute_boxes = True, augmentations = custom_transform_list ), ) @classmethod def build_evaluator(cls, cfg, dataset_name, output_folder=None): if output_folder is None: output_folder = os.path.join(cfg.OUTPUT_DIR, "inference") return COCOEvaluator(dataset_name, cfg, True, output_folder) #set up validation loss eval hook def build_hooks(self): hooks = super().build_hooks() hooks.insert(-1,LossEvalHook( cfg.TEST.EVAL_PERIOD, self.model, build_detection_test_loader( self.cfg, self.cfg.DATASETS.TEST[0], DatasetMapper(self.cfg,True) ) )) return hooks ``` ## Import train, val catalogs ``` #registers training, val datasets (converts annotations using get_zircon_dicts) for d in ["train", "val"]: DatasetCatalog.register("zircon_" + d, lambda d=d: get_zircon_dicts(dataset_dir + "/" + d)) MetadataCatalog.get("zircon_" + d).set(thing_classes=["zircon"]) zircon_metadata = MetadataCatalog.get("zircon_train") train_cat = DatasetCatalog.get("zircon_train") ``` ## Visualize train dataset ``` # visualize random sample from training dataset dataset_dicts = get_zircon_dicts(os.path.join(dataset_dir, 'train')) for d in random.sample(dataset_dicts, 4): #change int here to change sample size img = cv2.imread(d["file_name"]) visualizer = Visualizer(img[:, :, ::-1], metadata=zircon_metadata, scale=0.5) out = visualizer.draw_dataset_dict(d) cv2_imshow(out.get_image()[:, :, ::-1]) ``` # Define save to Drive function ``` # a function to save models (with iteration number in name), metrics to drive; \ # important in case training crashes or is left unattended and disconnects. 
\ def save_outputs_to_drive(model_name, iters): root_output_dir = os.path.join(model_save_dir, model_name) #output_dir = save dir from user input #creates individual model output directory if it does not already exist os.makedirs(root_output_dir, exist_ok=True) #creates a name for this version of model; include iteration number curr_iters_str = str(round(iters/1000, 1)) + 'k' curr_model_name = model_name + '_' + curr_iters_str + '.pth' model_save_pth = os.path.join(root_output_dir, curr_model_name) #get most recent model, current metrics, copy to drive model_path = os.path.join(cfg.OUTPUT_DIR, "model_final.pth") metrics_path = os.path.join(cfg.OUTPUT_DIR, 'metrics.json') shutil.copy(model_path, model_save_pth) shutil.copy(metrics_path, root_output_dir) ``` ## Build, train model ### Set some parameters for training ``` #@markdown ### Add a base name for the model model_save_name = 'your model name here' #@param {type:"string"} #@markdown ### Final iteration before training stops final_iteration = 8000 #@param {type:"slider", min:3000, max:15000, step:1000} ``` ### Actually build and train model ``` #train from a pre-trained Mask RCNN model cfg = get_cfg() # train from base model: Default Mask RCNN cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_101_FPN_3x.yaml")) # Load starting weights (COCO trained) from Detectron2 model zoo. cfg.MODEL.WEIGHTS = "https://dl.fbaipublicfiles.com/detectron2/COCO-InstanceSegmentation/mask_rcnn_R_101_FPN_3x/138205316/model_final_a3ec72.pkl" cfg.DATASETS.TRAIN = ("zircon_train",) #load training dataset cfg.DATASETS.TEST = ("zircon_val",) # load validation dataset cfg.DATALOADER.NUM_WORKERS = 2 cfg.SOLVER.IMS_PER_BATCH = 2 #2 ims per batch seems to be good for model generalization cfg.SOLVER.BASE_LR = 0.00025 # low but reasonable learning rate given pre-training; \ # by default initializes with a 1000 iteration warmup cfg.SOLVER.MAX_ITER = 2000 #train for 2000 iterations before 1st save cfg.SOLVER.GAMMA = 0.5 #decay learning rate by factor of GAMMA every 1000 iterations after 2000 iterations \ # and until 10000 iterations This works well for current version of training \ # dataset but should be modified (probably a longer interval) if dataset is ever\ # extended. cfg.SOLVER.STEPS = (1999, 2999, 3999, 4999, 5999, 6999, 7999, 8999, 9999) cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 512 # use default ROI heads batch size cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1 # only class here is zircon cfg.MODEL.RPN.NMS_THRESH = 0.1 #sets NMS threshold lower than default; should(?) eliminate overlapping regions cfg.TEST.EVAL_PERIOD = 200 # validation eval every 200 iterations os.makedirs(cfg.OUTPUT_DIR, exist_ok=True) trainer = ZirconTrainer(cfg) #our zircon trainer, w/ built-in augs and val loss eval trainer.resume_or_load(resume=False) trainer.train() #start training # stop training and save for the 1st time after 2000 iterations save_outputs_to_drive(model_save_name, 2000) # Saves, cold restarts training from saved model weights every 1000 iterations \ # until final iteration. This should probably be done via hooks without stopping \ # training but *seems* to produce faster decrease in validation loss. 
for each_iters in [iter*1000 for iter in list(range(3, int(final_iteration/1000) + 1, 1))]: #reload model with last iteration model weights resume_model_path = os.path.join(cfg.OUTPUT_DIR, "model_final.pth") cfg.MODEL.WEIGHTS = resume_model_path cfg.SOLVER.MAX_ITER = each_iters #increase max iterations trainer = ZirconTrainer(cfg) trainer.resume_or_load(resume=True) trainer.train() #restart training #save again save_outputs_to_drive(model_save_name, each_iters) # open tensorboard training metrics curves (metrics.json): %load_ext tensorboard %tensorboard --logdir output ``` ## Inference & evaluation with final trained model Initialize model from saved weights: ``` cfg.MODEL.WEIGHTS = os.path.join(cfg.OUTPUT_DIR, "model_final.pth") # final model; modify path to other non-final model to view their segmentations cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5 # set a custom testing threshold cfg.MODEL.RPN.NMS_THRESH = 0.1 predictor = DefaultPredictor(cfg) ``` View model segmentations for random sample of images from zircon validation dataset: ``` from detectron2.utils.visualizer import ColorMode dataset_dicts = get_zircon_dicts(os.path.join(dataset_dir, 'val')) for d in random.sample(dataset_dicts, 5): im = cv2.imread(d["file_name"]) outputs = predictor(im) # format is documented at https://detectron2.readthedocs.io/tutorials/models.html#model-output-format v = Visualizer(im[:, :, ::-1], metadata=zircon_metadata, scale=1.5, instance_mode=ColorMode.IMAGE_BW # remove the colors of unsegmented pixels. This option is only available for segmentation models ) out = v.draw_instance_predictions(outputs["instances"].to("cpu")) cv2_imshow(out.get_image()[:, :, ::-1]) ``` Validation eval with COCO API metric: ``` from detectron2.evaluation import COCOEvaluator, inference_on_dataset from detectron2.data import build_detection_test_loader evaluator = COCOEvaluator("zircon_val", ("bbox", "segm"), False, output_dir="./output/") val_loader = build_detection_test_loader(cfg, "zircon_val") print(inference_on_dataset(trainer.model, val_loader, evaluator)) ``` ## Final notes: To use newly-trained models in colab_zirc_dims: #### Option A: Modify the cell that initializes model(s) in colab_zirc_dims processing notebooks: ``` cfg.merge_from_file(model_zoo.get_config_file(DETECTRON2 BASE CONFIG FILE LINK FOR YOUR MODEL HERE)) cfg.MODEL.RESNETS.DEPTH = RESNET DEPTH FOR YOUR MODEL (E.G., 101) HERE cfg.MODEL.WEIGHTS = PATH TO YOUR MODEL IN YOUR GOOGLE DRIVE HERE ``` #### Option B (more complicated but potentially useful for many models): The dynamic model selection tool in colab_zirc_dims is populated from a .json file model library dictionary, which is by default [the current version on the GitHub repo.](https://github.com/MCSitar/colab_zirc_dims/blob/main/czd_model_library.json) The 'url' key in the dict will work with either an AWS download link for the model or the path to model in your Google Drive. To use a custom model library dictionary: Modify a copy of the colab_zirc_dims [.json file model library dictionary](https://github.com/MCSitar/colab_zirc_dims/blob/main/czd_model_library.json) to include download link(s)/Drive path(s) and metadata (e.g., resnet depth and config file) for your model(s). Upload this .json file to your Google Drive and change the 'model_lib_loc' variable in a processing Notebook to the .json's path for dynamic download and loading of this and other models within the Notebook.
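For orientation, building such a custom library file might look roughly like the sketch below. This is illustrative only: apart from the 'url' key mentioned above, the field names here are assumptions, so mirror the actual schema of the current czd_model_library.json on the GitHub repo before using it.

```
# Illustrative sketch only -- field names other than "url" are assumptions;
# copy the real schema from czd_model_library.json on the GitHub repo.
import json

custom_model_lib = {
    "my_zircon_model_8.0k": {
        "url": "/content/drive/MyDrive/MODEL FOLDER/my_zircon_model_8.0k.pth",  # Drive path or AWS link
        "resnet_depth": 101,
        "config_file": "COCO-InstanceSegmentation/mask_rcnn_R_101_FPN_3x.yaml",
    }
}

with open("my_model_library.json", "w") as f:
    json.dump(custom_model_lib, f, indent=2)
```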
``` import torch import torch.nn as nn import numpy as np import matplotlib.pyplot as plt ``` # Pytorch: An automatic differentiation tool `Pytorch`를 활용하면 복잡한 함수의 미분을 손쉽게 + 효율적으로 계산할 수 있습니다! `Pytorch`를 활용해서 복잡한 심층 신경망을 훈련할 때, 오차함수에 대한 파라미터의 편미분치를 계산을 손쉽게 수행할수 있습니다! ## Pytorch 첫만남 우리에게 아래와 같은 간단한 선형식이 주어져있다고 생각해볼까요? $$ y = wx $$ 그러면 $\frac{\partial y}{\partial w}$ 을 어떻게 계산 할 수 있을까요? 일단 직접 미분을 해보면$\frac{\partial y}{\partial w} = x$ 이 되니, 간단한 예제에서 `pytorch`로 해당 값을 계산하는 방법을 알아보도록 합시다! ``` # 랭크1 / 사이즈1 이며 값은 1*2 인 pytorch tensor를 하나 만듭니다. x = torch.ones(1) * 2 # 랭크1 / 사이즈1 이며 값은 1 인 pytorch tensor를 하나 만듭니다. w = torch.ones(1, requires_grad=True) y = w * x y ``` ## 편미분 계산하기! pytorch에서는 미분값을 계산하고 싶은 텐서에 `.backward()` 를 붙여주는 것으로, 해당 텐서 계산에 연결 되어있는 텐서 중 `gradient`를 계산해야하는 텐서(들)에 대한 편미분치들을 계산할수 있습니다. `requires_grad=True`를 통해서 어떤 텐서에 미분값을 계산할지 할당해줄 수 있습니다. ``` y.backward() ``` ## 편미분값 확인하기! `텐서.grad` 를 활용해서 특정 텐서의 gradient 값을 확인해볼 수 있습니다. 한번 `w.grad`를 활용해서 `y` 에 대한 `w`의 편미분값을 확인해볼까요? ``` w.grad ``` ## 그러면 requires_grad = False 인 경우는? ``` x.grad ``` ## `torch.nn`, Neural Network 패키지 `pytorch`에는 이미 다양한 neural network들의 모듈들을 구현해 놓았습니다. 그 중에 가장 간단하지만 정말 자주 쓰이는 `nn.Linear` 에 대해 알아보면서 `pytorch`의 `nn.Module`에 대해서 알아보도록 합시다. ## `nn.Linear` 돌아보기 `nn.Linear` 은 앞서 배운 선형회귀 및 다층 퍼셉트론 모델의 한 층에 해당하는 파라미터 $w$, $b$ 를 가지고 있습니다. 예시로 입력의 dimension 이 10이고 출력의 dimension 이 1인 `nn.Linear` 모듈을 만들어 봅시다! ``` lin = nn.Linear(in_features=10, out_features=1) for p in lin.parameters(): print(p) print(p.shape) print('\n') ``` ## `Linear` 모듈로 $y = Wx+b$ 계산하기 선형회귀식도 그랬지만, 다층 퍼셉트론 모델도 하나의 레이어는 아래의 수식을 계산했던 것을 기억하시죠? $$y = Wx+b$$ `nn.Linear`를 활용해서 저 수식을 계산해볼까요? 검산을 쉽게 하기 위해서 W의 값은 모두 1.0 으로 b 는 5.0 으로 만들어두겠습니다. ``` lin.weight.data = torch.ones_like(lin.weight.data) lin.bias.data = torch.ones_like(lin.bias.data) * 5.0 for p in lin.parameters(): print(p) print(p.shape) print('\n') x = torch.ones(3, 10) # rank2 tensor를 만듭니다. : mini batch size = 3 y_hat = lin(x) print(y_hat.shape) print(y_hat) ``` ## 지금 무슨일이 일어난거죠? >Q1. 왜 Rank 2 tensor 를 입력으로 사용하나요? <br> >A1. 파이토치의 `nn` 에 정의되어있는 클래스들은 입력의 가장 첫번째 디멘젼을 `배치 사이즈`로 해석합니다. >Q2. lin(x) 는 도대체 무엇인가요? <br> >A2. 파이썬에 익숙하신 분들은 `object()` 는 `object.__call__()`에 정의되어있는 함수를 실행시키신다는 것을 아실텐데요. 파이토치의 `nn.Module`은 `__call__()`을 오버라이드하는 함수인 `forward()`를 구현하는 것을 __권장__ 하고 있습니다. 일반적으로, `forward()`안에서 실제로 파라미터와 인풋을 가지고 특정 레이어의 연산과 정을 구현하게 됩니다. 여러가지 이유가 있겠지만, 파이토치가 내부적으로 foward() 의 실행의 전/후로 사용자 친화적인 환경을 제공하기위해서 추가적인 작업들을 해줍니다. 이 부분은 다음 실습에서 다층 퍼셉트론 모델을 만들면서 조금 더 자세히 설명해볼게요! ## Pytorch 로 간단히! 선형회귀 구현하기 저번 실습에서 numpy 로 구현했던 Linear regression 모델을 다시 한번 파이토치로 구현해볼까요? <br> 몇 줄이면 끝날 정도로 간단합니다 :) ``` def generate_samples(n_samples: int, w: float = 1.0, b: float = 0.5, x_range=[-1.0,1.0]): xs = np.random.uniform(low=x_range[0], high=x_range[1], size=n_samples) ys = w * xs + b xs = torch.tensor(xs).view(-1,1).float() # 파이토치 nn.Module 은 배치가 첫 디멘젼! ys = torch.tensor(ys).view(-1,1).float() return xs, ys w = 1.0 b = 0.5 xs, ys = generate_samples(30, w=w, b=b) lin_model = nn.Linear(in_features=1, out_features=1) # lim_model 생성 for p in lin_model.parameters(): print(p) print(p.grad) ys_hat = lin_model(xs) # lin_model 로 예측하기 ``` ## Loss 함수는? MSE! `pytorch`에서는 자주 쓰이는 loss 함수들에 대해서도 미리 구현을 해두었습니다. 이번 실습에서는 __numpy로 선형회귀 모델 만들기__ 에서 사용됐던 MSE 를 오차함수로 사용해볼까요? ``` criteria = nn.MSELoss() loss = criteria(ys_hat, ys) ``` ## 경사하강법을 활용해서 파라미터 업데이트하기! `pytorch`는 여러분들을 위해서 다양한 optimizer들을 구현해 두었습니다. 일단은 가장 간단한 stochastic gradient descent (SGD)를 활용해 볼까요? 
optimizer에 따라서 다양한 인자들을 활용하지만 기본적으로 `params` 와 `lr`을 지정해주면 나머지는 optimizer 마다 잘되는 것으로 알려진 인자들로 optimizer을 손쉽게 생성할수 있습니다. ``` opt = torch.optim.SGD(params=lin_model.parameters(), lr=0.01) ``` ## 잊지마세요! opt.zero_grad() `pytorch`로 편미분을 계산하기전에, 꼭 `opt.zero_grad()` 함수를 이용해서 편미분 계산이 필요한 텐서들의 편미분값을 초기화 해주는 것을 권장드립니다. ``` opt.zero_grad() for p in lin_model.parameters(): print(p) print(p.grad) loss.backward() opt.step() for p in lin_model.parameters(): print(p) print(p.grad) ``` ## 경사하강법을 활용해서 최적 파라미터를 찾아봅시다! ``` def run_sgd(n_steps: int = 1000, report_every: int = 100, verbose=True): lin_model = nn.Linear(in_features=1, out_features=1) opt = torch.optim.SGD(params=lin_model.parameters(), lr=0.01) sgd_losses = [] for i in range(n_steps): ys_hat = lin_model(xs) loss = criteria(ys_hat, ys) opt.zero_grad() loss.backward() opt.step() if i % report_every == 0: if verbose: print('\n') print("{}th update: {}".format(i,loss)) for p in lin_model.parameters(): print(p) sgd_losses.append(loss.log10().detach().numpy()) return sgd_losses _ = run_sgd() ``` ## 다른 Optimizer도 사용해볼까요? 수업시간에 배웠던 Adam 으로 최적화를 하면 어떤결과가 나올까요? ``` def run_adam(n_steps: int = 1000, report_every: int = 100, verbose=True): lin_model = nn.Linear(in_features=1, out_features=1) opt = torch.optim.Adam(params=lin_model.parameters(), lr=0.01) adam_losses = [] for i in range(n_steps): ys_hat = lin_model(xs) loss = criteria(ys_hat, ys) opt.zero_grad() loss.backward() opt.step() if i % report_every == 0: if verbose: print('\n') print("{}th update: {}".format(i,loss)) for p in lin_model.parameters(): print(p) adam_losses.append(loss.log10().detach().numpy()) return adam_losses _ = run_adam() ``` ## 좀 더 상세하게 비교해볼까요? `pytorch`에서 `nn.Linear`를 비롯한 많은 모듈들은 특별한 경우가 아닌이상, 모듈내에 파라미터가 임의의 값으로 __잘!__ 초기화 됩니다. > "잘!" 에 대해서는 수업에서 다루지 않았지만, 확실히 현대 딥러닝이 잘 작동하게 하는 중요한 요소중에 하나입니다. Parameter initialization 이라고 부르는 기법들이며, 대부분의 `pytorch` 모듈들은 각각의 모듈에 따라서 일반적으로 잘 작동하는것으로 알려져있는 방식으로 파라미터들이 초기화 되게 코딩되어 있습니다. 그래서 매 번 모듈을 생성할때마다 파라미터의 초기값이 달라지게 됩니다. 이번에는 조금 공정한 비교를 위해서 위에서 했던 실험을 여러번 반복해서 평균적으로도 Adam이 좋은지 확인해볼까요? ``` sgd_losses = [run_sgd(verbose=False) for _ in range(50)] sgd_losses = np.stack(sgd_losses) sgd_loss_mean = np.mean(sgd_losses, axis=0) sgd_loss_std = np.std(sgd_losses, axis=-0) adam_losses = [run_adam(verbose=False) for _ in range(50)] adam_losses = np.stack(adam_losses) adam_loss_mean = np.mean(adam_losses, axis=0) adam_loss_std = np.std(adam_losses, axis=-0) fig, ax = plt.subplots(1,1, figsize=(10,5)) ax.grid() ax.fill_between(x=range(sgd_loss_mean.shape[0]), y1=sgd_loss_mean + sgd_loss_std, y2=sgd_loss_mean - sgd_loss_std, alpha=0.3) ax.plot(sgd_loss_mean, label='SGD') ax.fill_between(x=range(adam_loss_mean.shape[0]), y1=adam_loss_mean + adam_loss_std, y2=adam_loss_mean - adam_loss_std, alpha=0.3) ax.plot(adam_loss_mean, label='Adam') ax.legend() ```
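One practical aside, not part of the original exercise: because each `nn.Linear` is randomly initialized, individual runs of `run_sgd` and `run_adam` are not reproducible by default. If you want repeatable curves while experimenting, you can fix the random seed before each call; a minimal sketch:

```
# Minimal reproducibility sketch: fixing the seed before each call makes the
# random parameter initialization inside run_sgd / run_adam repeatable.
import torch

torch.manual_seed(42)
sgd_curve = run_sgd(verbose=False)

torch.manual_seed(42)
adam_curve = run_adam(verbose=False)
```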
# Callbacks and Multiple inputs ``` import pandas as pd import numpy as np %matplotlib inline import matplotlib.pyplot as plt from sklearn.preprocessing import scale from keras.optimizers import SGD from keras.layers import Dense, Input, concatenate, BatchNormalization from keras.callbacks import EarlyStopping, TensorBoard, ModelCheckpoint from keras.models import Model import keras.backend as K df = pd.read_csv("../data/titanic-train.csv") Y = df['Survived'] df.info() df.head() num_features = df[['Age', 'Fare', 'SibSp', 'Parch']].fillna(0) num_features.head() cat_features = pd.get_dummies(df[['Pclass', 'Sex', 'Embarked']].astype('str')) cat_features.head() X1 = scale(num_features.values) X2 = cat_features.values K.clear_session() # Numerical features branch inputs1 = Input(shape = (X1.shape[1],)) b1 = BatchNormalization()(inputs1) b1 = Dense(3, kernel_initializer='normal', activation = 'tanh')(b1) b1 = BatchNormalization()(b1) # Categorical features branch inputs2 = Input(shape = (X2.shape[1],)) b2 = Dense(8, kernel_initializer='normal', activation = 'relu')(inputs2) b2 = BatchNormalization()(b2) b2 = Dense(4, kernel_initializer='normal', activation = 'relu')(b2) b2 = BatchNormalization()(b2) b2 = Dense(2, kernel_initializer='normal', activation = 'relu')(b2) b2 = BatchNormalization()(b2) merged = concatenate([b1, b2]) preds = Dense(1, activation = 'sigmoid')(merged) # final model model = Model([inputs1, inputs2], preds) model.compile(loss = 'binary_crossentropy', optimizer = 'rmsprop', metrics = ['accuracy']) model.summary() outpath='/tmp/tensorflow_logs/titanic/' early_stopper = EarlyStopping(monitor='val_acc', patience=10) tensorboard = TensorBoard(outpath+'tensorboard/', histogram_freq=1) checkpointer = ModelCheckpoint(outpath+'weights_epoch_{epoch:02d}_val_acc_{val_acc:.2f}.hdf5', monitor='val_acc') # You may have to run this a couple of times if stuck on local minimum np.random.seed(2017) h = model.fit([X1, X2], Y.values, batch_size = 32, epochs = 40, verbose = 1, validation_split=0.2, callbacks=[early_stopper, tensorboard, checkpointer]) import os sorted(os.listdir(outpath)) ``` Now check the tensorboard. - If using provided aws instance, just browse to: `http://<your-ip>:6006` - If using local, open a terminal, activate the environment and run: ``` tensorboard --logdir=/tmp/tensorflow_logs/titanic/tensorboard/ ``` then open a browser at `localhost:6006` You should see something like this: ![tensorboard.png](../assets/tensorboard.png) ## Exercise 1 - try modifying the parameters of the 3 callbacks provided. What are they for? What do they do? *Copyright &copy; 2017 CATALIT LLC. All rights reserved.*
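As a starting point for Exercise 1, one possible variation of the three callbacks is sketched below. The specific values are arbitrary examples; see the Keras callback documentation for the full argument lists.

```
# One possible variation for Exercise 1 (values are arbitrary examples).
early_stopper = EarlyStopping(monitor='val_loss',   # watch the loss instead of accuracy
                              patience=5)           # stop sooner when it stalls
tensorboard = TensorBoard(outpath + 'tensorboard/',
                          histogram_freq=2)         # log weight histograms less often
checkpointer = ModelCheckpoint(outpath + 'weights_best.hdf5',
                               monitor='val_acc',
                               save_best_only=True) # keep only the best-scoring weights
```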
<table class="ee-notebook-buttons" align="left"> <td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/Image/extract_value_to_points.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td> <td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/Image/extract_value_to_points.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td> <td><a target="_blank" href="https://mybinder.org/v2/gh/giswqs/earthengine-py-notebooks/master?filepath=Image/extract_value_to_points.ipynb"><img width=58px src="https://mybinder.org/static/images/logo_social.png" />Run in binder</a></td> <td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/Image/extract_value_to_points.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td> </table> ## Install Earth Engine API and geemap Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`. The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet. **Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving). ``` # Installs geemap package import subprocess try: import geemap except ImportError: print('geemap package not installed. Installing ...') subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap']) # Checks whether this notebook is running on Google Colab try: import google.colab import geemap.eefolium as emap except: import geemap as emap # Authenticates and initializes Earth Engine import ee try: ee.Initialize() except Exception as e: ee.Authenticate() ee.Initialize() ``` ## Create an interactive map The default basemap is `Google Satellite`. 
[Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py#L13) can be added using the `Map.add_basemap()` function. ``` Map = emap.Map(center=[40,-100], zoom=4) Map.add_basemap('ROADMAP') # Add Google Map Map ``` ## Add Earth Engine Python script ``` # Add Earth Engine dataset # Input imagery is a cloud-free Landsat 8 composite. l8 = ee.ImageCollection('LANDSAT/LC08/C01/T1') image = ee.Algorithms.Landsat.simpleComposite(**{ 'collection': l8.filterDate('2018-01-01', '2018-12-31'), 'asFloat': True }) # Use these bands for prediction. bands = ['B2', 'B3', 'B4', 'B5', 'B6', 'B7', 'B10', 'B11'] # Load training points. The numeric property 'class' stores known labels. points = ee.FeatureCollection('GOOGLE/EE/DEMOS/demo_landcover_labels') # This property of the table stores the land cover labels. label = 'landcover' # Overlay the points on the imagery to get training. training = image.select(bands).sampleRegions(**{ 'collection': points, 'properties': [label], 'scale': 30 }) # Define visualization parameters in an object literal. vizParams = {'bands': ['B5', 'B4', 'B3'], 'min': 0, 'max': 1, 'gamma': 1.3} Map.centerObject(points, 10) Map.addLayer(image, vizParams, 'Image') Map.addLayer(points, {'color': "yellow"}, 'Training points') first = training.first() print(first.getInfo()) ``` ## Display Earth Engine data layers ``` Map.addLayerControl() # This line is not needed for ipyleaflet-based Map. Map ```
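If you want the extracted values outside of Earth Engine, the sampled FeatureCollection can be exported with the batch export API. A minimal sketch; the `description` and `folder` names here are placeholders:

```
# Minimal sketch: export the sampled band values to a CSV in Google Drive.
# 'description' and 'folder' are placeholder names.
task = ee.batch.Export.table.toDrive(
    collection=training,
    description='extracted_point_values',
    folder='earthengine_exports',
    fileFormat='CSV')
task.start()
```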
# AutoGluon Tabular with SageMaker [AutoGluon](https://github.com/awslabs/autogluon) automates machine learning tasks enabling you to easily achieve strong predictive performance in your applications. With just a few lines of code, you can train and deploy high-accuracy deep learning models on tabular, image, and text data. This notebook shows how to use AutoGluon-Tabular with Amazon SageMaker by creating custom containers. ## Prerequisites If using a SageMaker hosted notebook, select kernel `conda_mxnet_p36`. ``` # Make sure docker compose is set up properly for local mode !./setup.sh # Imports import os import boto3 import sagemaker from time import sleep from collections import Counter import numpy as np import pandas as pd from sagemaker import get_execution_role, local, Model, utils, fw_utils, s3 from sagemaker.estimator import Estimator from sagemaker.predictor import RealTimePredictor, csv_serializer, StringDeserializer from sklearn.metrics import accuracy_score, classification_report from IPython.core.display import display, HTML from IPython.core.interactiveshell import InteractiveShell # Print settings InteractiveShell.ast_node_interactivity = "all" pd.set_option('display.max_columns', 500) pd.set_option('display.max_rows', 10) # Account/s3 setup session = sagemaker.Session() local_session = local.LocalSession() bucket = session.default_bucket() prefix = 'sagemaker/autogluon-tabular' region = session.boto_region_name role = get_execution_role() client = session.boto_session.client( "sts", region_name=region, endpoint_url=utils.sts_regional_endpoint(region) ) account = client.get_caller_identity()['Account'] ecr_uri_prefix = utils.get_ecr_image_uri_prefix(account, region) registry_id = fw_utils._registry_id(region, 'mxnet', 'py3', account, '1.6.0') registry_uri = utils.get_ecr_image_uri_prefix(registry_id, region) ``` ### Build docker images First, build autogluon package to copy into docker image. ``` if not os.path.exists('package'): !pip install PrettyTable -t package !pip install --upgrade boto3 -t package !pip install bokeh -t package !pip install --upgrade matplotlib -t package !pip install autogluon -t package ``` Now build the training/inference image and push to ECR ``` training_algorithm_name = 'autogluon-sagemaker-training' inference_algorithm_name = 'autogluon-sagemaker-inference' !./container-training/build_push_training.sh {account} {region} {training_algorithm_name} {ecr_uri_prefix} {registry_id} {registry_uri} !./container-inference/build_push_inference.sh {account} {region} {inference_algorithm_name} {ecr_uri_prefix} {registry_id} {registry_uri} ``` ### Get the data In this example we'll use the direct-marketing dataset to build a binary classification model that predicts whether customers will accept or decline a marketing offer. First we'll download the data and split it into train and test sets. AutoGluon does not require a separate validation set (it uses bagged k-fold cross-validation). ``` # Download and unzip the data !aws s3 cp --region {region} s3://sagemaker-sample-data-{region}/autopilot/direct_marketing/bank-additional.zip . 
!unzip -qq -o bank-additional.zip !rm bank-additional.zip local_data_path = './bank-additional/bank-additional-full.csv' data = pd.read_csv(local_data_path) # Split train/test data train = data.sample(frac=0.7, random_state=42) test = data.drop(train.index) # Split test X/y label = 'y' y_test = test[label] X_test = test.drop(columns=[label]) ``` ##### Check the data ``` train.head(3) train.shape test.head(3) test.shape X_test.head(3) X_test.shape ``` Upload the data to s3 ``` train_file = 'train.csv' train.to_csv(train_file,index=False) train_s3_path = session.upload_data(train_file, key_prefix='{}/data'.format(prefix)) test_file = 'test.csv' test.to_csv(test_file,index=False) test_s3_path = session.upload_data(test_file, key_prefix='{}/data'.format(prefix)) X_test_file = 'X_test.csv' X_test.to_csv(X_test_file,index=False) X_test_s3_path = session.upload_data(X_test_file, key_prefix='{}/data'.format(prefix)) ``` ## Hyperparameter Selection The minimum required settings for training is just a target label, `fit_args['label']`. Additional optional hyperparameters can be passed to the `autogluon.task.TabularPrediction.fit` function via `fit_args`. Below shows a more in depth example of AutoGluon-Tabular hyperparameters from the example [Predicting Columns in a Table - In Depth](https://autogluon.mxnet.io/tutorials/tabular_prediction/tabular-indepth.html#model-ensembling-with-stacking-bagging). Please see [fit parameters](https://autogluon.mxnet.io/api/autogluon.task.html?highlight=eval_metric#autogluon.task.TabularPrediction.fit) for further information. Note that in order for hyperparameter ranges to work in SageMaker, values passed to the `fit_args['hyperparameters']` must be represented as strings. ```python nn_options = { 'num_epochs': "10", 'learning_rate': "ag.space.Real(1e-4, 1e-2, default=5e-4, log=True)", 'activation': "ag.space.Categorical('relu', 'softrelu', 'tanh')", 'layers': "ag.space.Categorical([100],[1000],[200,100],[300,200,100])", 'dropout_prob': "ag.space.Real(0.0, 0.5, default=0.1)" } gbm_options = { 'num_boost_round': "100", 'num_leaves': "ag.space.Int(lower=26, upper=66, default=36)" } model_hps = {'NN': nn_options, 'GBM': gbm_options} fit_args = { 'label': 'y', 'presets': ['best_quality', 'optimize_for_deployment'], 'time_limits': 60*10, 'hyperparameters': model_hps, 'hyperparameter_tune': True, 'search_strategy': 'skopt' } hyperparameters = { 'fit_args': fit_args, 'feature_importance': True } ``` **Note:** Your hyperparameter choices may affect the size of the model package, which could result in additional time taken to upload your model and complete training. Including `'optimize_for_deployment'` in the list of `fit_args['presets']` is recommended to greatly reduce upload times. <br> ``` # Define required label and optional additional parameters fit_args = { 'label': 'y', # Adding 'best_quality' to presets list will result in better performance (but longer runtime) 'presets': ['optimize_for_deployment'], } # Pass fit_args to SageMaker estimator hyperparameters hyperparameters = { 'fit_args': fit_args, 'feature_importance': True } ``` ## Train For local training set `train_instance_type` to `local` . For non-local training the recommended instance type is `ml.m5.2xlarge`. **Note:** Depending on how many underlying models are trained, `train_volume_size` may need to be increased so that they all fit on disk. 
``` %%time instance_type = 'ml.m5.2xlarge' #instance_type = 'local' ecr_image = f'{ecr_uri_prefix}/{training_algorithm_name}:latest' estimator = Estimator(image_name=ecr_image, role=role, train_instance_count=1, train_instance_type=instance_type, hyperparameters=hyperparameters, train_volume_size=100) # Set inputs. Test data is optional, but requires a label column. inputs = {'training': train_s3_path, 'testing': test_s3_path} estimator.fit(inputs) ``` ### Create Model ``` # Create predictor object class AutoGluonTabularPredictor(RealTimePredictor): def __init__(self, *args, **kwargs): super().__init__(*args, content_type='text/csv', serializer=csv_serializer, deserializer=StringDeserializer(), **kwargs) ecr_image = f'{ecr_uri_prefix}/{inference_algorithm_name}:latest' if instance_type == 'local': model = estimator.create_model(image=ecr_image, role=role) else: model_uri = os.path.join(estimator.output_path, estimator._current_job_name, "output", "model.tar.gz") model = Model(model_uri, ecr_image, role=role, sagemaker_session=session, predictor_cls=AutoGluonTabularPredictor) ``` ### Batch Transform For local mode, either `s3://<bucket>/<prefix>/output/` or `file:///<absolute_local_path>` can be used as outputs. By including the label column in the test data, you can also evaluate prediction performance (In this case, passing `test_s3_path` instead of `X_test_s3_path`). ``` output_path = f's3://{bucket}/{prefix}/output/' # output_path = f'file://{os.getcwd()}' transformer = model.transformer(instance_count=1, instance_type=instance_type, strategy='MultiRecord', max_payload=6, max_concurrent_transforms=1, output_path=output_path) transformer.transform(test_s3_path, content_type='text/csv', split_type='Line') transformer.wait() ``` ### Endpoint ##### Deploy remote or local endpoint ``` instance_type = 'ml.m5.2xlarge' #instance_type = 'local' predictor = model.deploy(initial_instance_count=1, instance_type=instance_type) ``` ##### Attach to endpoint (or reattach if kernel was restarted) ``` # Select standard or local session based on instance_type if instance_type == 'local': sess = local_session else: sess = session # Attach to endpoint predictor = AutoGluonTabularPredictor(predictor.endpoint, sagemaker_session=sess) ``` ##### Predict on unlabeled test data ``` results = predictor.predict(X_test.to_csv(index=False)).splitlines() # Check output print(Counter(results)) ``` ##### Predict on data that includes label column Prediction performance metrics will be printed to endpoint logs. ``` results = predictor.predict(test.to_csv(index=False)).splitlines() # Check output print(Counter(results)) ``` ##### Check that classification performance metrics match evaluation printed to endpoint logs as expected ``` y_results = np.array(results) print("accuracy: {}".format(accuracy_score(y_true=y_test, y_pred=y_results))) print(classification_report(y_true=y_test, y_pred=y_results, digits=6)) ``` ##### Clean up endpoint ``` predictor.delete_endpoint() ```
# Neural Networks In the previous part of this exercise, you implemented multi-class logistic re gression to recognize handwritten digits. However, logistic regression cannot form more complex hypotheses as it is only a linear classifier.<br><br> In this part of the exercise, you will implement a neural network to recognize handwritten digits using the same training set as before. The <strong>neural network</strong> will be able to represent complex models that form <strong>non-linear hypotheses</strong>. For this week, you will be using parameters from <strong>a neural network that we have already trained</strong>. Your goal is to implement the <strong>feedforward propagation algorithm to use our weights for prediction</strong>. In next week’s exercise, you will write the backpropagation algorithm for learning the neural network parameters.<br><br> The file <strong><em>ex3data1</em></strong> contains a training set.<br> The structure of the dataset described blow:<br> 1. X array = <strong>400 columns describe the values of pixels of 20*20 images in flatten format for 5000 samples</strong> 2. y array = <strong>Value of image (number between 0-9)</strong> <br><br> <strong> Our assignment has these sections: 1. Visualizing the Data 1. Converting .mat to .csv 2. Loading Dataset and Trained Neural Network Weights 3. Ploting Data 2. Model Representation 3. Feedforward Propagation and Prediction </strong> In each section full description provided. ## 1. Visualizing the Dataset Before starting on any task, it is often useful to understand the data by visualizing it.<br> ### 1.A Converting .mat to .csv In this specific assignment, the instructor added a .mat file as training set and weights of trained neural network. But we have to convert it to .csv to use in python.<br> After all we now ready to import our new csv files to pandas dataframes and do preprocessing on it and make it ready for next steps. ``` # import libraries import scipy.io import numpy as np data = scipy.io.loadmat("ex3data1") weights = scipy.io.loadmat('ex3weights') ``` Now we extract X and y variables from the .mat file and save them into .csv file for further usage. After running the below code <strong>you should see X.csv and y.csv files</strong> in your directory. ``` for i in data: if '__' not in i and 'readme' not in i: np.savetxt((i+".csv"),data[i],delimiter=',') for i in weights: if '__' not in i and 'readme' not in i: np.savetxt((i+".csv"),weights[i],delimiter=',') ``` ### 1.B Loading Dataset and Trained Neural Network Weights First we import .csv files into pandas dataframes then save them into numpy arrays.<br><br> There are <strong>5000 training examples</strong> in ex3data1.mat, where each training example is a <strong>20 pixel by 20 pixel <em>grayscale</em> image of the digit</strong>. Each pixel is represented by a floating point number indicating the <strong>grayscale intensity</strong> at that location. The 20 by 20 grid of pixels is <strong>"flatten" into a 400-dimensional vector</strong>. <strong>Each of these training examples becomes a single row in our data matrix X</strong>. This gives us a 5000 by 400 matrix X where every row is a training example for a handwritten digit image.<br><br> The second part of the training set is a <strong>5000-dimensional vector y that contains labels</strong> for the training set.<br><br> <strong>Notice: In dataset, the digit zero mapped to the value ten. 
Therefore, a "0" digit is labeled as "10", while the digits "1" to "9" are labeled as "1" to "9" in their natural order.<br></strong> But this make thing harder so we bring it back to natural order for 0! ``` # import library import pandas as pd # saving .csv files to pandas dataframes x_df = pd.read_csv('X.csv',names= np.arange(0,400)) y_df = pd.read_csv('y.csv',names=['label']) # saving .csv files to pandas dataframes Theta1_df = pd.read_csv('Theta1.csv',names = np.arange(0,401)) Theta2_df = pd.read_csv('Theta2.csv',names = np.arange(0,26)) # saving x_df and y_df into numpy arrays x = x_df.iloc[:,:].values y = y_df.iloc[:,:].values m, n = x.shape # bring back 0 to 0 !!! y = y.reshape(m,) y[y==10] = 0 y = y.reshape(m,1) print('#{} Number of training samples, #{} features per sample'.format(m,n)) # saving Theta1_df and Theta2_df into numpy arrays theta1 = Theta1_df.iloc[:,:].values theta2 = Theta2_df.iloc[:,:].values ``` ### 1.C Plotting Data You will begin by visualizing a subset of the training set. In first part, the code <strong>randomly selects selects 100 rows from X</strong> and passes those rows to the <strong>display_data</strong> function. This function maps each row to a 20 pixel by 20 pixel grayscale image and displays the images together.<br> After plotting, you should see an image like this:<img src='img/plot.jpg'> ``` import numpy as np import matplotlib.pyplot as plt import random amount = 100 lines = 10 columns = 10 image = np.zeros((amount, 20, 20)) number = np.zeros(amount) for i in range(amount): rnd = random.randint(0,4999) image[i] = x[rnd].reshape(20, 20) y_temp = y.reshape(m,) number[i] = y_temp[rnd] fig = plt.figure(figsize=(8,8)) for i in range(amount): ax = fig.add_subplot(lines, columns, 1 + i) # Turn off tick labels ax.set_yticklabels([]) ax.set_xticklabels([]) plt.imshow(image[i], cmap='binary') plt.show() print(number) ``` # 2. Model Representation Our neural network is shown in below figure. It has <strong>3 layers an input layer, a hidden layer and an output layer</strong>. Recall that our <strong>inputs are pixel</strong> values of digit images. Since the images are of <strong>size 20×20</strong>, this gives us <strong>400 input layer units</strong> (excluding the extra bias unit which always outputs +1).<br><br><img src='img/nn.jpg'><br> You have been provided with a set of <strong>network parameters (Θ<sup>(1)</sup>; Θ<sup>(2)</sup>)</strong> already trained by instructor.<br><br> <strong>Theta1 and Theta2 The parameters have dimensions that are sized for a neural network with 25 units in the second layer and 10 output units (corresponding to the 10 digit classes).</strong> ``` print('theta1 shape = {}, theta2 shape = {}'.format(theta1.shape,theta2.shape)) ``` It seems our weights are transposed, so we transpose them to have them in a way our neural network is. ``` theta1 = theta1.transpose() theta2 = theta2.transpose() print('theta1 shape = {}, theta2 shape = {}'.format(theta1.shape,theta2.shape)) ``` # 3. Feedforward Propagation and Prediction Now you will implement feedforward propagation for the neural network.<br> You should implement the <strong>feedforward computation</strong> that computes <strong>h<sub>θ</sub>(x<sup>(i)</sup>)</strong> for every example i and returns the associated predictions. Similar to the one-vs-all classification strategy, the prediction from the neural network will be the <strong>label</strong> that has the <strong>largest output <strong>h<sub>θ</sub>(x)<sub>k</sub></strong></strong>. 
<strong>Implementation Note:</strong> The matrix X contains the examples in rows. When you complete the code, <strong>you will need to add the column of 1’s</strong> to the matrix. The matrices <strong>Theta1 and Theta2 contain the parameters for each unit in rows.</strong> Specifically, the first row of Theta1 corresponds to the first hidden unit in the second layer. <br> You must get <strong>a<sup>(l)</sup></strong> as a column vector.<br><br> You should see that the <strong>accuracy is about 97.5%</strong>. ``` # adding column of 1's to x x = np.append(np.ones(shape=(m,1)),x,axis = 1) ``` <strong>h = hypothesis(x,theta)</strong> will compute <strong>sigmoid</strong> function on <strong>θ<sup>T</sup>X</strong> and return a number which <strong>0<=h<=1</strong>.<br> You can use <a href='https://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.special.expit.html'>this</a> library for calculating sigmoid. ``` def sigmoid(z): return 1/(1+np.exp(-z)) def lr_hypothesis(x,theta): return np.dot(x,theta) ``` <strong>predict(theta1, theta2, x):</strong> outputs the predicted label of x given the trained weights of a neural network (theta1, theta2). ``` layers = 3 num_labels = 10 ``` <strong>Becuase the initial dataset has changed and mapped 0 to "10", so the weights also are changed. So we just rotate columns one step to right, to predict correct values.<br> Recall we have changed mapping 0 to "10" to 0 to "0" but we cannot detect this mapping in weights of neural netwrok. So we have to this rotation on final output of probabilities.</strong> ``` def rotate_column(array): array_ = np.zeros(shape=(m,num_labels)) temp = np.zeros(num_labels,) temp= array[:,9] array_[:,1:10] = array[:,0:9] array_[:,0] = temp return array_ def predict(theta1,theta2,x): z2 = np.dot(x,theta1) # hidden layer a2 = sigmoid(z2) # hidden layer # adding column of 1's to a2 a2 = np.append(np.ones(shape=(m,1)),a2,axis = 1) z3 = np.dot(a2,theta2) a3 = sigmoid(z3) # mapping problem. Rotate left one step y_prob = rotate_column(a3) # prediction on activation a2 y_pred = np.argmax(y_prob, axis=1).reshape(-1,1) return y_pred y_pred = predict(theta1,theta2,x) y_pred.shape ``` Now we will compare our predicted result to the true one with <a href='http://scikit-learn.org/stable/modules/generated/sklearn.metrics.confusion_matrix.html'>confusion_matrix</a> of numpy library. ``` from sklearn.metrics import confusion_matrix # Function for accuracy def acc(confusion_matrix): t = 0 for i in range(num_labels): t += confusion_matrix[i][i] f = m-t ac = t/(m) return (t,f,ac) #import library from sklearn.metrics import confusion_matrix cm_train = confusion_matrix(y.reshape(m,),y_pred.reshape(m,)) t,f,ac = acc(cm_train) print('With #{} correct, #{} wrong ==========> accuracy = {}%' .format(t,f,ac*100)) cm_train ```
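As a quick cross-check, the same overall accuracy can be obtained directly with scikit-learn, without going through the confusion matrix:

```
# Equivalent accuracy check using scikit-learn directly.
from sklearn.metrics import accuracy_score

print('accuracy = {:.2f}%'.format(100 * accuracy_score(y.reshape(m,), y_pred.reshape(m,))))
```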
``` # This cell is added by sphinx-gallery !pip install mrsimulator --quiet %matplotlib inline import mrsimulator print(f'You are using mrsimulator v{mrsimulator.__version__}') ``` # ²⁹Si 1D MAS spinning sideband (CSA) After acquiring an NMR spectrum, we often require a least-squares analysis to determine site populations and nuclear spin interaction parameters. Generally, this comprises of two steps: - create a fitting model, and - determine the model parameters that give the best fit to the spectrum. Here, we will use the mrsimulator objects to create a fitting model, and use the `LMFIT <https://lmfit.github.io/lmfit-py/>`_ library for performing the least-squares fitting optimization. In this example, we use a synthetic $^{29}\text{Si}$ NMR spectrum of cuspidine, generated from the tensor parameters reported by Hansen `et al.` [#f1]_, to demonstrate a simple fitting procedure. We will begin by importing relevant modules and establishing figure size. ``` import csdmpy as cp import matplotlib.pyplot as plt from lmfit import Minimizer, Parameters from mrsimulator import Simulator, SpinSystem, Site from mrsimulator.methods import BlochDecaySpectrum from mrsimulator import signal_processing as sp from mrsimulator.utils import spectral_fitting as sf ``` ## Import the dataset Use the `csdmpy <https://csdmpy.readthedocs.io/en/stable/index.html>`_ module to load the synthetic dataset as a CSDM object. ``` file_ = "https://sandbox.zenodo.org/record/835664/files/synthetic_cuspidine_test.csdf?" synthetic_experiment = cp.load(file_).real # standard deviation of noise from the dataset sigma = 0.03383338 # convert the dimension coordinates from Hz to ppm synthetic_experiment.x[0].to("ppm", "nmr_frequency_ratio") # Plot of the synthetic dataset. plt.figure(figsize=(4.25, 3.0)) ax = plt.subplot(projection="csdm") ax.plot(synthetic_experiment, "k", alpha=0.5) ax.set_xlim(50, -200) plt.grid() plt.tight_layout() plt.show() ``` ## Create a fitting model Before you can fit a simulation to an experiment, in this case, the synthetic dataset, you will first need to create a fitting model. We will use the ``mrsimulator`` objects as tools in creating a model for the least-squares fitting. **Step 1:** Create initial guess sites and spin systems. The initial guess is often based on some prior knowledge about the system under investigation. For the current example, we know that Cuspidine is a crystalline silica polymorph with one crystallographic Si site. Therefore, our initial guess model is a single $^{29}\text{Si}$ site spin system. For non-linear fitting algorithms, as a general recommendation, the initial guess model parameters should be a good starting point for the algorithms to converge. ``` # the guess model comprising of a single site spin system site = Site( isotope="29Si", isotropic_chemical_shift=-82.0, # in ppm, shielding_symmetric={"zeta": -63, "eta": 0.4}, # zeta in ppm ) spin_system = SpinSystem( name="Si Site", description="A 29Si site in cuspidine", sites=[site], # from the above code abundance=100, ) ``` **Step 2:** Create the method object. The method should be the same as the one used in the measurement. In this example, we use the `BlochDecaySpectrum` method. Note, when creating the method object, the value of the method parameters must match the respective values used in the experiment. 
``` MAS = BlochDecaySpectrum( channels=["29Si"], magnetic_flux_density=7.1, # in T rotor_frequency=780, # in Hz spectral_dimensions=[ { "count": 2048, "spectral_width": 25000, # in Hz "reference_offset": -5000, # in Hz } ], experiment=synthetic_experiment, # add the measurement to the method. ) ``` **Step 3:** Create the Simulator object, add the method and spin system objects, and run the simulation. ``` sim = Simulator(spin_systems=[spin_system], methods=[MAS]) sim.run() ``` **Step 4:** Create a SignalProcessor class and apply post simulation processing. ``` processor = sp.SignalProcessor( operations=[ sp.IFFT(), # inverse FFT to convert frequency based spectrum to time domain. sp.apodization.Exponential(FWHM="200 Hz"), # apodization of time domain signal. sp.FFT(), # forward FFT to convert time domain signal to frequency spectrum. sp.Scale(factor=3), # scale the frequency spectrum. ] ) processed_data = processor.apply_operations(data=sim.methods[0].simulation).real ``` **Step 5:** The plot the spectrum. We also plot the synthetic dataset for comparison. ``` plt.figure(figsize=(4.25, 3.0)) ax = plt.subplot(projection="csdm") ax.plot(synthetic_experiment, "k", linewidth=1, label="Experiment") ax.plot(processed_data, "r", alpha=0.75, linewidth=1, label="guess spectrum") ax.set_xlim(50, -200) plt.legend() plt.grid() plt.tight_layout() plt.show() ``` ## Setup a Least-squares minimization Now that our model is ready, the next step is to set up a least-squares minimization. You may use any optimization package of choice, here we show an application using LMFIT. You may read more on the LMFIT `documentation page <https://lmfit.github.io/lmfit-py/index.html>`_. ### Create fitting parameters Next, you will need a list of parameters that will be used in the fit. The *LMFIT* library provides a `Parameters <https://lmfit.github.io/lmfit-py/parameters.html>`_ class to create a list of parameters. ``` site1 = spin_system.sites[0] params = Parameters() params.add(name="iso", value=site1.isotropic_chemical_shift) params.add(name="eta", value=site1.shielding_symmetric.eta, min=0, max=1) params.add(name="zeta", value=site1.shielding_symmetric.zeta) params.add(name="FWHM", value=processor.operations[1].FWHM) params.add(name="factor", value=processor.operations[3].factor) ``` ### Create a minimization function Note, the above set of parameters does not know about the model. You will need to set up a function that will - update the parameters of the `Simulator` and `SignalProcessor` object based on the LMFIT parameter updates, - re-simulate the spectrum based on the updated values, and - return the difference between the experiment and simulation. ``` def minimization_function(params, sim, processor, sigma=1): values = params.valuesdict() # the experiment data as a Numpy array intensity = sim.methods[0].experiment.y[0].components[0].real # Here, we update simulation parameters iso, eta, and zeta for the site object site = sim.spin_systems[0].sites[0] site.isotropic_chemical_shift = values["iso"] site.shielding_symmetric.eta = values["eta"] site.shielding_symmetric.zeta = values["zeta"] # run the simulation sim.run() # update the SignalProcessor parameter and apply line broadening. # update the scaling factor parameter at index 3 of operations list. processor.operations[3].factor = values["factor"] # update the exponential apodization FWHM parameter at index 1 of operations list. 
processor.operations[1].FWHM = values["FWHM"] # apply signal processing processed_data = processor.apply_operations(sim.methods[0].simulation) # return the difference vector. diff = intensity - processed_data.y[0].components[0].real return diff / sigma ``` <div class="alert alert-info"><h4>Note</h4><p>To automate the fitting process, we provide a function to parse the ``Simulator`` and ``SignalProcessor`` objects for parameters and construct an *LMFIT* ``Parameters`` object. Similarly, a minimization function, analogous to the above `minimization_function`, is also included in the *mrsimulator* library. See the next example for usage instructions.</p></div> ### Perform the least-squares minimization With the synthetic dataset, simulation, and the initial guess parameters, we are ready to perform the fit. To fit, we use the *LMFIT* `Minimizer <https://lmfit.github.io/lmfit-py/fitting.html>`_ class. ``` minner = Minimizer(minimization_function, params, fcn_args=(sim, processor, sigma)) result = minner.minimize() result ``` The plot of the fit, measurement and the residuals is shown below. ``` best_fit = sf.bestfit(sim, processor)[0] residuals = sf.residuals(sim, processor)[0] plt.figure(figsize=(4.25, 3.0)) ax = plt.subplot(projection="csdm") ax.plot(synthetic_experiment, "k", linewidth=1, label="Experiment") ax.plot(best_fit, "r", alpha=0.75, linewidth=1, label="Best Fit") ax.plot(residuals, alpha=0.75, linewidth=1, label="Residuals") ax.set_xlabel("Frequency / Hz") ax.set_xlim(50, -200) plt.legend() plt.grid() plt.tight_layout() plt.show() ``` .. [#f1] Hansen, M. R., Jakobsen, H. J., Skibsted, J., $^{29}\text{Si}$ Chemical Shift Anisotropies in Calcium Silicates from High-Field $^{29}\text{Si}$ MAS NMR Spectroscopy, Inorg. Chem. 2003, **42**, *7*, 2368-2377. `DOI: 10.1021/ic020647f <https://doi.org/10.1021/ic020647f>`_
``` import torch import numpy as np import pandas as pd from sklearn.cluster import KMeans from statsmodels.discrete.discrete_model import Probit import patsy import matplotlib.pylab as plt import tqdm import itertools ax = np.newaxis ``` Make sure you have installed the pygfe package. You can simply call `pip install pygrpfe` in the terminal or call the magic command `!pip install pygrpfe` from within the notebook. If you are using the binder link, then `pygrpfe` is already installed. You can import the package directly. ``` import pygrpfe as gfe ``` # A simple model of wage and participation \begin{align*} Y^*_{it} & = \alpha_i + \epsilon_{it} \\ D_{it} &= 1\big[ u(\alpha_i) \geq c(D_{it-1}) + V_{it} \big] \\ Y_{it} &= D_{it} Y^*_{it} \\ \end{align*} where we use $$u(\alpha) = \frac{e^{(1-\gamma) \alpha } -1}{1-\gamma}$$ and use as initial conditions $D_{i1} = 1\big[ u(\alpha_i) \geq c(1) + V_{i1} \big]$. ``` def dgp_simulate(ni,nt,gamma=2.0,eps_sd=1.0): """ simulates according to the model """ alpha = np.random.normal(size=(ni)) eps = np.random.normal(size=(ni,nt)) v = np.random.normal(size=(ni,nt)) # non-censored outcome W = alpha[:,ax] + eps*eps_sd # utility U = (np.exp( alpha * (1-gamma)) - 1)/(1-gamma) U = U - U.mean() # costs C1 = -1; C0=0; # binary decision Y = np.ones((ni,nt)) Y[:,0] = U.squeeze() > C1 + v[:,0] for t in range(1,nt): Y[:,t] = U > C1*Y[:,t-1] + C0*(1-Y[:,t-1]) + v[:,t] W = W * Y return(W,Y) ``` # Estimating the model We show the steps to estimating the model. Later on, we will run a Monte-Carlo Simulation. We simulate from the DGP we have defined. ``` ni = 1000 nt = 50 Y,D = dgp_simulate(ni,nt,2.0) ``` ## Step 1: grouping observations We group individuals based on their outcomes. We consider as moments the average value of $Y$ and the average value of $D$. We give our gfe function the $t$ sepcific values so that it can compute the within individual variation. This is a measure used to pick the nubmer of groups. The `group` function chooses the number of groups based on the rule described in the paper. ``` # we create the moments # this has dimension ni x nt x nm M_itm = np.stack([Y,D],axis=2) # we use our sugar function to get the groups G_i,_ = gfe.group(M_itm) print("Number of groups = {:d}".format(G_i.max())) ``` We can plot the grouping: ``` dd = pd.DataFrame({'Y':Y.mean(1),'G':G_i,'D':D.mean(1)}) plt.scatter(dd.Y,dd.D,c=dd.G*1.0) plt.show() ``` ## Step 2: Estimate the likelihood model with group specific parameters In the model we proposed, this second step is a probit. We can then directly use python probit routine with group dummies. ``` ni,nt = D.shape # next we minimize using groups as FE dd = pd.DataFrame({ 'd': D[:,range(1,nt)].flatten(), 'dl':D[:,range(nt-1)].flatten(), 'gi':np.broadcast_to(G_i[:,ax], (ni,nt-1)).flatten()}) yv,Xv = patsy.dmatrices("d ~ 0 + dl + C(gi)", dd, return_type='matrix') mod = Probit(dd['d'], Xv) res = mod.fit(maxiter=2000,method='bfgs') print("Estimated cost parameters = {:.3f}".format(res.params[-1])) ``` ## Step 2 (alternative implementation): Pytorch and auto-diff We next write down a likelihood that we want to optimize. Instead of using the Python routine for the Probit, we make use of automatic differentiation from PyTorch. This makes it easy to modify the estimating model to accomodate for less standard likelihoods! We create a class which initializes the parameters in the `__init__` method and computes the loss in the `loss` method. We will see later how we can use this to define a fixed effect estimator. 
``` class GrpProbit: # initialize parameters and data def __init__(self,D,G_i): # define parameters and tell PyTorch to keep track of gradients self.alpha = torch.tensor( np.ones(G_i.max()+1), requires_grad=True) self.cost = torch.tensor( np.random.normal(1), requires_grad=True) self.params = [self.alpha,self.cost] # predefine some components ni,nt = D.shape self.ni = ni self.G_i = G_i self.Dlag = torch.tensor(D[:,range(0,nt-1)]) self.Dout = torch.tensor(D[:,range(1,nt)]) self.N = torch.distributions.normal.Normal(0,1) # define our loss function def loss(self): Id = self.alpha[self.G_i].reshape(self.ni,1) + self.cost * self.Dlag lik_it = self.Dout * torch.log( torch.clamp( self.N.cdf( Id ), min=1e-7)) + \ (1-self.Dout)*torch.log( torch.clamp( self.N.cdf( -Id ), min=1e-7) ) return(- lik_it.mean()) # initialize the model with groups and estimate it model = GrpProbit(D,G_i) gfe.train(model) print("Estimated cost parameters = {:.3f}".format(model.params[1])) ``` ## Use PyTorch to estimate Fixed Effect version Since Pytorch makes use of efficient automatic differentiation, we can use it with many variables. This allows us to give each individual their own group, effectivily estimating a fixed-effect model. ``` model_fe = GrpProbit(D,np.arange(ni)) gfe.train(model_fe) print("Estimated cost parameters FE = {:.3f}".format(model_fe.params[1])) ``` # Monte-Carlo We finish with running a short Monte-Carlo exercise. ``` all = [] import itertools ll = list(itertools.product(range(50), [10,20,30,40])) for r, nt in tqdm.tqdm(ll): ni = 1000 gamma =2.0 Y,D = dgp_simulate(ni,nt,gamma) M_itm = np.stack([Y,D],axis=2) G_i,_ = blm2.group(M_itm,scale=True) model_fe = GrpProbit(D,np.arange(ni)) gfe.train(model_fe) model_gfe = GrpProbit(D,G_i) gfe.train(model_gfe) all.append({ 'c_fe' : model_fe.params[1].item(), 'c_gfe': model_gfe.params[1].item(), 'ni':ni, 'nt':nt, 'gamma':gamma, 'ng':G_i.max()+1}) df = pd.DataFrame(all) df2 = df.groupby(['ni','nt','gamma']).mean().reset_index() plt.plot(df2['nt'],df2['c_gfe'],label="gfe",color="orange") plt.plot(df2['nt'],df2['c_fe'],label="fe",color="red") plt.axhline(1.0,label="true",color="black",linestyle=":") plt.xlabel("T") plt.legend() plt.show() df.groupby(['ni','nt','gamma']).mean() ```
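Above, `gfe.train` is treated as a black box. As a rough illustration only (this is not the actual `pygrpfe` implementation, and the learning rate and iteration count are arbitrary assumptions), a generic PyTorch loop that minimizes `GrpProbit.loss()` could look like this:

```
def train_sketch(model, n_iter=2000, lr=0.1):
    # plain Adam on the parameter tensors created in GrpProbit.__init__
    opt = torch.optim.Adam(model.params, lr=lr)
    for _ in range(n_iter):
        opt.zero_grad()
        loss = model.loss()   # negative mean probit log-likelihood
        loss.backward()       # autodiff through alpha and cost
        opt.step()
    return model

# usage would mirror the calls above, e.g. train_sketch(GrpProbit(D, G_i))
```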
# GDP and life expectancy Richer countries can afford to invest more on healthcare, on work and road safety, and other measures that reduce mortality. On the other hand, richer countries may have less healthy lifestyles. Is there any relation between the wealth of a country and the life expectancy of its inhabitants? The following analysis checks whether there is any correlation between the total gross domestic product (GDP) of a country in 2018 and the life expectancy of people born in that country in 2018. ## Getting the data Two datasets of the World Bank are considered. One dataset, available at http://data.worldbank.org/indicator/NY.GDP.MKTP.CD, lists the GDP of the world's countries in current US dollars, for various years. The use of a common currency allows us to compare GDP values across countries. The other dataset, available at http://data.worldbank.org/indicator/SP.DYN.LE00.IN, lists the life expectancy of the world's countries. The datasets were downloaded from the World Bank as CSV files. ``` import warnings warnings.simplefilter('ignore', FutureWarning) import pandas as pd YEAR = 2018 GDP_INDICATOR = 'NY.GDP.MKTP.CD' gdpReset = pd.read_csv('WB 2018 GDP.csv') LIFE_INDICATOR = 'SP.DYN.LE00.IN_' lifeReset = pd.read_csv('WB 2018 LE.csv') lifeReset.head() ``` ## Cleaning the data Inspecting the data with `head()` and `tail()` shows that: 1. the first 34 rows are aggregated data, for the Arab World, the Caribbean small states, and other country groups used by the World Bank; 2. GDP and life expectancy values are missing for some countries. The data is therefore cleaned by: 1. removing the first 34 rows; 2. removing rows with unavailable values. ``` gdpCountries = gdpReset.dropna() lifeCountries = lifeReset.dropna() ``` ## Transforming the data The World Bank reports GDP in US dollars and cents. To make the data easier to read, the GDP is converted to millions of British pounds (the author's local currency) with the following auxiliary functions, using the average 2018 dollar-to-pound conversion rate provided by <http://www.ukforex.co.uk/forex-tools/historical-rate-tools/yearly-average-rates>. ``` def roundToMillions (value): return round(value / 1000000) def usdToGBP (usd): return usd / 1.334801 GDP = 'GDP (£m)' gdpCountries[GDP] = gdpCountries[GDP_INDICATOR].apply(usdToGBP).apply(roundToMillions) gdpCountries.head() COUNTRY = 'Country Name' headings = [COUNTRY, GDP] gdpClean = gdpCountries[headings] gdpClean.head() LIFE = 'Life expectancy (years)' lifeCountries[LIFE] = lifeCountries[LIFE_INDICATOR].apply(round) headings = [COUNTRY, LIFE] lifeClean = lifeCountries[headings] lifeClean.head() gdpVsLife = pd.merge(gdpClean, lifeClean, on=COUNTRY, how='inner') gdpVsLife.head() ``` ## Calculating the correlation To measure if the life expectancy and the GDP grow together, the Spearman rank correlation coefficient is used. It is a number from -1 (perfect inverse rank correlation: if one indicator increases, the other decreases) to 1 (perfect direct rank correlation: if one indicator increases, so does the other), with 0 meaning there is no rank correlation. A perfect correlation doesn't imply any cause-effect relation between the two indicators. A p-value below 0.05 means the correlation is statistically significant. 
``` from scipy.stats import spearmanr gdpColumn = gdpVsLife[GDP] lifeColumn = gdpVsLife[LIFE] (correlation, pValue) = spearmanr(gdpColumn, lifeColumn) print('The correlation is', correlation) if pValue < 0.05: print('It is statistically significant.') else: print('It is not statistically significant.') ``` The value shows a direct correlation, i.e. richer countries tend to have longer life expectancy. ## Showing the data Measures of correlation can be misleading, so it is best to see the overall picture with a scatterplot. The GDP axis uses a logarithmic scale to better display the vast range of GDP values, from a few million to several billion (million of million) pounds. ``` %matplotlib inline gdpVsLife.plot(x=GDP, y=LIFE, kind='scatter', grid=True, logx=True, figsize=(10, 4)) ``` The plot shows there is no clear correlation: there are rich countries with low life expectancy, poor countries with high expectancy, and countries with around 10 thousand (10⁴) million pounds GDP have almost the full range of values, from below 50 to over 80 years. Towards the lower and higher end of GDP, the variation diminishes. Above 40 thousand million pounds of GDP (3rd tick mark to the right of 10⁴), most countries have an expectancy of 70 years or more, whilst below that threshold most countries' life expectancy is below 70 years. Comparing the 10 poorest countries and the 10 countries with the lowest life expectancy shows that total GDP is a rather crude measure. The population size should be taken into account for a more precise definition of what 'poor' and 'rich' means. Furthermore, looking at the countries below, droughts and internal conflicts may also play a role in life expectancy. ``` # the 10 countries with lowest GDP gdpVsLife.sort_values(GDP).head(10) # the 10 countries with lowest life expectancy gdpVsLife.sort_values(LIFE).head(10) ``` ## Conclusions To sum up, there is no strong correlation between a country's wealth and the life expectancy of its inhabitants: there is often a wide variation of life expectancy for countries with similar GDP, countries with the lowest life expectancy are not the poorest countries, and countries with the highest expectancy are not the richest countries. Nevertheless there is some relationship, because the vast majority of countries with a life expectancy below 70 years are on the left half of the scatterplot.
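The conclusion above suggests that population size would give a more precise notion of 'poor' and 'rich'. As a hedged sketch only (no population column is loaded in this notebook; the World Bank total-population indicator SP.POP.TOTL is one possible source), a per-person figure could be derived with an auxiliary function in the same style as the ones above:

```
# Hypothetical refinement, not used in the analysis above: convert GDP in £ millions
# to £ per inhabitant, given a population value or Series aligned by country.
def gdpPerPerson(gdpMillions, population):
    return gdpMillions * 1000000 / population
```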
# American Gut Project example This notebook was created from a question we recieved from a user of MGnify. The question was: ``` I am attempting to retrieve some of the MGnify results from samples that are part of the American Gut Project based on sample location. However latitude and longitude do not appear to be searchable fields. Is it possible to query these fields myself or to work with someone to retrieve a list of samples from a specific geographic range? I am interested in samples from people in Hawaii, so 20.5 - 20.7 and -154.0 - -161.2. ``` Let's decompose the question: - project "American Gut Project" - Metadata filtration using the geographic location of a sample. - Get samples for Hawai: 20.5 - 20.7 ; -154.0 - -161.2 Each sample if MGnify it's obtained from [ENA](https://www.ebi.ac.uk/ena). ## Get samples The first step is to obtain the samples using [ENA advanced search API](https://www.ebi.ac.uk/ena/browser/advanced-search). ``` from pandas import DataFrame import requests base_url = 'https://www.ebi.ac.uk/ena/portal/api/search' # parameters params = { 'result': 'sample', 'query': ' AND '.join([ 'geo_box1(16.9175,-158.4687,21.6593,-152.7969)', 'description="*American Gut Project*"' ]), 'fields': ','.join(['secondary_sample_accession', 'lat', 'lon']), 'format': 'json', } response = requests.post(base_url, data=params) agp_samples = response.json() df = DataFrame(columns=('secondary_sample_accession', 'lat', 'lon')) df.index.name = 'accession' for s in agp_samples: df.loc[s.get('accession')] = [ s.get('secondary_sample_accession'), s.get('lat'), s.get('lon') ] df ``` Now we can use EMG API to get the information. ``` #!/bin/usr/env python import requests import sys def get_links(data): return data["links"]["related"] if __name__ == "__main__": samples_url = "https://www.ebi.ac.uk/metagenomics/api/v1/samples/" tsv = sys.argv[1] if len(sys.argv) == 2 else None if not tsv: print("The first arg is the tsv file") exit(1) tsv_fh = open(tsv, "r") # header next(tsv_fh) for record in tsv_fh: # get the runs first # mgnify references the secondary accession _, sec_acc, *_ = record.split("\t") samples_res = requests.get(samples_url + sec_acc) if samples_res.status_code == 404: print(sec_acc + " not found in MGnify") continue # then the analysis for that run runs_url = get_links(samples_res.json()["data"]["relationships"]["runs"]) if not runs_url: print("No runs for sample " + sec_acc) continue print("Getting the runs: " + runs_url) run_res = requests.get(runs_url) if run_res.status_code != 200: print(run_url + " failed", file=sys.stderr) continue # iterate over the sample runs run_data = run_res.json() # this script doesn't consider pagination, it's just an example # there could be more that one page of runs # use links -> next to get the next page for run in run_data["data"]: analyses_url = get_links(run["relationships"]["analyses"]) if not analyses_url: print("No analyses for run " + run) continue analyses_res = requests.get(analyses_url) if analyses_res.status_code != 200: print(analyses_url + " failed", file=sys.stderr) continue # dump print("Raw analyses data") print(analyses_res.json()) print("=" * 30) tsv_fh.close() ```
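The script above reads a tab-separated file whose first two columns are the accession and the secondary sample accession. A small, hedged bridge between the two steps (the file and script names below are placeholders, not names used in the original) is to dump the DataFrame built earlier:

```
# Write the ENA results to a TSV in the layout the script expects:
# index (accession) first, then secondary_sample_accession, lat, lon.
df.to_csv('agp_hawaii_samples.tsv', sep='\t')
# then, for example: python mgnify_script.py agp_hawaii_samples.tsv
```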
# LassoLars Regression with Robust Scaler This Code template is for the regression analysis using a simple LassoLars Regression. It is a lasso model implemented using the LARS algorithm and feature scaling using Robust Scaler in a Pipeline ### Required Packages ``` import warnings import numpy as np import pandas as pd import seaborn as se import matplotlib.pyplot as plt from sklearn.model_selection import train_test_split from sklearn.pipeline import make_pipeline from sklearn.preprocessing import RobustScaler from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error from sklearn.linear_model import LassoLars warnings.filterwarnings('ignore') ``` ### Initialization Filepath of CSV file ``` #filepath file_path= "" ``` List of features which are required for model training . ``` #x_values features=[] ``` Target feature for prediction. ``` #y_value target='' ``` ### Data Fetching Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools. We will use panda's library to read the CSV file using its storage path.And we use the head function to display the initial row or entry. ``` df=pd.read_csv(file_path) df.head() ``` ### Feature Selections It is the process of reducing the number of input variables when developing a predictive model. Used to reduce the number of input variables to both reduce the computational cost of modelling and, in some cases, to improve the performance of the model. We will assign all the required input features to X and target/outcome to Y. ``` X=df[features] Y=df[target] ``` ### Data Preprocessing Since the majority of the machine learning models in the Sklearn library doesn't handle string category data and Null value, we have to explicitly remove or replace null values. The below snippet have functions, which removes the null value if any exists. And convert the string classes data in the datasets by encoding them to integer classes. ``` def NullClearner(df): if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])): df.fillna(df.mean(),inplace=True) return df elif(isinstance(df, pd.Series)): df.fillna(df.mode()[0],inplace=True) return df else:return df def EncodeX(df): return pd.get_dummies(df) ``` Calling preprocessing functions on the feature and target set. ``` x=X.columns.to_list() for i in x: X[i]=NullClearner(X[i]) X=EncodeX(X) Y=NullClearner(Y) X.head() ``` #### Correlation Map In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns. ``` f,ax = plt.subplots(figsize=(18, 18)) matrix = np.triu(X.corr()) se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix) plt.show() ``` ### Data Splitting The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data. ``` x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123) ``` ### Model LassoLars is a lasso model implemented using the LARS algorithm, and unlike the implementation based on coordinate descent, this yields the exact solution, which is piecewise linear as a function of the norm of its coefficients. 
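For illustration only, the estimator can be instantiated with explicit values for the tuning parameters documented in the next subsection; the values below are arbitrary examples rather than tuned choices, and the pipeline later in this notebook keeps the defaults.

```
# Illustrative only: LassoLars with explicit (non-default) settings.
example_lars = LassoLars(alpha=0.1, fit_intercept=True, max_iter=500, positive=False)
```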
### Tuning parameters > **fit_intercept** -> whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations > **alpha** -> Constant that multiplies the penalty term. Defaults to 1.0. alpha = 0 is equivalent to an ordinary least square, solved by LinearRegression. For numerical reasons, using alpha = 0 with the LassoLars object is not advised and you should prefer the LinearRegression object. > **eps** -> The machine-precision regularization in the computation of the Cholesky diagonal factors. Increase this for very ill-conditioned systems. Unlike the tol parameter in some iterative optimization-based algorithms, this parameter does not control the tolerance of the optimization. > **max_iter** -> Maximum number of iterations to perform. > **positive** -> Restrict coefficients to be >= 0. Be aware that you might want to remove fit_intercept which is set True by default. Under the positive restriction the model coefficients will not converge to the ordinary-least-squares solution for small values of alpha. Only coefficients up to the smallest alpha value (alphas_[alphas_ > 0.].min() when fit_path=True) reached by the stepwise Lars-Lasso algorithm are typically in congruence with the solution of the coordinate descent Lasso estimator. > **precompute** -> Whether to use a precomputed Gram matrix to speed up calculations. ### Feature Scaling Robust Scaler scale features using statistics that are robust to outliers. This Scaler removes the median and scales the data according to the quantile range (defaults to IQR: Interquartile Range). The IQR is the range between the 1st quartile (25th quantile) and the 3rd quartile (75th quantile).<br> For more information... [click here](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.RobustScaler.html) ``` model=make_pipeline(RobustScaler(),LassoLars()) model.fit(x_train,y_train) ``` #### Model Accuracy We will use the trained model to make a prediction on the test set.Then use the predicted value for measuring the accuracy of our model. score: The score function returns the coefficient of determination R2 of the prediction. ``` print("Accuracy score {:.2f} %\n".format(model.score(x_test,y_test)*100)) ``` > **r2_score**: The **r2_score** function computes the percentage variablility explained by our model, either the fraction or the count of correct predictions. > **mae**: The **mean abosolute error** function calculates the amount of total error(absolute average distance between the real data and the predicted data) by our model. > **mse**: The **mean squared error** function squares the error(penalizes the model for large errors) by our model. ``` y_pred=model.predict(x_test) print("R2 Score: {:.2f} %".format(r2_score(y_test,y_pred)*100)) print("Mean Absolute Error {:.2f}".format(mean_absolute_error(y_test,y_pred))) print("Mean Squared Error {:.2f}".format(mean_squared_error(y_test,y_pred))) ``` #### Prediction Plot First, we make use of a plot to plot the actual observations, with x_train on the x-axis and y_train on the y-axis. For the regression line, we will use x_train on the x-axis and then the predictions of the x_train observations on the y-axis. 
``` plt.figure(figsize=(14,10)) plt.plot(range(20),y_test[0:20], color = "green") plt.plot(range(20),model.predict(x_test[0:20]), color = "red") plt.legend(["Actual","prediction"]) plt.title("Predicted vs True Value") plt.xlabel("Record number") plt.ylabel(target) plt.show() ``` #### Creator: Anu Rithiga , Github: [Profile](https://github.com/iamgrootsh7)
# 2章 微分積分 ## 2.1 関数 ``` # 必要ライブラリの宣言 %matplotlib inline import numpy as np import matplotlib.pyplot as plt # PDF出力用 from IPython.display import set_matplotlib_formats set_matplotlib_formats('png', 'pdf') def f(x): return x**2 +1 f(1) f(2) ``` ### 図2-2 点(x, f(x))のプロットとy=f(x)のグラフ ``` x = np.linspace(-3, 3, 601) y = f(x) x1 = np.linspace(-3, 3, 7) y1 = f(x1) plt.figure(figsize=(6,6)) plt.ylim(-2,10) plt.plot([-3,3],[0,0],c='k') plt.plot([0,0],[-2,10],c='k') plt.scatter(x1,y1,c='k',s=50) plt.grid() plt.xlabel('x',fontsize=14) plt.ylabel('y',fontsize=14) plt.show() x2 = np.linspace(-3, 3, 31) y2 = f(x2) plt.figure(figsize=(6,6)) plt.ylim(-2,10) plt.plot([-3,3],[0,0],c='k') plt.plot([0,0],[-2,10],c='k') plt.scatter(x2,y2,c='k',s=50) plt.grid() plt.xlabel('x',fontsize=14) plt.ylabel('y',fontsize=14) plt.show() plt.figure(figsize=(6,6)) plt.plot(x,y,c='k') plt.ylim(-2,10) plt.plot([-3,3],[0,0],c='k') plt.plot([0,0],[-2,10],c='k') plt.scatter([1,2],[2,5],c='k',s=50) plt.grid() plt.xlabel('x',fontsize=14) plt.ylabel('y',fontsize=14) plt.show() ``` ## 2.2 合成関数・逆関数 ### 図2.6 逆関数のグラフ ``` def f(x): return(x**2 + 1) def g(x): return(np.sqrt(x - 1)) xx1 = np.linspace(0.0, 4.0, 200) xx2 = np.linspace(1.0, 4.0, 200) yy1 = f(xx1) yy2 = g(xx2) plt.figure(figsize=(6,6)) plt.xlabel('$x$',fontsize=14) plt.ylabel('$y$',fontsize=14) plt.ylim(-2.0, 4.0) plt.xlim(-2.0, 4.0) plt.grid() plt.plot(xx1,yy1, linestyle='-', c='k', label='$y=x^2+1$') plt.plot(xx2,yy2, linestyle='-.', c='k', label='$y=\sqrt{x-1}$') plt.plot([-2,4],[-2,4], color='black') plt.plot([-2,4],[0,0], color='black') plt.plot([0,0],[-2,4],color='black') plt.legend(fontsize=14) plt.show() ``` ## 2.3 微分と極限 ### 図2-7 関数のグラフを拡大したときの様子 ``` from matplotlib import pyplot as plt import numpy as np def f(x): return(x**3 - x) delta = 2.0 x = np.linspace(0.5-delta, 0.5+delta, 200) y = f(x) fig = plt.figure(figsize=(6,6)) plt.ylim(-3.0/8.0-delta, -3.0/8.0+delta) plt.xlim(0.5-delta, 0.5+delta) plt.plot(x, y, 'b-', lw=1, c='k') plt.scatter([0.5], [-3.0/8.0]) plt.xlabel('x',fontsize=14) plt.ylabel('y',fontsize=14) plt.grid() plt.title('delta = %.4f' % delta, fontsize=14) plt.show() delta = 0.2 x = np.linspace(0.5-delta, 0.5+delta, 200) y = f(x) fig = plt.figure(figsize=(6,6)) plt.ylim(-3.0/8.0-delta, -3.0/8.0+delta) plt.xlim(0.5-delta, 0.5+delta) plt.plot(x, y, 'b-', lw=1, c='k') plt.scatter([0.5], [-3.0/8.0]) plt.xlabel('x',fontsize=14) plt.ylabel('y',fontsize=14) plt.grid() plt.title('delta = %.4f' % delta, fontsize=14) plt.show() delta = 0.01 x = np.linspace(0.5-delta, 0.5+delta, 200) y = f(x) fig = plt.figure(figsize=(6,6)) plt.ylim(-3.0/8.0-delta, -3.0/8.0+delta) plt.xlim(0.5-delta, 0.5+delta) plt.plot(x, y, 'b-', lw=1, c='k') plt.scatter(0.5, -3.0/8.0) plt.xlabel('x',fontsize=14) plt.ylabel('y',fontsize=14) plt.grid() plt.title('delta = %.4f' % delta, fontsize=14) plt.show() ``` ### 図2-8 関数のグラフ上の2点を結んだ直線の傾き ``` delta = 2.0 x = np.linspace(0.5-delta, 0.5+delta, 200) x1 = 0.6 x2 = 1.0 y = f(x) fig = plt.figure(figsize=(6,6)) plt.ylim(-1, 0.5) plt.xlim(0, 1.5) plt.plot(x, y, 'b-', lw=1, c='k') plt.scatter([x1, x2], [f(x1), f(x2)], c='k', lw=1) plt.plot([x1, x2], [f(x1), f(x2)], c='k', lw=1) plt.plot([x1, x2, x2], [f(x1), f(x1), f(x2)], c='k', lw=1) plt.tick_params(labelbottom=False, labelleft=False, labelright=False, labeltop=False) plt.tick_params(color='white') plt.show() ``` ### 図2-10 接線の方程式 ``` def f(x): return(x**2 - 4*x) def g(x): return(-2*x -1) x = np.linspace(-2, 6, 500) fig = plt.figure(figsize=(6,6)) plt.scatter([1],[-3],c='k') plt.plot(x, f(x), 'b-', 
lw=1, c='k') plt.plot(x, g(x), 'b-', lw=1, c='b') plt.plot([x.min(), x.max()], [0, 0], lw=2, c='k') plt.plot([0, 0], [g(x).min(), f(x).max()], lw=2, c='k') plt.grid(lw=2) plt.tick_params(labelbottom=False, labelleft=False, labelright=False, labeltop=False) plt.tick_params(color='white') plt.xlabel('X') plt.show() ``` ## 2.4 極大・極小 ### 図2-11 y= x3-3xのグラフと極大・極小 ``` def f1(x): return(x**3 - 3*x) x = np.linspace(-3, 3, 500) y = f1(x) fig = plt.figure(figsize=(6,6)) plt.ylim(-4, 4) plt.xlim(-3, 3) plt.plot(x, y, 'b-', lw=1, c='k') plt.plot([0,0],[-4,4],c='k') plt.plot([-3,3],[0,0],c='k') plt.grid() plt.show() ``` ### 図2-12 極大でも極小でもない例 (y=x3のグラフ) ``` def f2(x): return(x**3) x = np.linspace(-3, 3, 500) y = f2(x) fig = plt.figure(figsize=(6,6)) plt.ylim(-4, 4) plt.xlim(-3, 3) plt.plot(x, y, 'b-', lw=1, c='k') plt.plot([0,0],[-4,4],c='k') plt.plot([-3,3],[0,0],c='k') plt.grid() plt.show() ``` ## 2.7 合成関数の微分 ### 図2-14 逆関数の微分 ``` #逆関数の微分 def f(x): return(x**2 + 1) def g(x): return(np.sqrt(x - 1)) xx1 = np.linspace(0.0, 4.0, 200) xx2 = np.linspace(1.0, 4.0, 200) yy1 = f(xx1) yy2 = g(xx2) plt.figure(figsize=(6,6)) plt.xlabel('$x$',fontsize=14) plt.ylabel('$y$',fontsize=14) plt.ylim(-2.0, 4.0) plt.xlim(-2.0, 4.0) plt.grid() plt.plot(xx1,yy1, linestyle='-', color='blue') plt.plot(xx2,yy2, linestyle='-', color='blue') plt.plot([-2,4],[-2,4], color='black') plt.plot([-2,4],[0,0], color='black') plt.plot([0,0],[-2,4],color='black') plt.show() ``` ## 2.9 積分 ### 図2-15 面積を表す関数S(x)とf(x)の関係 ``` def f(x) : return x**2 + 1 xx = np.linspace(-4.0, 4.0, 200) yy = f(xx) plt.figure(figsize=(6,6)) plt.xlim(-2,2) plt.ylim(-1,4) plt.plot(xx, yy) plt.plot([-2,2],[0,0],c='k',lw=1) plt.plot([0,0],[-1,4],c='k',lw=1) plt.plot([0,0],[0,f(0)],c='b') plt.plot([1,1],[0,f(1)],c='b') plt.plot([1.5,1.5],[0,f(1.5)],c='b') plt.plot([1,1.5],[f(1),f(1)],c='b') plt.tick_params(labelbottom=False, labelleft=False, labelright=False, labeltop=False) plt.tick_params(color='white') plt.show() ``` ### 図2-16 グラフの面積と定積分 ``` plt.figure(figsize=(6,6)) plt.xlim(-2,2) plt.ylim(-1,4) plt.plot(xx, yy) plt.plot([-2,2],[0,0],c='k',lw=1) plt.plot([0,0],[-1,4],c='k',lw=1) plt.plot([0,0],[0,f(0)],c='b') plt.plot([1,1],[0,f(1)],c='b') plt.plot([1.5,1.5],[0,f(1.5)],c='b') plt.tick_params(labelbottom=False, labelleft=False, labelright=False, labeltop=False) plt.tick_params(color='white') plt.show() ``` ### 図2-17 積分と面積の関係 ``` def f(x) : return x**2 + 1 x = np.linspace(-1.0, 2.0, 200) y = f(x) N = 10 xx = np.linspace(0.5, 1.5, N+1) yy = f(xx) print(xx) plt.figure(figsize=(6,6)) plt.xlim(-1,2) plt.ylim(-1,4) plt.plot(x, y) plt.plot([-1,2],[0,0],c='k',lw=2) plt.plot([0,0],[-1,4],c='k',lw=2) plt.plot([0.5,0.5],[0,f(0.5)],c='b') plt.plot([1.5,1.5],[0,f(1.5)],c='b') plt.bar(xx[:-1], yy[:-1], align='edge', width=1/N*0.9) plt.tick_params(labelbottom=False, labelleft=False, labelright=False, labeltop=False) plt.tick_params(color='white') plt.grid() plt.show() ```
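As a small numerical companion to Fig. 2-17 (not part of the original figure code), the rectangles drawn there can be checked against the exact value of the integral of f(x) = x² + 1 over [0.5, 1.5], which is 25/12:

```
def f(x):
    return x**2 + 1

# left Riemann sums on [0.5, 1.5], matching the bars in Fig. 2-17
for N in (10, 100, 1000):
    xx = np.linspace(0.5, 1.5, N + 1)
    approx = np.sum(f(xx[:-1])) * (1.0 / N)
    print(N, approx)

print("exact:", 25 / 12)
```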
# ORF recognition by CNN Compare to ORF_CNN_101. Use 2-layer CNN. Run on Mac. ``` PC_SEQUENCES=20000 # how many protein-coding sequences NC_SEQUENCES=20000 # how many non-coding sequences PC_TESTS=1000 NC_TESTS=1000 BASES=1000 # how long is each sequence ALPHABET=4 # how many different letters are possible INPUT_SHAPE_2D = (BASES,ALPHABET,1) # Conv2D needs 3D inputs INPUT_SHAPE = (BASES,ALPHABET) # Conv1D needs 2D inputs FILTERS = 32 # how many different patterns the model looks for NEURONS = 16 WIDTH = 3 # how wide each pattern is, in bases STRIDE_2D = (1,1) # For Conv2D how far in each direction STRIDE = 1 # For Conv1D, how far between pattern matches, in bases EPOCHS=10 # how many times to train on all the data SPLITS=5 # SPLITS=3 means train on 2/3 and validate on 1/3 FOLDS=5 # train the model this many times (range 1 to SPLITS) import sys try: from google.colab import drive IN_COLAB = True print("On Google CoLab, mount cloud-local file, get our code from GitHub.") PATH='/content/drive/' #drive.mount(PATH,force_remount=True) # hardly ever need this #drive.mount(PATH) # Google will require login credentials DATAPATH=PATH+'My Drive/data/' # must end in "/" import requests r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/RNA_gen.py') with open('RNA_gen.py', 'w') as f: f.write(r.text) from RNA_gen import * r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/RNA_describe.py') with open('RNA_describe.py', 'w') as f: f.write(r.text) from RNA_describe import * r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/RNA_prep.py') with open('RNA_prep.py', 'w') as f: f.write(r.text) from RNA_prep import * except: print("CoLab not working. On my PC, use relative paths.") IN_COLAB = False DATAPATH='data/' # must end in "/" sys.path.append("..") # append parent dir in order to use sibling dirs from SimTools.RNA_gen import * from SimTools.RNA_describe import * from SimTools.RNA_prep import * MODELPATH="BestModel" # saved on cloud instance and lost after logout #MODELPATH=DATAPATH+MODELPATH # saved on Google Drive but requires login if not assert_imported_RNA_gen(): print("ERROR: Cannot use RNA_gen.") if not assert_imported_RNA_prep(): print("ERROR: Cannot use RNA_prep.") from os import listdir import time # datetime import csv from zipfile import ZipFile import numpy as np import pandas as pd from scipy import stats # mode from sklearn.preprocessing import StandardScaler from sklearn.model_selection import KFold from sklearn.model_selection import cross_val_score from keras.models import Sequential from keras.layers import Dense,Embedding from keras.layers import Conv1D,Conv2D from keras.layers import Flatten,MaxPooling1D,MaxPooling2D from keras.losses import BinaryCrossentropy # tf.keras.losses.BinaryCrossentropy import matplotlib.pyplot as plt from matplotlib import colors mycmap = colors.ListedColormap(['red','blue']) # list color for label 0 then 1 np.set_printoptions(precision=2) t = time.time() time.strftime('%Y-%m-%d %H:%M:%S %Z', time.localtime(t)) # Use code from our SimTools library. 
def make_generators(seq_len): pcgen = Collection_Generator() pcgen.get_len_oracle().set_mean(seq_len) pcgen.set_seq_oracle(Transcript_Oracle()) ncgen = Collection_Generator() ncgen.get_len_oracle().set_mean(seq_len) return pcgen,ncgen pc_sim,nc_sim = make_generators(BASES) pc_train = pc_sim.get_sequences(PC_SEQUENCES) nc_train = nc_sim.get_sequences(NC_SEQUENCES) print("Train on",len(pc_train),"PC seqs") print("Train on",len(nc_train),"NC seqs") # Use code from our LearnTools library. X,y = prepare_inputs_len_x_alphabet(pc_train,nc_train,ALPHABET) # shuffles print("Data ready.") def make_DNN(): print("make_DNN") print("input shape:",INPUT_SHAPE) dnn = Sequential() #dnn.add(Embedding(input_dim=INPUT_SHAPE,output_dim=INPUT_SHAPE)) dnn.add(Conv1D(filters=FILTERS,kernel_size=WIDTH,strides=STRIDE,padding="same", input_shape=INPUT_SHAPE)) dnn.add(Conv1D(filters=FILTERS,kernel_size=WIDTH,strides=STRIDE,padding="same")) dnn.add(MaxPooling1D()) dnn.add(Conv1D(filters=FILTERS,kernel_size=WIDTH,strides=STRIDE,padding="same")) dnn.add(Conv1D(filters=FILTERS,kernel_size=WIDTH,strides=STRIDE,padding="same")) dnn.add(MaxPooling1D()) dnn.add(Flatten()) dnn.add(Dense(NEURONS,activation="sigmoid",dtype=np.float32)) dnn.add(Dense(1,activation="sigmoid",dtype=np.float32)) dnn.compile(optimizer='adam', loss=BinaryCrossentropy(from_logits=False), metrics=['accuracy']) # add to default metrics=loss dnn.build(input_shape=INPUT_SHAPE) #ln_rate = tf.keras.optimizers.Adam(learning_rate = LN_RATE) #bc=tf.keras.losses.BinaryCrossentropy(from_logits=False) #model.compile(loss=bc, optimizer=ln_rate, metrics=["accuracy"]) return dnn model = make_DNN() print(model.summary()) from keras.callbacks import ModelCheckpoint def do_cross_validation(X,y): cv_scores = [] fold=0 mycallbacks = [ModelCheckpoint( filepath=MODELPATH, save_best_only=True, monitor='val_accuracy', mode='max')] splitter = KFold(n_splits=SPLITS) # this does not shuffle for train_index,valid_index in splitter.split(X): if fold < FOLDS: fold += 1 X_train=X[train_index] # inputs for training y_train=y[train_index] # labels for training X_valid=X[valid_index] # inputs for validation y_valid=y[valid_index] # labels for validation print("MODEL") # Call constructor on each CV. Else, continually improves the same model. 
model = model = make_DNN() print("FIT") # model.fit() implements learning start_time=time.time() history=model.fit(X_train, y_train, epochs=EPOCHS, verbose=1, # ascii art while learning callbacks=mycallbacks, # called at end of each epoch validation_data=(X_valid,y_valid)) end_time=time.time() elapsed_time=(end_time-start_time) print("Fold %d, %d epochs, %d sec"%(fold,EPOCHS,elapsed_time)) # print(history.history.keys()) # all these keys will be shown in figure pd.DataFrame(history.history).plot(figsize=(8,5)) plt.grid(True) plt.gca().set_ylim(0,1) # any losses > 1 will be off the scale plt.show() do_cross_validation(X,y) from keras.models import load_model pc_test = pc_sim.get_sequences(PC_TESTS) nc_test = nc_sim.get_sequences(NC_TESTS) X,y = prepare_inputs_len_x_alphabet(pc_test,nc_test,ALPHABET) best_model=load_model(MODELPATH) scores = best_model.evaluate(X, y, verbose=0) print("The best model parameters were saved during cross-validation.") print("Best was defined as maximum validation accuracy at end of any epoch.") print("Now re-load the best model and test it on previously unseen data.") print("Test on",len(pc_test),"PC seqs") print("Test on",len(nc_test),"NC seqs") print("%s: %.2f%%" % (best_model.metrics_names[1], scores[1]*100)) from sklearn.metrics import roc_curve from sklearn.metrics import roc_auc_score ns_probs = [0 for _ in range(len(y))] bm_probs = best_model.predict(X) ns_auc = roc_auc_score(y, ns_probs) bm_auc = roc_auc_score(y, bm_probs) ns_fpr, ns_tpr, _ = roc_curve(y, ns_probs) bm_fpr, bm_tpr, _ = roc_curve(y, bm_probs) plt.plot(ns_fpr, ns_tpr, linestyle='--', label='Guess, auc=%.4f'%ns_auc) plt.plot(bm_fpr, bm_tpr, marker='.', label='Model, auc=%.4f'%bm_auc) plt.title('ROC') plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.legend() plt.show() print("%s: %.2f%%" %('AUC',bm_auc)) ```
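As an optional extra (not in the original run), the same held-out predictions can be summarized with a confusion matrix at a 0.5 decision threshold, reusing the probabilities already computed for the ROC curve:

```
from sklearn.metrics import confusion_matrix

y_hat = (np.asarray(bm_probs).ravel() > 0.5).astype(int)
print(confusion_matrix(np.asarray(y).ravel(), y_hat))
```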
# Use BlackJAX with Numpyro BlackJAX can take any log-probability function as long as it is compatible with JAX's JIT. In this notebook we show how we can use Numpyro as a modeling language and BlackJAX as an inference library. We reproduce the Eight Schools example from the [Numpyro documentation](https://github.com/pyro-ppl/numpyro) (all credit for the model goes to the Numpyro team). For this notebook to run you will need to install Numpyro: ```bash pip install numpyro ``` ``` import jax import numpy as np import numpyro import numpyro.distributions as dist from numpyro.infer.reparam import TransformReparam from numpyro.infer.util import initialize_model import blackjax num_warmup = 1000 # We can use this notebook for simple benchmarking by setting # below to True and run from Terminal. # $ipython examples/use_with_numpyro.ipynb RUN_BENCHMARK = False if RUN_BENCHMARK: num_sample = 5_000_000 print(f"Benchmark with {num_warmup} warmup steps and {num_sample} sampling steps.") else: num_sample = 10_000 ``` ## Data ``` # Data of the Eight Schools Model J = 8 y = np.array([28.0, 8.0, -3.0, 7.0, -1.0, 1.0, 18.0, 12.0]) sigma = np.array([15.0, 10.0, 16.0, 11.0, 9.0, 11.0, 10.0, 18.0]) ``` ## Model We use the non-centered version of the model described towards the end of the README on Numpyro's repository: ``` # Eight Schools example - Non-centered Reparametrization def eight_schools_noncentered(J, sigma, y=None): mu = numpyro.sample("mu", dist.Normal(0, 5)) tau = numpyro.sample("tau", dist.HalfCauchy(5)) with numpyro.plate("J", J): with numpyro.handlers.reparam(config={"theta": TransformReparam()}): theta = numpyro.sample( "theta", dist.TransformedDistribution( dist.Normal(0.0, 1.0), dist.transforms.AffineTransform(mu, tau) ), ) numpyro.sample("obs", dist.Normal(theta, sigma), obs=y) ``` We need to translate the model into a log-probability function that will be used by BlackJAX to perform inference. For that we use the `initialize_model` function in Numpyro's internals. We will also use the initial position it returns: ``` rng_key = jax.random.PRNGKey(0) init_params, potential_fn_gen, *_ = initialize_model( rng_key, eight_schools_noncentered, model_args=(J, sigma, y), dynamic_args=True, ) ``` Now we create the potential using the `potential_fn_gen` provided by Numpyro and initialize the NUTS state with BlackJAX: ``` if RUN_BENCHMARK: print("\nBlackjax:") print("-> Running warmup.") ``` We now run the window adaptation in BlackJAX: ``` %%time initial_position = init_params.z logprob = lambda position: -potential_fn_gen(J, sigma, y)(position) adapt = blackjax.window_adaptation( blackjax.nuts, logprob, num_warmup, target_acceptance_rate=0.8 ) last_state, kernel, _ = adapt.run(rng_key, initial_position) ``` Let us now perform inference using the previously computed step size and inverse mass matrix. 
We also time the sampling to give you an idea of how fast BlackJAX can be on simple models: ``` if RUN_BENCHMARK: print("-> Running sampling.") %%time def inference_loop(rng_key, kernel, initial_state, num_samples): @jax.jit def one_step(state, rng_key): state, info = kernel(rng_key, state) return state, (state, info) keys = jax.random.split(rng_key, num_samples) _, (states, infos) = jax.lax.scan(one_step, initial_state, keys) return states, ( infos.acceptance_probability, infos.is_divergent, infos.integration_steps, ) # Sample from the posterior distribution states, infos = inference_loop(rng_key, kernel, last_state, num_sample) _ = states.position["mu"].block_until_ready() ``` Let us compute the average acceptance probability and check the number of divergences (to make sure that the model sampled correctly, and that the sampling time is not a result of a majority of divergent transitions): ``` acceptance_rate = np.mean(infos[0]) num_divergent = np.mean(infos[1]) print(f"\nAcceptance rate: {acceptance_rate:.2f}") print(f"{100*num_divergent:.2f}% divergent transitions") ``` Let us now plot the distribution of the parameters. Note that since we use a transformed variable, Numpyro does not output the school treatment effect directly: ``` if not RUN_BENCHMARK: import seaborn as sns from matplotlib import pyplot as plt samples = states.position fig, axes = plt.subplots(ncols=2) fig.set_size_inches(12, 5) sns.kdeplot(samples["mu"], ax=axes[0]) sns.kdeplot(samples["tau"], ax=axes[1]) axes[0].set_xlabel("mu") axes[1].set_xlabel("tau") fig.tight_layout() if not RUN_BENCHMARK: fig, axes = plt.subplots(8, 2, sharex="col", sharey="col") fig.set_size_inches(12, 10) for i in range(J): axes[i][0].plot(samples["theta_base"][:, i]) axes[i][0].title.set_text(f"School {i} relative treatment effect chain") sns.kdeplot(samples["theta_base"][:, i], ax=axes[i][1], shade=True) axes[i][1].title.set_text(f"School {i} relative treatment effect distribution") axes[J - 1][0].set_xlabel("Iteration") axes[J - 1][1].set_xlabel("School effect") fig.tight_layout() plt.show() if not RUN_BENCHMARK: for i in range(J): print( f"Relative treatment effect for school {i}: {np.mean(samples['theta_base'][:, i]):.2f}" ) ``` ## Compare sampling time with Numpyro We compare the time it took BlackJAX to do the warmup for 1,000 iterations and then taking 100,000 samples with Numpyro's: ``` from numpyro.infer import MCMC, NUTS if RUN_BENCHMARK: print("\nNumpyro:") print("-> Running warmup+sampling.") %%time nuts_kernel = NUTS(eight_schools_noncentered, target_accept_prob=0.8) mcmc = MCMC( nuts_kernel, num_warmup=num_warmup, num_samples=num_sample, progress_bar=False ) rng_key = jax.random.PRNGKey(0) mcmc.run(rng_key, J, sigma, y=y, extra_fields=("num_steps", "accept_prob")) samples = mcmc.get_samples() _ = samples["mu"].block_until_ready() print(f"\nAcceptance rate: {mcmc.get_extra_fields()['accept_prob'].mean():.2f}") print(f"{100*mcmc.get_extra_fields()['diverging'].mean():.2f}% divergent transitions") print(f"\nBlackjax average {infos[2].mean():.2f} leapfrog per iteration.") print( f"Numpyro average {mcmc.get_extra_fields()['num_steps'].mean():.2f} leapfrog per iteration." ) ```
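As a small numeric addition (not in the original notebook), the BlackJAX draws plotted above can also be summarized with means and percentile intervals; this only restates the `mu` and `tau` draws already stored in `states.position`:

```
bj = states.position
for name in ("mu", "tau"):
    draws = np.asarray(bj[name])
    lo, hi = np.percentile(draws, [3, 97])
    print(f"{name}: mean {draws.mean():.2f}, 94% interval [{lo:.2f}, {hi:.2f}]")
```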
# Machine Translation English-German Example Using SageMaker Seq2Seq 1. [Introduction](#Introduction) 2. [Setup](#Setup) 3. [Download dataset and preprocess](#Download-dataset-and-preprocess) 3. [Training the Machine Translation model](#Training-the-Machine-Translation-model) 4. [Inference](#Inference) ## Introduction Welcome to our Machine Translation end-to-end example! In this demo, we will train a English-German translation model and will test the predictions on a few examples. SageMaker Seq2Seq algorithm is built on top of [Sockeye](https://github.com/awslabs/sockeye), a sequence-to-sequence framework for Neural Machine Translation based on MXNet. SageMaker Seq2Seq implements state-of-the-art encoder-decoder architectures which can also be used for tasks like Abstractive Summarization in addition to Machine Translation. To get started, we need to set up the environment with a few prerequisite steps, for permissions, configurations, and so on. ## Setup Let's start by specifying: - The S3 bucket and prefix that you want to use for training and model data. **This should be within the same region as the Notebook Instance, training, and hosting.** - The IAM role arn used to give training and hosting access to your data. See the documentation for how to create these. Note, if more than one role is required for notebook instances, training, and/or hosting, please replace the boto regexp in the cell below with a the appropriate full IAM role arn string(s). ``` # S3 bucket and prefix bucket = '<your_s3_bucket_name_here>' prefix = 'sagemaker/<your_s3_prefix_here>' # E.g.'sagemaker/seq2seq/eng-german' import boto3 import re from sagemaker import get_execution_role role = get_execution_role() ``` Next, we'll import the Python libraries we'll need for the remainder of the exercise. ``` from time import gmtime, strftime import time import numpy as np import os import json # For plotting attention matrix later on import matplotlib %matplotlib inline import matplotlib.pyplot as plt ``` ## Download dataset and preprocess In this notebook, we will train a English to German translation model on a dataset from the [Conference on Machine Translation (WMT) 2017](http://www.statmt.org/wmt17/). ``` %%bash wget http://data.statmt.org/wmt17/translation-task/preprocessed/de-en/corpus.tc.de.gz & \ wget http://data.statmt.org/wmt17/translation-task/preprocessed/de-en/corpus.tc.en.gz & wait gunzip corpus.tc.de.gz & \ gunzip corpus.tc.en.gz & wait mkdir validation curl http://data.statmt.org/wmt17/translation-task/preprocessed/de-en/dev.tgz | tar xvzf - -C validation ``` Please note that it is a common practise to split words into subwords using Byte Pair Encoding (BPE). Please refer to [this](https://github.com/awslabs/sockeye/tree/master/tutorials/wmt) tutorial if you are interested in performing BPE. Since training on the whole dataset might take several hours/days, for this demo, let us train on the **first 10,000 lines only**. Don't run the next cell if you want to train on the complete dataset. ``` !head -n 10000 corpus.tc.en > corpus.tc.en.small !head -n 10000 corpus.tc.de > corpus.tc.de.small ``` Now, let's use the preprocessing script `create_vocab_proto.py` (provided with this notebook) to create vocabulary mappings (strings to integers) and convert these files to x-recordio-protobuf as required for training by SageMaker Seq2Seq. Uncomment the cell below and run to see check the arguments this script expects. ``` %%bash # python3 create_vocab_proto.py -h ``` The cell below does the preprocessing. 
If you are using the complete dataset, the script might take around 10-15 min on an m4.xlarge notebook instance. Remove ".small" from the file names for training on full datasets. ``` %%time %%bash python3 create_vocab_proto.py \ --train-source corpus.tc.en.small \ --train-target corpus.tc.de.small \ --val-source validation/newstest2014.tc.en \ --val-target validation/newstest2014.tc.de ``` The script will output 4 files, namely: - train.rec : Contains source and target sentences for training in protobuf format - val.rec : Contains source and target sentences for validation in protobuf format - vocab.src.json : Vocabulary mapping (string to int) for source language (English in this example) - vocab.trg.json : Vocabulary mapping (string to int) for target language (German in this example) Let's upload the pre-processed dataset and vocabularies to S3 ``` def upload_to_s3(bucket, prefix, channel, file): s3 = boto3.resource('s3') data = open(file, "rb") key = prefix + "/" + channel + '/' + file s3.Bucket(bucket).put_object(Key=key, Body=data) upload_to_s3(bucket, prefix, 'train', 'train.rec') upload_to_s3(bucket, prefix, 'validation', 'val.rec') upload_to_s3(bucket, prefix, 'vocab', 'vocab.src.json') upload_to_s3(bucket, prefix, 'vocab', 'vocab.trg.json') region_name = boto3.Session().region_name containers = {'us-west-2': '433757028032.dkr.ecr.us-west-2.amazonaws.com/seq2seq:latest', 'us-east-1': '811284229777.dkr.ecr.us-east-1.amazonaws.com/seq2seq:latest', 'us-east-2': '825641698319.dkr.ecr.us-east-2.amazonaws.com/seq2seq:latest', 'eu-west-1': '685385470294.dkr.ecr.eu-west-1.amazonaws.com/seq2seq:latest'} container = containers[region_name] print('Using SageMaker Seq2Seq container: {} ({})'.format(container, region_name)) ``` ## Training the Machine Translation model ``` job_name = 'seq2seq-en-de-p2-xlarge-' + strftime("%Y-%m-%d-%H", gmtime()) print("Training job", job_name) create_training_params = \ { "AlgorithmSpecification": { "TrainingImage": container, "TrainingInputMode": "File" }, "RoleArn": role, "OutputDataConfig": { "S3OutputPath": "s3://{}/{}/".format(bucket, prefix) }, "ResourceConfig": { # Seq2Seq does not support multiple machines. Currently, it only supports single machine, multiple GPUs "InstanceCount": 1, "InstanceType": "ml.p2.xlarge", # We suggest one of ["ml.p2.16xlarge", "ml.p2.8xlarge", "ml.p2.xlarge"] "VolumeSizeInGB": 50 }, "TrainingJobName": job_name, "HyperParameters": { # Please refer to the documentation for complete list of parameters "max_seq_len_source": "60", "max_seq_len_target": "60", "optimized_metric": "bleu", "batch_size": "64", # Please use a larger batch size (256 or 512) if using ml.p2.8xlarge or ml.p2.16xlarge "checkpoint_frequency_num_batches": "1000", "rnn_num_hidden": "512", "num_layers_encoder": "1", "num_layers_decoder": "1", "num_embed_source": "512", "num_embed_target": "512", "checkpoint_threshold": "3", "max_num_batches": "2100" # Training will stop after 2100 iterations/batches. # This is just for demo purposes. Remove the above parameter if you want a better model. 
}, "StoppingCondition": { "MaxRuntimeInSeconds": 48 * 3600 }, "InputDataConfig": [ { "ChannelName": "train", "DataSource": { "S3DataSource": { "S3DataType": "S3Prefix", "S3Uri": "s3://{}/{}/train/".format(bucket, prefix), "S3DataDistributionType": "FullyReplicated" } }, }, { "ChannelName": "vocab", "DataSource": { "S3DataSource": { "S3DataType": "S3Prefix", "S3Uri": "s3://{}/{}/vocab/".format(bucket, prefix), "S3DataDistributionType": "FullyReplicated" } }, }, { "ChannelName": "validation", "DataSource": { "S3DataSource": { "S3DataType": "S3Prefix", "S3Uri": "s3://{}/{}/validation/".format(bucket, prefix), "S3DataDistributionType": "FullyReplicated" } }, } ] } sagemaker_client = boto3.Session().client(service_name='sagemaker') sagemaker_client.create_training_job(**create_training_params) status = sagemaker_client.describe_training_job(TrainingJobName=job_name)['TrainingJobStatus'] print(status) status = sagemaker_client.describe_training_job(TrainingJobName=job_name)['TrainingJobStatus'] print(status) # if the job failed, determine why if status == 'Failed': message = sage.describe_training_job(TrainingJobName=job_name)['FailureReason'] print('Training failed with the following error: {}'.format(message)) raise Exception('Training job failed') ``` > Now wait for the training job to complete and proceed to the next step after you see model artifacts in your S3 bucket. You can jump to [Use a pretrained model](#Use-a-pretrained-model) as training might take some time. ## Inference A trained model does nothing on its own. We now want to use the model to perform inference. For this example, that means translating sentence(s) from English to German. This section involves several steps, - Create model - Create a model using the artifact (model.tar.gz) produced by training - Create Endpoint Configuration - Create a configuration defining an endpoint, using the above model - Create Endpoint - Use the configuration to create an inference endpoint. - Perform Inference - Perform inference on some input data using the endpoint. ### Create model We now create a SageMaker Model from the training output. Using the model, we can then create an Endpoint Configuration. ``` use_pretrained_model = False ``` ### Use a pretrained model #### Please uncomment and run the cell below if you want to use a pretrained model, as training might take several hours/days to complete. 
``` # use_pretrained_model = True # model_name = "pretrained-en-de-model" # !curl https://s3-us-west-2.amazonaws.com/gsaur-seq2seq-data/seq2seq/eng-german/full-nb-translation-eng-german-p2-16x-2017-11-24-22-25-53/output/model.tar.gz > model.tar.gz # !curl https://s3-us-west-2.amazonaws.com/gsaur-seq2seq-data/seq2seq/eng-german/full-nb-translation-eng-german-p2-16x-2017-11-24-22-25-53/output/vocab.src.json > vocab.src.json # !curl https://s3-us-west-2.amazonaws.com/gsaur-seq2seq-data/seq2seq/eng-german/full-nb-translation-eng-german-p2-16x-2017-11-24-22-25-53/output/vocab.trg.json > vocab.trg.json # upload_to_s3(bucket, prefix, 'pretrained_model', 'model.tar.gz') # model_data = "s3://{}/{}/pretrained_model/model.tar.gz".format(bucket, prefix) %%time sage = boto3.client('sagemaker') if not use_pretrained_model: info = sage.describe_training_job(TrainingJobName=job_name) model_name=job_name model_data = info['ModelArtifacts']['S3ModelArtifacts'] print(model_name) print(model_data) primary_container = { 'Image': container, 'ModelDataUrl': model_data } create_model_response = sage.create_model( ModelName = model_name, ExecutionRoleArn = role, PrimaryContainer = primary_container) print(create_model_response['ModelArn']) ``` ### Create endpoint configuration Use the model to create an endpoint configuration. The endpoint configuration also contains information about the type and number of EC2 instances to use when hosting the model. Since SageMaker Seq2Seq is based on Neural Nets, we could use an ml.p2.xlarge (GPU) instance, but for this example we will use a free tier eligible ml.m4.xlarge. ``` from time import gmtime, strftime endpoint_config_name = 'Seq2SeqEndpointConfig-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime()) print(endpoint_config_name) create_endpoint_config_response = sage.create_endpoint_config( EndpointConfigName = endpoint_config_name, ProductionVariants=[{ 'InstanceType':'ml.m4.xlarge', 'InitialInstanceCount':1, 'ModelName':model_name, 'VariantName':'AllTraffic'}]) print("Endpoint Config Arn: " + create_endpoint_config_response['EndpointConfigArn']) ``` ### Create endpoint Lastly, we create the endpoint that serves up model, through specifying the name and configuration defined above. The end result is an endpoint that can be validated and incorporated into production applications. This takes 10-15 minutes to complete. ``` %%time import time endpoint_name = 'Seq2SeqEndpoint-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime()) print(endpoint_name) create_endpoint_response = sage.create_endpoint( EndpointName=endpoint_name, EndpointConfigName=endpoint_config_name) print(create_endpoint_response['EndpointArn']) resp = sage.describe_endpoint(EndpointName=endpoint_name) status = resp['EndpointStatus'] print("Status: " + status) # wait until the status has changed sage.get_waiter('endpoint_in_service').wait(EndpointName=endpoint_name) # print the status of the endpoint endpoint_response = sage.describe_endpoint(EndpointName=endpoint_name) status = endpoint_response['EndpointStatus'] print('Endpoint creation ended with EndpointStatus = {}'.format(status)) if status != 'InService': raise Exception('Endpoint creation failed.') ``` If you see the message, > Endpoint creation ended with EndpointStatus = InService then congratulations! You now have a functioning inference endpoint. You can confirm the endpoint configuration and status by navigating to the "Endpoints" tab in the AWS SageMaker console. We will finally create a runtime object from which we can invoke the endpoint. 
``` runtime = boto3.client(service_name='runtime.sagemaker') ``` # Perform Inference ### Using JSON format for inference (Suggested for a single or small number of data instances) #### Note that you don't have to convert string to text using the vocabulary mapping for inference using JSON mode ``` sentences = ["you are so good !", "can you drive a car ?", "i want to watch a movie ." ] payload = {"instances" : []} for sent in sentences: payload["instances"].append({"data" : sent}) response = runtime.invoke_endpoint(EndpointName=endpoint_name, ContentType='application/json', Body=json.dumps(payload)) response = response["Body"].read().decode("utf-8") response = json.loads(response) print(response) ``` ### Retrieving the Attention Matrix Passing `"attention_matrix":"true"` in `configuration` of the data instance will return the attention matrix. ``` sentence = 'can you drive a car ?' payload = {"instances" : [{ "data" : sentence, "configuration" : {"attention_matrix":"true"} } ]} response = runtime.invoke_endpoint(EndpointName=endpoint_name, ContentType='application/json', Body=json.dumps(payload)) response = response["Body"].read().decode("utf-8") response = json.loads(response)['predictions'][0] source = sentence target = response["target"] attention_matrix = np.array(response["matrix"]) print("Source: %s \nTarget: %s" % (source, target)) # Define a function for plotting the attentioan matrix def plot_matrix(attention_matrix, target, source): source_tokens = source.split() target_tokens = target.split() assert attention_matrix.shape[0] == len(target_tokens) plt.imshow(attention_matrix.transpose(), interpolation="nearest", cmap="Greys") plt.xlabel("target") plt.ylabel("source") plt.gca().set_xticks([i for i in range(0, len(target_tokens))]) plt.gca().set_yticks([i for i in range(0, len(source_tokens))]) plt.gca().set_xticklabels(target_tokens) plt.gca().set_yticklabels(source_tokens) plt.tight_layout() plot_matrix(attention_matrix, target, source) ``` ### Using Protobuf format for inference (Suggested for efficient bulk inference) Reading the vocabulary mappings as this mode of inference accepts list of integers and returns list of integers. ``` import io import tempfile from record_pb2 import Record from create_vocab_proto import vocab_from_json, reverse_vocab, write_recordio, list_to_record_bytes, read_next source = vocab_from_json("vocab.src.json") target = vocab_from_json("vocab.trg.json") source_rev = reverse_vocab(source) target_rev = reverse_vocab(target) sentences = ["this is so cool", "i am having dinner .", "i am sitting in an aeroplane .", "come let us go for a long drive ."] ``` Converting the string to integers, followed by protobuf encoding: ``` # Convert strings to integers using source vocab mapping. 
Out-of-vocabulary strings are mapped to 1 - the mapping for <unk> sentences = [[source.get(token, 1) for token in sentence.split()] for sentence in sentences] f = io.BytesIO() for sentence in sentences: record = list_to_record_bytes(sentence, []) write_recordio(f, record) response = runtime.invoke_endpoint(EndpointName=endpoint_name, ContentType='application/x-recordio-protobuf', Body=f.getvalue()) response = response["Body"].read() ``` Now, parse the protobuf response and convert list of integers back to strings ``` def _parse_proto_response(received_bytes): output_file = tempfile.NamedTemporaryFile() output_file.write(received_bytes) output_file.flush() target_sentences = [] with open(output_file.name, 'rb') as datum: next_record = True while next_record: next_record = read_next(datum) if next_record: rec = Record() rec.ParseFromString(next_record) target = list(rec.features["target"].int32_tensor.values) target_sentences.append(target) else: break return target_sentences targets = _parse_proto_response(response) resp = [" ".join([target_rev.get(token, "<unk>") for token in sentence]) for sentence in targets] print(resp) ``` # Stop / Close the Endpoint (Optional) Finally, we should delete the endpoint before we close the notebook. ``` sage.delete_endpoint(EndpointName=endpoint_name) ```
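If a full clean-up is wanted (optional, and not shown above), the endpoint configuration and the model created earlier can be deleted as well:

```
sage.delete_endpoint_config(EndpointConfigName=endpoint_config_name)
sage.delete_model(ModelName=model_name)
```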
# Let's Grow your Own Inner Core! ### Choose a model in the list: - geodyn_trg.TranslationGrowthRotation() - geodyn_static.Hemispheres() ### Choose a proxy type: - age - position - phi - theta - growth rate ### set the parameters for the model : geodynModel.set_parameters(parameters) ### set the units : geodynModel.define_units() ### Choose a data set: - data.SeismicFromFile(filename) # Lauren's data set - data.RandomData(numbers_of_points) - data.PerfectSamplingEquator(numbers_of_points) organized on a cartesian grid. numbers_of_points is the number of points along the x or y axis. The total number of points is numbers_of_points**2*pi/4 - as a special plot function to show streamlines: plot_c_vec(self,modelgeodyn) - data.PerfectSamplingEquatorRadial(Nr, Ntheta) same than below, but organized on a polar grid, not a cartesian grid. ### Extract the info: - calculate the proxy value for all points of the data set: geodyn.evaluate_proxy(data_set, geodynModel) - extract the positions as numpy arrays: extract_rtp or extract_xyz - calculate other variables: positions.angular_distance_to_point(t,p, t_point, p_point) ``` %matplotlib inline # import statements import numpy as np import matplotlib.pyplot as plt #for figures from mpl_toolkits.basemap import Basemap #to render maps import math import json #to write dict with parameters from GrowYourIC import positions, geodyn, geodyn_trg, geodyn_static, plot_data, data plt.rcParams['figure.figsize'] = (8.0, 3.0) #size of figures cm = plt.cm.get_cmap('viridis') cm2 = plt.cm.get_cmap('winter') ``` ## Define the geodynamical model Un-comment one of the model ``` ## un-comment one of them geodynModel = geodyn_trg.TranslationGrowthRotation() #can do all the models presented in the paper # geodynModel = geodyn_static.Hemispheres() #this is a static model, only hemispheres. ``` Change the values of the parameters to get the model you want (here, parameters for .TranslationGrowthRotation()) ``` age_ic_dim = 1e9 #in years rICB_dim = 1221. #in km v_g_dim = rICB_dim/age_ic_dim # in km/years #growth rate print("Growth rate is {:.2e} km/years".format(v_g_dim)) v_g_dim_seconds = v_g_dim*1e3/(np.pi*1e7) translation_velocity_dim = 0.8*v_g_dim_seconds#4e-10 #0.8*v_g_dim_seconds#4e-10 #m.s, value for today's Earth with Q_cmb = 10TW (see Alboussiere et al. 2010) time_translation = rICB_dim*1e3/translation_velocity_dim /(np.pi*1e7) maxAge = 2.*time_translation/1e6 print("The translation recycles the inner core material in {0:.2e} million years".format(maxAge)) print("Translation velocity is {0:.2e} km/years".format(translation_velocity_dim*np.pi*1e7/1e3)) units = None #we give them already dimensionless parameters. rICB = 1. age_ic = 1. omega = 0.#0.5*np.pi/200e6*age_ic_dim#0.5*np.pi #0. #0.5*np.pi/200e6*age_ic_dim# 0.#0.5*np.pi#0.#0.5*np.pi/200e6*age_ic_dim #0. 
#-0.5*np.pi # Rotation rates has to be in ]-np.pi, np.pi[ print("Rotation rate is {:.2e}".format(omega)) velocity_amplitude = translation_velocity_dim*age_ic_dim*np.pi*1e7/rICB_dim/1e3 velocity_center = [0., 100.]#center of the eastern hemisphere velocity = geodyn_trg.translation_velocity(velocity_center, velocity_amplitude) exponent_growth = 1.#0.1#1 print(v_g_dim, velocity_amplitude, omega/age_ic_dim*180/np.pi*1e6) ``` Define a proxy type, and a proxy name (to be used in the figures to annotate the axes) You can re-define it later if you want (or define another proxy_type2 if needed) ``` proxy_type = "age"#"growth rate" proxy_name = "age (Myears)" #growth rate (km/Myears)" proxy_lim = [0, maxAge] #or None #proxy_lim = None fig_name = "figures/test_" #to name the figures print(rICB, age_ic, velocity_amplitude, omega, exponent_growth, proxy_type) print(velocity) ``` ### Parameters for the geodynamical model This will input the different parameters in the model. ``` parameters = dict({'units': units, 'rICB': rICB, 'tau_ic':age_ic, 'vt': velocity, 'exponent_growth': exponent_growth, 'omega': omega, 'proxy_type': proxy_type}) geodynModel.set_parameters(parameters) geodynModel.define_units() param = parameters param['vt'] = parameters['vt'].tolist() #for json serialization # write file with parameters, readable with json, byt also human-readable with open(fig_name+'parameters.json', 'w') as f: json.dump(param, f) print(parameters) ``` ## Different data set and visualisations ### Perfect sampling at the equator (to visualise the flow lines) You can add more points to get a better precision. ``` npoints = 10 #number of points in the x direction for the data set. data_set = data.PerfectSamplingEquator(npoints, rICB = 1.) data_set.method = "bt_point" proxy = geodyn.evaluate_proxy(data_set, geodynModel, proxy_type="age", verbose = False) data_set.plot_c_vec(geodynModel, proxy=proxy, cm=cm, nameproxy="age (Myears)") plt.savefig(fig_name+"equatorial_plot.pdf", bbox_inches='tight') ``` ### Perfect sampling in the first 100km (to visualise the depth evolution) ``` data_meshgrid = data.Equator_upperpart(10,10) data_meshgrid.method = "bt_point" proxy_meshgrid = geodyn.evaluate_proxy(data_meshgrid, geodynModel, proxy_type=proxy_type, verbose = False) #r, t, p = data_meshgrid.extract_rtp("bottom_turning_point") fig3, ax3 = plt.subplots(figsize=(8, 2)) X, Y, Z = data_meshgrid.mesh_RPProxy(proxy_meshgrid) sc = ax3.contourf(Y, rICB_dim*(1.-X), Z, 100, cmap=cm) sc2 = ax3.contour(sc, levels=sc.levels[::15], colors = "k") ax3.set_ylim(-0, 120) fig3.gca().invert_yaxis() ax3.set_xlim(-180,180) cbar = fig3.colorbar(sc) #cbar.set_clim(0, maxAge) cbar.set_label(proxy_name) ax3.set_xlabel("longitude") ax3.set_ylabel("depth below ICB (km)") plt.savefig(fig_name+"meshgrid.pdf", bbox_inches='tight') npoints = 20 #number of points in the x direction for the data set. 
data_set = data.PerfectSamplingSurface(npoints, rICB = 1., depth=0.01) data_set.method = "bt_point" proxy_surface = geodyn.evaluate_proxy(data_set, geodynModel, proxy_type=proxy_type, verbose = False) #r, t, p = data_set.extract_rtp("bottom_turning_point") X, Y, Z = data_set.mesh_TPProxy(proxy_surface) ## map m, fig = plot_data.setting_map() y, x = m(Y, X) sc = m.contourf(y, x, Z, 30, cmap=cm, zorder=2, edgecolors='none') plt.title("Dataset: {},\n geodynamic model: {}".format(data_set.name, geodynModel.name)) cbar = plt.colorbar(sc) cbar.set_label(proxy_name) fig.savefig(fig_name+"map_surface.pdf", bbox_inches='tight') ``` ### Random data set, in the first 100km - bottom turning point only #### Calculate the data ``` # random data set data_set_random = data.RandomData(300) data_set_random.method = "bt_point" proxy_random = geodyn.evaluate_proxy(data_set_random, geodynModel, proxy_type=proxy_type, verbose=False) data_path = "../GrowYourIC/data/" geodynModel.data_path = data_path if proxy_type == "age": # ## domain size and Vp proxy_random_size = geodyn.evaluate_proxy(data_set_random, geodynModel, proxy_type="domain_size", verbose=False) proxy_random_dV = geodyn.evaluate_proxy(data_set_random, geodynModel, proxy_type="dV_V", verbose=False) r, t, p = data_set_random.extract_rtp("bottom_turning_point") dist = positions.angular_distance_to_point(t, p, *velocity_center) ## map m, fig = plot_data.setting_map() x, y = m(p, t) sc = m.scatter(x, y, c=proxy_random,s=8, zorder=10, cmap=cm, edgecolors='none') plt.title("Dataset: {},\n geodynamic model: {}".format(data_set_random.name, geodynModel.name)) cbar = plt.colorbar(sc) cbar.set_label(proxy_name) fig.savefig(fig_name+data_set_random.shortname+"_map.pdf", bbox_inches='tight') ## phi and distance plots fig, ax = plt.subplots(2,2, figsize=(8.0, 5.0)) sc1 = ax[0,0].scatter(p, proxy_random, c=abs(t),s=3, cmap=cm2, vmin =-0, vmax =90, linewidth=0) phi = np.linspace(-180,180, 50) #analytic_equator = np.maximum(2*np.sin((phi-10)*np.pi/180.)*rICB_dim*1e3/translation_velocity_dim /(np.pi*1e7)/1e6,0.) #ax[0,0].plot(phi,analytic_equator, 'r', linewidth=2) ax[0,0].set_xlabel("longitude") ax[0,0].set_ylabel(proxy_name) if proxy_lim is not None: ax[0,0].set_ylim(proxy_lim) sc2 = ax[0,1].scatter(dist, proxy_random, c=abs(t), cmap=cm2, vmin=-0, vmax =90, s=3, linewidth=0) ax[0,1].set_xlabel("angular distance to ({}, {})".format(*velocity_center)) phi = np.linspace(-90,90, 100) if proxy_type == "age": analytic_equator = np.maximum(2*np.sin((phi-10)*np.pi/180.)*rICB_dim*1e3/translation_velocity_dim /(np.pi*1e7)/1e6,0.) ax[0,0].plot(phi,analytic_equator, 'r', linewidth=2) analytic_equator = np.maximum(2*np.sin((-phi)*np.pi/180.)*rICB_dim*1e3/translation_velocity_dim /(np.pi*1e7)/1e6,0.) 
ax[0,1].plot(phi+90,analytic_equator, 'r', linewidth=2) ax[0,1].set_xlim([0,180]) ax[0,0].set_xlim([-180,180]) cbar = fig.colorbar(sc1) cbar.set_label("longitude: abs(theta)") if proxy_lim is not None: ax[0,1].set_ylim(proxy_lim) ## figure with domain size and Vp if proxy_type == "age": sc3 = ax[1,0].scatter(dist, proxy_random_size, c=abs(t), cmap=cm2, vmin =-0, vmax =90, s=3, linewidth=0) ax[1,0].set_xlabel("angular distance to ({}, {})".format(*velocity_center)) ax[1,0].set_ylabel("domain size (m)") ax[1,0].set_xlim([0,180]) ax[1,0].set_ylim([0, 2500.000]) sc4 = ax[1,1].scatter(dist, proxy_random_dV, c=abs(t), cmap=cm2, vmin=-0, vmax =90, s=3, linewidth=0) ax[1,1].set_xlabel("angular distance to ({}, {})".format(*velocity_center)) ax[1,1].set_ylabel("dV/V") ax[1,1].set_xlim([0,180]) ax[1,1].set_ylim([-0.017, -0.002]) fig.savefig(fig_name +data_set_random.shortname+ '_long_dist.pdf', bbox_inches='tight') fig, ax = plt.subplots(figsize=(8, 2)) sc=ax.scatter(p,rICB_dim*(1.-r), c=proxy_random, s=10,cmap=cm, linewidth=0) ax.set_ylim(-0,120) fig.gca().invert_yaxis() ax.set_xlim(-180,180) cbar = fig.colorbar(sc) if proxy_lim is not None: cbar.set_clim(0, maxAge) ax.set_xlabel("longitude") ax.set_ylabel("depth below ICB (km)") cbar.set_label(proxy_name) fig.savefig(fig_name+data_set_random.shortname+"_depth.pdf", bbox_inches='tight') ``` ### Real Data set from Waszek paper ``` ## real data set data_set = data.SeismicFromFile("../GrowYourIC/data/WD11.dat") data_set.method = "bt_point" proxy2 = geodyn.evaluate_proxy(data_set, geodynModel, proxy_type=proxy_type, verbose=False) if proxy_type == "age": ## domain size and DV/V proxy_size = geodyn.evaluate_proxy(data_set, geodynModel, proxy_type="domain_size", verbose=False) proxy_dV = geodyn.evaluate_proxy(data_set, geodynModel, proxy_type="dV_V", verbose=False) r, t, p = data_set.extract_rtp("bottom_turning_point") dist = positions.angular_distance_to_point(t, p, *velocity_center) ## map m, fig = plot_data.setting_map() x, y = m(p, t) sc = m.scatter(x, y, c=proxy2,s=8, zorder=10, cmap=cm, edgecolors='none') plt.title("Dataset: {},\n geodynamic model: {}".format(data_set.name, geodynModel.name)) cbar = plt.colorbar(sc) cbar.set_label(proxy_name) fig.savefig(fig_name+data_set.shortname+"_map.pdf", bbox_inches='tight') ## phi and distance plots fig, ax = plt.subplots(2,2, figsize=(8.0, 5.0)) sc1 = ax[0,0].scatter(p, proxy2, c=abs(t),s=3, cmap=cm2, vmin =-0, vmax =90, linewidth=0) phi = np.linspace(-180,180, 50) #analytic_equator = np.maximum(2*np.sin((phi-10)*np.pi/180.)*rICB_dim*1e3/translation_velocity_dim /(np.pi*1e7)/1e6,0.) #ax[0,0].plot(phi,analytic_equator, 'r', linewidth=2) ax[0,0].set_xlabel("longitude") ax[0,0].set_ylabel(proxy_name) if proxy_lim is not None: ax[0,0].set_ylim(proxy_lim) sc2 = ax[0,1].scatter(dist, proxy2, c=abs(t), cmap=cm2, vmin=-0, vmax =90, s=3, linewidth=0) ax[0,1].set_xlabel("angular distance to ({}, {})".format(*velocity_center)) phi = np.linspace(-90,90, 100) if proxy_type == "age": analytic_equator = np.maximum(2*np.sin((-phi)*np.pi/180.)*rICB_dim*1e3/translation_velocity_dim /(np.pi*1e7)/1e6,0.) ax[0,1].plot(phi+90,analytic_equator, 'r', linewidth=2) analytic_equator = np.maximum(2*np.sin((phi-10)*np.pi/180.)*rICB_dim*1e3/translation_velocity_dim /(np.pi*1e7)/1e6,0.) 
ax[0,0].plot(phi,analytic_equator, 'r', linewidth=2) ax[0,1].set_xlim([0,180]) ax[0,0].set_xlim([-180,180]) cbar = fig.colorbar(sc1) cbar.set_label("longitude: abs(theta)") if proxy_lim is not None: ax[0,1].set_ylim(proxy_lim) ## figure with domain size and Vp if proxy_type == "age": sc3 = ax[1,0].scatter(dist, proxy_size, c=abs(t), cmap=cm2, vmin =-0, vmax =90, s=3, linewidth=0) ax[1,0].set_xlabel("angular distance to ({}, {})".format(*velocity_center)) ax[1,0].set_ylabel("domain size (m)") ax[1,0].set_xlim([0,180]) ax[1,0].set_ylim([0, 2500.000]) sc4 = ax[1,1].scatter(dist, proxy_dV, c=abs(t), cmap=cm2, vmin=-0, vmax =90, s=3, linewidth=0) ax[1,1].set_xlabel("angular distance to ({}, {})".format(*velocity_center)) ax[1,1].set_ylabel("dV/V") ax[1,1].set_xlim([0,180]) ax[1,1].set_ylim([-0.017, -0.002]) fig.savefig(fig_name + data_set.shortname+'_long_dist.pdf', bbox_inches='tight') fig, ax = plt.subplots(figsize=(8, 2)) sc=ax.scatter(p,rICB_dim*(1.-r), c=proxy2, s=10,cmap=cm, linewidth=0) ax.set_ylim(-0,120) fig.gca().invert_yaxis() ax.set_xlim(-180,180) cbar = fig.colorbar(sc) if proxy_lim is not None: cbar.set_clim(0, maxAge) ax.set_xlabel("longitude") ax.set_ylabel("depth below ICB (km)") cbar.set_label(proxy_name) fig.savefig(fig_name+data_set.shortname+"_depth.pdf", bbox_inches='tight') ```
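For reference, the angular distance used throughout this notebook (positions.angular_distance_to_point) is the great-circle separation between two points on the sphere. The snippet below is only a minimal NumPy sketch of that quantity, written to make the geometry explicit; it assumes the two angles are latitude and longitude in degrees and is not the GrowYourIC implementation itself.

```
import numpy as np

def angular_distance_deg(lat1, lon1, lat2, lon2):
    """Great-circle angular separation (degrees) between two points on a sphere."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    # spherical law of cosines; clip guards against round-off outside [-1, 1]
    cos_d = (np.sin(lat1) * np.sin(lat2)
             + np.cos(lat1) * np.cos(lat2) * np.cos(lon2 - lon1))
    return np.degrees(np.arccos(np.clip(cos_d, -1.0, 1.0)))

# example: separation between a sample point and the velocity_center (0, 100) used above
print(angular_distance_deg(10., 120., 0., 100.))
```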
true
code
0.62681
null
null
null
null
<a href="https://colab.research.google.com/github/danzerzine/seospider-colab/blob/main/Running_screamingfrog_SEO_spider_in_Colab_notebook.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # Запуск SEO бота Screaming Frog SEO spider в облаке через Google Colab ------------- > *Protip: под задачу для крупного сайта лучше всего подходят High RAM (25GB) инстансы без GPU/TPU, доступные в PRO подписке* ###Косметическое улучшение: добавляем перенос строки для длинных однострочных команд ``` from IPython.display import HTML, display def set_css(): display(HTML(''' <style> pre { white-space: pre-wrap; } </style> ''')) get_ipython().events.register('pre_run_cell', set_css) ``` ###Подключаем Google Drive в котором хранятся конфиги бота и куда будут сохраняться результаты обхода ``` from google.colab import drive drive.mount('/content/drive') ``` ###Узнаем внешний IP инстанса чтобы затем ручками добавить его в исключения файерволла cloudflare -- иначе очень быстро упремся в rate limit и нам начнут показывать страницу с проверкой на человекообразность ``` !wget -qO- http://ipecho.net/plain | xargs echo && wget -qO - icanhazip.com ``` ###Устанавливаем последнюю версию seo spider, делаем мелкие дела по хозяйству * Обновляем установленные linux пакеты * Копируем настройки с десктопной версии SEO spider в локальную папку инстанса (это нужно чтобы передать токены авторизации к google search console, GA и так далее) ``` #@title Settings directory on GDrive { vertical-output: true, display-mode: "both" } settings_path = "" #@param {type:"string"} !wget https://download.screamingfrog.co.uk/products/seo-spider/screamingfrogseospider_16.3_all.deb !apt-get install screamingfrogseospider_16.3_all.deb !sudo apt-get update && sudo apt-get upgrade -y !mkdir -p ~/.ScreamingFrogSEOSpider !cp -r $settings_path/* ~/.ScreamingFrogSEOSpider ``` ### Запускаем bash скрипт для донастройки инстанса и бота Он добавит виртуальный дисплей для вывода из JAVA, переключит бота в режим сохранения результатов на диске вместо RAM и т.д. 
``` !wget https://raw.githubusercontent.com/fili/screaming-frog-on-google-compute-engine/master/gce-sf.sh -O install.sh && chmod +x install.sh && source ./install.sh ``` ###Делаем симлинк скрытой папки с временными файлами и настройками бота на случай если придется что-то редактировать или вынимать оттуда наживую, иначе ее не будет видно в браузере файлов слева ``` !ln -s ~/.ScreamingFrogSEOSpider ~/ScreamingFrogSEOSpider ``` ###Даем команду боту в headless режиме прописываем все нужные флаги для экспорта, настроек, отчетов, выгрузок и так далее ``` #@title Crawl settings { vertical-output: true } url_start = "" #@param {type:"string"} use_gcs = "" #@param ["", "--use-google-search-console \"account \""] {allow-input: true} config_path = "" #@param {type:"string"} output_folder = "" #@param {type:"string"} !screamingfrogseospider --crawl "$url_start" $use_gcs --headless --config "$config_path" --output-folder "$output_folder" --timestamped-output --save-crawl --export-tabs "Internal:All,Response Codes:All,Response Codes:Blocked by Robots.txt,Response Codes:Blocked Resource,Response Codes:No Response,Response Codes:Redirection (3xx),Response Codes:Redirection (JavaScript),Response Codes:Redirection (Meta Refresh),Response Codes:Client Error (4xx),Response Codes:Server Error (5xx),Page Titles:All,Page Titles:Missing,Page Titles:Duplicate,Page Titles:Over X Characters,Page Titles:Below X Characters,Page Titles:Over X Pixels,Page Titles:Below X Pixels,Page Titles:Same as H1,Page Titles:Multiple,Meta Description:All,Meta Description:Missing,Meta Description:Duplicate,Meta Description:Over X Characters,Meta Description:Below X Characters,Meta Description:Over X Pixels,Meta Description:Below X Pixels,Meta Description:Multiple,Meta Keywords:All,Meta Keywords:Missing,Meta Keywords:Duplicate,Meta Keywords:Multiple,Canonicals:All,Canonicals:Contains Canonical,Canonicals:Self Referencing,Canonicals:Canonicalised,Canonicals:Missing,Canonicals:Multiple,Canonicals:Non-Indexable Canonical,Directives:All,Directives:Index,Directives:Noindex,Directives:Follow,Directives:Nofollow,Directives:None,Directives:NoArchive,Directives:NoSnippet,Directives:Max-Snippet,Directives:Max-Image-Preview,Directives:Max-Video-Preview,Directives:NoODP,Directives:NoYDIR,Directives:NoImageIndex,Directives:NoTranslate,Directives:Unavailable_After,Directives:Refresh,AMP:All,AMP:Non-200 Response,AMP:Missing Non-AMP Return Link,AMP:Missing Canonical to Non-AMP,AMP:Non-Indexable Canonical,AMP:Indexable,AMP:Non-Indexable,AMP:Missing <html amp> Tag,AMP:Missing/Invalid <!doctype html> Tag,AMP:Missing <head> Tag,AMP:Missing <body> Tag,AMP:Missing Canonical,AMP:Missing/Invalid <meta charset> Tag,AMP:Missing/Invalid <meta viewport> Tag,AMP:Missing/Invalid AMP Script,AMP:Missing/Invalid AMP Boilerplate,AMP:Contains Disallowed HTML,AMP:Other Validation Errors,Structured Data:All,Structured Data:Contains Structured Data,Structured Data:Missing,Structured Data:Validation Errors,Structured Data:Validation Warnings,Structured Data:Parse Errors,Structured Data:Microdata URLs,Structured Data:JSON-LD URLs,Structured Data:RDFa URLs,Sitemaps:All,Sitemaps:URLs in Sitemap,Sitemaps:URLs not in Sitemap,Sitemaps:Orphan URLs,Sitemaps:Non-Indexable URLs in Sitemap,Sitemaps:URLs in Multiple Sitemaps,Sitemaps:XML Sitemap with over 50k URLs,Sitemaps:XML Sitemap over 50MB" --bulk-export "Canonicals:Contains Canonical Inlinks,Canonicals:Self Referencing Inlinks,Canonicals:Canonicalised Inlinks,Canonicals:Missing Inlinks,Canonicals:Multiple 
Inlinks,Canonicals:Non-Indexable Canonical Inlinks,AMP:All Inlinks,AMP:Non-200 Response Inlinks,AMP:Missing Non-AMP Return Link Inlinks,AMP:Missing Canonical to Non-AMP Inlinks,AMP:Non-Indexable Canonical Inlinks,AMP:Indexable Inlinks,AMP:Non-Indexable Inlinks,Structured Data:Contains Structured Data,Structured Data:Validation Errors,Structured Data:Validation Warnings,Structured Data:JSON-LD URLs,Structured Data:Microdata URLs,Structured Data:RDFa URLs,Sitemaps:URLs in Sitemap Inlinks,Sitemaps:Orphan URLs Inlinks,Sitemaps:Non-Indexable URLs in Sitemap Inlinks,Sitemaps:URLs in Multiple Sitemaps Inlinks" --save-report "Crawl Overview,Redirects:All Redirects,Redirects:Redirect Chains,Redirects:Redirect & Canonical Chains,Canonicals:Canonical Chains,Canonicals:Non-Indexable Canonicals,Pagination:Non-200 Pagination URLs,Pagination:Unlinked Pagination URLs,Hreflang:All hreflang URLs,Hreflang:Non-200 hreflang URLs,Hreflang:Unlinked hreflang URLs,Hreflang:Missing Return Links,Hreflang:Inconsistent Language & Region Return Links,Hreflang:Non Canonical Return Links,Hreflang:Noindex Return Links,Insecure Content,SERP Summary,Orphan Pages,Structured Data:Validation Errors & Warnings Summary,Structured Data:Validation Errors & Warnings,Structured Data:Google Rich Results Features Summary,Structured Data:Google Rich Results Features,HTTP Headers:HTTP Header Summary,Cookies:Cookie Summary" --export-format xlsx --export-custom-summary "Site Crawled,Date,Time,Total URLs Encountered,Total URLs Crawled,Total Internal blocked by robots.txt,Total External blocked by robots.txt,URLs Displayed,Total Internal URLs,Total External URLs,Total Internal Indexable URLs,Total Internal Non-Indexable URLs,JavaScript:All,JavaScript:Uses Old AJAX Crawling Scheme URLs,JavaScript:Uses Old AJAX Crawling Scheme Meta Fragment Tag,JavaScript:Page Title Only in Rendered HTML,JavaScript:Page Title Updated by JavaScript,JavaScript:H1 Only in Rendered HTML,JavaScript:H1 Updated by JavaScript,JavaScript:Meta Description Only in Rendered HTML,JavaScript:Meta Description Updated by JavaScript,JavaScript:Canonical Only in Rendered HTML,JavaScript:Canonical Mismatch,JavaScript:Noindex Only in Original HTML,JavaScript:Nofollow Only in Original HTML,JavaScript:Contains JavaScript Links,JavaScript:Contains JavaScript Content,JavaScript:Pages with Blocked Resources,H1:All,H1:Missing,H1:Duplicate,H1:Over X Characters,H1:Multiple,H2:All,H2:Missing,H2:Duplicate,H2:Over X Characters,H2:Multiple,Internal:All,Internal:HTML,Internal:JavaScript,Internal:CSS,Internal:Images,Internal:PDF,Internal:Flash,Internal:Other,Internal:Unknown,External:All,External:HTML,External:JavaScript,External:CSS,External:Images,External:PDF,External:Flash,External:Other,External:Unknown,AMP:All,AMP:Non-200 Response,AMP:Missing Non-AMP Return Link,AMP:Missing Canonical to Non-AMP,AMP:Non-Indexable Canonical,AMP:Indexable,AMP:Non-Indexable,AMP:Missing <html amp> Tag,AMP:Missing/Invalid <!doctype html> Tag,AMP:Missing <head> Tag,AMP:Missing <body> Tag,AMP:Missing Canonical,AMP:Missing/Invalid <meta charset> Tag,AMP:Missing/Invalid <meta viewport> Tag,AMP:Missing/Invalid AMP Script,AMP:Missing/Invalid AMP Boilerplate,AMP:Contains Disallowed HTML,AMP:Other Validation Errors,Canonicals:All,Canonicals:Contains Canonical,Canonicals:Self Referencing,Canonicals:Canonicalised,Canonicals:Missing,Canonicals:Multiple,Canonicals:Non-Indexable Canonical,Content:All,Content:Spelling Errors,Content:Grammar Errors,Content:Near Duplicates,Content:Exact Duplicates,Content:Low Content 
Pages,Custom Extraction:All,Custom Search:All,Directives:All,Directives:Index,Directives:Noindex,Directives:Follow,Directives:Nofollow,Directives:None,Directives:NoArchive,Directives:NoSnippet,Directives:Max-Snippet,Directives:Max-Image-Preview,Directives:Max-Video-Preview,Directives:NoODP,Directives:NoYDIR,Directives:NoImageIndex,Directives:NoTranslate,Directives:Unavailable_After,Directives:Refresh,Analytics:All,Analytics:Sessions Above 0,Analytics:Bounce Rate Above 70%,Analytics:No GA Data,Analytics:Non-Indexable with GA Data,Analytics:Orphan URLs,Search Console:All,Search Console:Clicks Above 0,Search Console:No GSC Data,Search Console:Non-Indexable with GSC Data,Search Console:Orphan URLs,Hreflang:All,Hreflang:Contains hreflang,Hreflang:Non-200 hreflang URLs,Hreflang:Unlinked hreflang URLs,Hreflang:Missing Return Links,Hreflang:Inconsistent Language & Region Return Links,Hreflang:Non-Canonical Return Links,Hreflang:Noindex Return Links,Hreflang:Incorrect Language & Region Codes,Hreflang:Multiple Entries,Hreflang:Missing Self Reference,Hreflang:Not Using Canonical,Hreflang:Missing X-Default,Hreflang:Missing,Images:All,Images:Over X KB,Images:Missing Alt Text,Images:Missing Alt Attribute,Images:Alt Text Over X Characters,Link Metrics:All,Meta Description:All,Meta Description:Missing,Meta Description:Duplicate,Meta Description:Over X Characters,Meta Description:Below X Characters,Meta Description:Over X Pixels,Meta Description:Below X Pixels,Meta Description:Multiple,Meta Keywords:All,Meta Keywords:Missing,Meta Keywords:Duplicate,Meta Keywords:Multiple,PageSpeed:All,PageSpeed:Eliminate Render-Blocking Resources,PageSpeed:Defer Offscreen Images,PageSpeed:Efficiently Encode Images,PageSpeed:Properly Size Images,PageSpeed:Minify CSS,PageSpeed:Minify JavaScript,PageSpeed:Reduce Unused CSS,PageSpeed:Reduce Unused JavaScript,PageSpeed:Serve Images in Next-Gen Formats,PageSpeed:Enable Text Compression,PageSpeed:Preconnect to Required Origins,PageSpeed:Reduce Server Response Times (TTFB),PageSpeed:Avoid Multiple Page Redirects,PageSpeed:Preload Key Requests,PageSpeed:Use Video Formats for Animated Content,PageSpeed:Avoid Excessive DOM Size,PageSpeed:Reduce JavaScript Execution Time,PageSpeed:Serve Static Assets with an Efficient Cache Policy,PageSpeed:Minimize Main-Thread Work,PageSpeed:Ensure Text Remains Visible During Webfont Load,PageSpeed:Image Elements Do Not Have Explicit Width & Height,PageSpeed:Avoid Large Layout Shifts,PageSpeed:Avoid Serving Legacy JavaScript to Modern Browsers,PageSpeed:Request Errors,Pagination:All,Pagination:Contains Pagination,Pagination:First Page,Pagination:Paginated 2+ Pages,Pagination:Pagination URL Not in Anchor Tag,Pagination:Non-200 Pagination URLs,Pagination:Unlinked Pagination URLs,Pagination:Non-Indexable,Pagination:Multiple Pagination URLs,Pagination:Pagination Loop,Pagination:Sequence Error,Response Codes:All,Response Codes:Blocked by Robots.txt,Response Codes:Blocked Resource,Response Codes:No Response,Response Codes:Success (2xx),Response Codes:Redirection (3xx),Response Codes:Redirection (JavaScript),Response Codes:Redirection (Meta Refresh),Response Codes:Client Error (4xx),Response Codes:Server Error (5xx),Security:All,Security:HTTP URLs,Security:HTTPS URLs,Security:Mixed Content,Security:Form URL Insecure,Security:Form on HTTP URL,Security:Unsafe Cross-Origin Links,Security:Missing HSTS Header,Security:Bad Content Type,Security:Missing X-Content-Type-Options Header,Security:Missing X-Frame-Options Header,Security:Protocol-Relative Resource 
Links,Security:Missing Content-Security-Policy Header,Security:Missing Secure Referrer-Policy Header,Sitemaps:All,Sitemaps:URLs in Sitemap,Sitemaps:URLs not in Sitemap,Sitemaps:Orphan URLs,Sitemaps:Non-Indexable URLs in Sitemap,Sitemaps:URLs in Multiple Sitemaps,Sitemaps:XML Sitemap with over 50k URLs,Sitemaps:XML Sitemap over 50MB,Structured Data:All,Structured Data:Contains Structured Data,Structured Data:Missing,Structured Data:Validation Errors,Structured Data:Validation Warnings,Structured Data:Parse Errors,Structured Data:Microdata URLs,Structured Data:JSON-LD URLs,Structured Data:RDFa URLs,Page Titles:All,Page Titles:Missing,Page Titles:Duplicate,Page Titles:Over X Characters,Page Titles:Below X Characters,Page Titles:Over X Pixels,Page Titles:Below X Pixels,Page Titles:Same as H1,Page Titles:Multiple,URL:All,URL:Non ASCII Characters,URL:Underscores,URL:Uppercase,URL:Parameters,URL:Over X Characters,URL:Multiple Slashes,URL:Repetitive Path,URL:Contains Space,URL:Broken Bookmark,URL:Internal Search,Depth 1,Depth 2,Depth 3,Depth 4,Depth 5,Depth 6,Depth 7,Depth 8,Depth 9,Depth 10+,Top Inlinks 1 URL,Top Inlinks 1 Number of Inlinks,Top Inlinks 2 URL,Top Inlinks 2 Number of Inlinks,Top Inlinks 3 URL,Top Inlinks 3 Number of Inlinks,Top Inlinks 4 URL,Top Inlinks 4 Number of Inlinks,Top Inlinks 5 URL,Top Inlinks 5 Number of Inlinks,Top Inlinks 6 URL,Top Inlinks 6 Number of Inlinks,Top Inlinks 7 URL,Top Inlinks 7 Number of Inlinks,Top Inlinks 8 URL,Top Inlinks 8 Number of Inlinks,Top Inlinks 9 URL,Top Inlinks 9 Number of Inlinks,Top Inlinks 10 URL,Top Inlinks 10 Number of Inlinks,Top Inlinks 11 URL,Top Inlinks 11 Number of Inlinks,Top Inlinks 12 URL,Top Inlinks 12 Number of Inlinks,Top Inlinks 13 URL,Top Inlinks 13 Number of Inlinks,Top Inlinks 14 URL,Top Inlinks 14 Number of Inlinks,Top Inlinks 15 URL,Top Inlinks 15 Number of Inlinks,Top Inlinks 16 URL,Top Inlinks 16 Number of Inlinks,Top Inlinks 17 URL,Top Inlinks 17 Number of Inlinks,Top Inlinks 18 URL,Top Inlinks 18 Number of Inlinks,Top Inlinks 19 URL,Top Inlinks 19 Number of Inlinks,Top Inlinks 20 URL,Top Inlinks 20 Number of Inlinks,Response Times 0s to 1s,Response Times 1s to 2s,Response Times 2s to 3s,Response Times 3s to 4s,Response Times 4s to 5s,Response Times 5s to 6s,Response Times 6s to 7s,Response Times 7s to 8s,Response Times 8s to 9s,Response Times 10s or more" ``` # ✦ *Colab Still Alive Console Script:* <p><font size=2px ><font color="red"> Tip - Set a javascript interval to click on the connect button every 60 seconds. Open developer-settings (in your web-browser) with Ctrl+Shift+I then click on console tab and type this on the console prompt. (for mac press Option+Command+I)</font></p><b>Copy script in hidden cell and paste at your browser console !!! 
DO NOT CLOSE YOUR BROWSER IN ORDER TO STILL RUNNING SCRIPT</b> <code>function ClickConnect(){ console.log("Working"); document.querySelector("colab-connect-button").click() }setInterval(ClickConnect,6000)</code> # *Что в итоге* На выходе в идеале получаем папку с датой обхода и следующими выгрузками в формате Excel **Tabs**: ``` Internal:All Response Codes:All Response Codes:Blocked by Robots.txt Response Codes:Blocked Resource Response Codes:No Response Response Codes:Redirection (3xx) Response Codes:Redirection (JavaScript) Response Codes:Redirection (Meta Refresh) Response Codes:Client Error (4xx) Response Codes:Server Error (5xx) Page Titles:All Page Titles:Missing Page Titles:Duplicate Page Titles:Over X Characters Page Titles:Below X Characters Page Titles:Over X Pixels Page Titles:Below X Pixels Page Titles:Same as H1 Page Titles:Multiple Meta Description:All Meta Description:Missing Meta Description:Duplicate Meta Description:Over X Characters Meta Description:Below X Characters Meta Description:Over X Pixels Meta Description:Below X Pixels Meta Description:Multiple Meta Keywords:All Meta Keywords:Missing Meta Keywords:Duplicate Meta Keywords:Multiple Canonicals:All Canonicals:Contains Canonical Canonicals:Self Referencing Canonicals:Canonicalised Canonicals:Missing Canonicals:Multiple Canonicals:Non-Indexable Canonical Directives:All Directives:Index Directives:Noindex Directives:Follow Directives:Nofollow Directives:None Directives:NoArchive Directives:NoSnippet Directives:Max-Snippet Directives:Max-Image-Preview Directives:Max-Video-Preview Directives:NoODP Directives:NoYDIR Directives:NoImageIndex Directives:NoTranslate Directives:Unavailable_After Directives:Refresh AMP:All AMP:Non-200 Response AMP:Missing Non-AMP Return Link AMP:Missing Canonical to Non-AMP AMP:Non-Indexable Canonical AMP:Indexable AMP:Non-Indexable AMP:Missing <html amp> Tag AMP:Missing/Invalid <!doctype html> Tag AMP:Missing <head> Tag AMP:Missing <body> Tag AMP:Missing Canonical AMP:Missing/Invalid <meta charset> Tag AMP:Missing/Invalid <meta viewport> Tag AMP:Missing/Invalid AMP Script AMP:Missing/Invalid AMP Boilerplate AMP:Contains Disallowed HTML AMP:Other Validation Errors Structured Data:All Structured Data:Contains Structured Data Structured Data:Missing Structured Data:Validation Errors Structured Data:Validation Warnings Structured Data:Parse Errors Structured Data:Microdata URLs Structured Data:JSON-LD URLs Structured Data:RDFa URLs Sitemaps:All Sitemaps:URLs in Sitemap Sitemaps:URLs not in Sitemap Sitemaps:Orphan URLs Sitemaps:Non-Indexable URLs in Sitemap Sitemaps:URLs in Multiple Sitemaps Sitemaps:XML Sitemap with over 50k URLs Sitemaps:XML Sitemap over 50MB" --bulk-export "Canonicals:Contains Canonical Inlinks Canonicals:Self Referencing Inlinks Canonicals:Canonicalised Inlinks Canonicals:Missing Inlinks Canonicals:Multiple Inlinks Canonicals:Non-Indexable Canonical Inlinks AMP:All Inlinks AMP:Non-200 Response Inlinks AMP:Missing Non-AMP Return Link Inlinks AMP:Missing Canonical to Non-AMP Inlinks AMP:Non-Indexable Canonical Inlinks AMP:Indexable Inlinks AMP:Non-Indexable Inlinks Structured Data:Contains Structured Data Structured Data:Validation Errors Structured Data:Validation Warnings Structured Data:JSON-LD URLs Structured Data:Microdata URLs Structured Data:RDFa URLs Sitemaps:URLs in Sitemap Inlinks Sitemaps:Orphan URLs Inlinks Sitemaps:Non-Indexable URLs in Sitemap Inlinks Sitemaps:URLs in Multiple Sitemaps Inlinks" --save-report "Crawl Overview Redirects:All Redirects 
Redirects:Redirect Chains Redirects:Redirect & Canonical Chains Canonicals:Canonical Chains Canonicals:Non-Indexable Canonicals Pagination:Non-200 Pagination URLs Pagination:Unlinked Pagination URLs Hreflang:All hreflang URLs Hreflang:Non-200 hreflang URLs Hreflang:Unlinked hreflang URLs Hreflang:Missing Return Links Hreflang:Inconsistent Language & Region Return Links Hreflang:Non Canonical Return Links Hreflang:Noindex Return Links Insecure Content SERP Summary Orphan Pages Structured Data:Validation Errors & Warnings Summary Structured Data:Validation Errors & Warnings Structured Data:Google Rich Results Features Summary Structured Data:Google Rich Results Features HTTP Headers:HTTP Header Summary Cookies:Cookie Summary ``` **Summary**: ``` Site Crawled Date Time Total URLs Encountered Total URLs Crawled Total Internal blocked by robots.txt Total External blocked by robots.txt URLs Displayed Total Internal URLs Total External URLs Total Internal Indexable URLs Total Internal Non-Indexable URLs JavaScript:All JavaScript:Uses Old AJAX Crawling Scheme URLs JavaScript:Uses Old AJAX Crawling Scheme Meta Fragment Tag JavaScript:Page Title Only in Rendered HTML JavaScript:Page Title Updated by JavaScript JavaScript:H1 Only in Rendered HTML JavaScript:H1 Updated by JavaScript JavaScript:Meta Description Only in Rendered HTML JavaScript:Meta Description Updated by JavaScript JavaScript:Canonical Only in Rendered HTML JavaScript:Canonical Mismatch JavaScript:Noindex Only in Original HTML JavaScript:Nofollow Only in Original HTML JavaScript:Contains JavaScript Links JavaScript:Contains JavaScript Content JavaScript:Pages with Blocked Resources H1:All H1:Missing H1:Duplicate H1:Over X Characters H1:Multiple H2:All H2:Missing H2:Duplicate H2:Over X Characters H2:Multiple Internal:All Internal:HTML Internal:JavaScript Internal:CSS Internal:Images Internal:PDF Internal:Flash Internal:Other Internal:Unknown External:All External:HTML External:JavaScript External:CSS External:Images External:PDF External:Flash External:Other External:Unknown AMP:All AMP:Non-200 Response AMP:Missing Non-AMP Return Link AMP:Missing Canonical to Non-AMP AMP:Non-Indexable Canonical AMP:Indexable AMP:Non-Indexable AMP:Missing <html amp> Tag AMP:Missing/Invalid <!doctype html> Tag AMP:Missing <head> Tag AMP:Missing <body> Tag AMP:Missing Canonical AMP:Missing/Invalid <meta charset> Tag AMP:Missing/Invalid <meta viewport> Tag AMP:Missing/Invalid AMP Script AMP:Missing/Invalid AMP Boilerplate AMP:Contains Disallowed HTML AMP:Other Validation Errors Canonicals:All Canonicals:Contains Canonical Canonicals:Self Referencing Canonicals:Canonicalised Canonicals:Missing Canonicals:Multiple Canonicals:Non-Indexable Canonical Content:All Content:Spelling Errors Content:Grammar Errors Content:Near Duplicates Content:Exact Duplicates Content:Low Content Pages Custom Extraction:All Custom Search:All Directives:All Directives:Index Directives:Noindex Directives:Follow Directives:Nofollow Directives:None Directives:NoArchive Directives:NoSnippet Directives:Max-Snippet Directives:Max-Image-Preview Directives:Max-Video-Preview Directives:NoODP Directives:NoYDIR Directives:NoImageIndex Directives:NoTranslate Directives:Unavailable_After Directives:Refresh Analytics:All Analytics:Sessions Above 0 Analytics:Bounce Rate Above 70% Analytics:No GA Data Analytics:Non-Indexable with GA Data Analytics:Orphan URLs Search Console:All Search Console:Clicks Above 0 Search Console:No GSC Data Search Console:Non-Indexable with GSC Data Search Console:Orphan 
URLs Hreflang:All Hreflang:Contains hreflang Hreflang:Non-200 hreflang URLs Hreflang:Unlinked hreflang URLs Hreflang:Missing Return Links Hreflang:Inconsistent Language & Region Return Links Hreflang:Non-Canonical Return Links Hreflang:Noindex Return Links Hreflang:Incorrect Language & Region Codes Hreflang:Multiple Entries Hreflang:Missing Self Reference Hreflang:Not Using Canonical Hreflang:Missing X-Default Hreflang:Missing Images:All Images:Over X KB Images:Missing Alt Text Images:Missing Alt Attribute Images:Alt Text Over X Characters Link Metrics:All Meta Description:All Meta Description:Missing Meta Description:Duplicate Meta Description:Over X Characters Meta Description:Below X Characters Meta Description:Over X Pixels Meta Description:Below X Pixels Meta Description:Multiple Meta Keywords:All Meta Keywords:Missing Meta Keywords:Duplicate Meta Keywords:Multiple PageSpeed:All PageSpeed:Eliminate Render-Blocking Resources PageSpeed:Defer Offscreen Images PageSpeed:Efficiently Encode Images PageSpeed:Properly Size Images PageSpeed:Minify CSS PageSpeed:Minify JavaScript PageSpeed:Reduce Unused CSS PageSpeed:Reduce Unused JavaScript PageSpeed:Serve Images in Next-Gen Formats PageSpeed:Enable Text Compression PageSpeed:Preconnect to Required Origins PageSpeed:Reduce Server Response Times (TTFB) PageSpeed:Avoid Multiple Page Redirects PageSpeed:Preload Key Requests PageSpeed:Use Video Formats for Animated Content PageSpeed:Avoid Excessive DOM Size PageSpeed:Reduce JavaScript Execution Time PageSpeed:Serve Static Assets with an Efficient Cache Policy PageSpeed:Minimize Main-Thread Work PageSpeed:Ensure Text Remains Visible During Webfont Load PageSpeed:Image Elements Do Not Have Explicit Width & Height PageSpeed:Avoid Large Layout Shifts PageSpeed:Avoid Serving Legacy JavaScript to Modern Browsers PageSpeed:Request Errors Pagination:All Pagination:Contains Pagination Pagination:First Page Pagination:Paginated 2+ Pages Pagination:Pagination URL Not in Anchor Tag Pagination:Non-200 Pagination URLs Pagination:Unlinked Pagination URLs Pagination:Non-Indexable Pagination:Multiple Pagination URLs Pagination:Pagination Loop Pagination:Sequence Error Response Codes:All Response Codes:Blocked by Robots.txt Response Codes:Blocked Resource Response Codes:No Response Response Codes:Success (2xx) Response Codes:Redirection (3xx) Response Codes:Redirection (JavaScript) Response Codes:Redirection (Meta Refresh) Response Codes:Client Error (4xx) Response Codes:Server Error (5xx) Security:All Security:HTTP URLs Security:HTTPS URLs Security:Mixed Content Security:Form URL Insecure Security:Form on HTTP URL Security:Unsafe Cross-Origin Links Security:Missing HSTS Header Security:Bad Content Type Security:Missing X-Content-Type-Options Header Security:Missing X-Frame-Options Header Security:Protocol-Relative Resource Links Security:Missing Content-Security-Policy Header Security:Missing Secure Referrer-Policy Header Sitemaps:All Sitemaps:URLs in Sitemap Sitemaps:URLs not in Sitemap Sitemaps:Orphan URLs Sitemaps:Non-Indexable URLs in Sitemap Sitemaps:URLs in Multiple Sitemaps Sitemaps:XML Sitemap with over 50k URLs Sitemaps:XML Sitemap over 50MB Structured Data:All Structured Data:Contains Structured Data Structured Data:Missing Structured Data:Validation Errors Structured Data:Validation Warnings Structured Data:Parse Errors Structured Data:Microdata URLs Structured Data:JSON-LD URLs Structured Data:RDFa URLs Page Titles:All Page Titles:Missing Page Titles:Duplicate Page Titles:Over X Characters Page 
Titles:Below X Characters Page Titles:Over X Pixels Page Titles:Below X Pixels Page Titles:Same as H1 Page Titles:Multiple URL:All URL:Non ASCII Characters URL:Underscores URL:Uppercase URL:Parameters URL:Over X Characters URL:Multiple Slashes URL:Repetitive Path URL:Contains Space URL:Broken Bookmark URL:Internal Search Depth 1 Depth 2 Depth 3 Depth 4 Depth 5 Depth 6 Depth 7 Depth 8 Depth 9 Depth 10+ Top Inlinks 1 URL Top Inlinks 1 Number of Inlinks Top Inlinks 2 URL Top Inlinks 2 Number of Inlinks Top Inlinks 3 URL Top Inlinks 3 Number of Inlinks Top Inlinks 4 URL Top Inlinks 4 Number of Inlinks Top Inlinks 5 URL Top Inlinks 5 Number of Inlinks Top Inlinks 6 URL Top Inlinks 6 Number of Inlinks Top Inlinks 7 URL Top Inlinks 7 Number of Inlinks Top Inlinks 8 URL Top Inlinks 8 Number of Inlinks Top Inlinks 9 URL Top Inlinks 9 Number of Inlinks Top Inlinks 10 URL Top Inlinks 10 Number of Inlinks Top Inlinks 11 URL Top Inlinks 11 Number of Inlinks Top Inlinks 12 URL Top Inlinks 12 Number of Inlinks Top Inlinks 13 URL Top Inlinks 13 Number of Inlinks Top Inlinks 14 URL Top Inlinks 14 Number of Inlinks Top Inlinks 15 URL Top Inlinks 15 Number of Inlinks Top Inlinks 16 URL Top Inlinks 16 Number of Inlinks Top Inlinks 17 URL Top Inlinks 17 Number of Inlinks Top Inlinks 18 URL Top Inlinks 18 Number of Inlinks Top Inlinks 19 URL Top Inlinks 19 Number of Inlinks Top Inlinks 20 URL Top Inlinks 20 Number of Inlinks Response Times 0s to 1s Response Times 1s to 2s Response Times 2s to 3s Response Times 3s to 4s Response Times 4s to 5s Response Times 5s to 6s Response Times 6s to 7s Response Times 7s to 8s Response Times 8s to 9s Response Times 10s or more" ```
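One simple way to confirm that the crawl actually produced these exports is to list the timestamped folder the spider created in the output location on Google Drive. The snippet below is only a sketch; the path is hypothetical and should be replaced with whatever you passed as output_folder.

```
import glob
import os

output_folder = "/content/drive/MyDrive/seo-crawls"  # hypothetical path, use your own output_folder

# each crawl run gets its own timestamped subfolder; list the exported .xlsx files in the latest one
runs = sorted(glob.glob(os.path.join(output_folder, "*")))
if runs:
    latest = runs[-1]
    for path in sorted(glob.glob(os.path.join(latest, "*.xlsx"))):
        print(os.path.basename(path))
else:
    print("No crawl output found in", output_folder)
```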
true
code
0.467696
null
null
null
null
## _*Using Qiskit Aqua for clique problems*_ This Qiskit Aqua Optimization notebook demonstrates how to use the VQE quantum algorithm to find a clique of a given size in a graph. The problem is defined as follows. A clique in a graph $G$ is a complete subgraph of $G$; that is, it is a subset $K$ of the vertices such that every two vertices in $K$ are the endpoints of an edge in $G$. A maximal clique is a clique to which no more vertices can be added. A maximum clique is a clique that includes the largest possible number of vertices. We will go through three examples to show (1) how to run the optimization in the non-programming way, (2) how to run the optimization in the programming way, and (3) how to run the optimization with the VQE. We omit the details of CPLEX support, which are explained in other notebooks such as maxcut. Note that the solution may not be unique. ### The problem and a brute-force method ``` import numpy as np from qiskit import Aer from qiskit_aqua import run_algorithm from qiskit_aqua.input import EnergyInput from qiskit_aqua.translators.ising import clique from qiskit_aqua.algorithms import ExactEigensolver ``` First, let us have a look at the graph, which is given in adjacency-matrix form. ``` K = 3 # K means the size of the clique np.random.seed(100) num_nodes = 5 w = clique.random_graph(num_nodes, edge_prob=0.8, weight_range=10) print(w) ``` Let us try a brute-force method: we exhaustively try every possible binary assignment. In each binary assignment, the entry of a vertex is either 0 (meaning the vertex is not in the clique) or 1 (meaning the vertex is in the clique). We print the binary assignment that satisfies the definition of a clique (note that the size is specified by K). ``` def brute_force(): # brute-force way: try every possible assignment!
def bitfield(n, L): result = np.binary_repr(n, L) return [int(digit) for digit in result] L = num_nodes # length of the bitstring that represents the assignment max = 2**L has_sol = False for i in range(max): cur = bitfield(i, L) cur_v = clique.satisfy_or_not(np.array(cur), w, K) if cur_v: has_sol = True break return has_sol, cur has_sol, sol = brute_force() if has_sol: print("solution is ", sol) else: print("no solution found for K=", K) ``` ### Part I: run the optimization in the non-programming way ``` qubit_op, offset = clique.get_clique_qubitops(w, K) algo_input = EnergyInput(qubit_op) params = { 'problem': {'name': 'ising'}, 'algorithm': {'name': 'ExactEigensolver'} } result = run_algorithm(params, algo_input) x = clique.sample_most_likely(len(w), result['eigvecs'][0]) ising_sol = clique.get_graph_solution(x) if clique.satisfy_or_not(ising_sol, w, K): print("solution is", ising_sol) else: print("no solution found for K=", K) ``` ### Part II: run the optimization in the programming way ``` algo = ExactEigensolver(algo_input.qubit_op, k=1, aux_operators=[]) result = algo.run() x = clique.sample_most_likely(len(w), result['eigvecs'][0]) ising_sol = clique.get_graph_solution(x) if clique.satisfy_or_not(ising_sol, w, K): print("solution is", ising_sol) else: print("no solution found for K=", K) ``` ### Part III: run the optimization with the VQE ``` algorithm_cfg = { 'name': 'VQE', 'operator_mode': 'matrix' } optimizer_cfg = { 'name': 'COBYLA' } var_form_cfg = { 'name': 'RY', 'depth': 5, 'entanglement': 'linear' } params = { 'problem': {'name': 'ising', 'random_seed': 10598}, 'algorithm': algorithm_cfg, 'optimizer': optimizer_cfg, 'variational_form': var_form_cfg } backend = Aer.get_backend('statevector_simulator') result = run_algorithm(params, algo_input, backend=backend) x = clique.sample_most_likely(len(w), result['eigvecs'][0]) ising_sol = clique.get_graph_solution(x) if clique.satisfy_or_not(ising_sol, w, K): print("solution is", ising_sol) else: print("no solution found for K=", K) ```
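Whichever of the three methods produced the assignment, it can be checked directly against the definition of a clique of size K: take the selected vertices and make sure every pair of them is joined by an edge in w. The helper below is a plain NumPy sketch of that check, independent of clique.satisfy_or_not; it assumes a nonzero entry of w means an edge.

```
import numpy as np

def is_clique_of_size_k(x, w, K):
    """x: binary vector of vertex selections, w: adjacency (weight) matrix, K: required size."""
    x = np.asarray(x).astype(int)
    selected = np.where(x == 1)[0]
    if len(selected) != K:
        return False
    # every distinct pair of selected vertices must share an edge (nonzero weight)
    return all(w[i, j] != 0 for i in selected for j in selected if i < j)

# example with the solution found above (ising_sol, w and K come from the previous cells)
print(is_clique_of_size_k(ising_sol, w, K))
```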
true
code
0.397003
null
null
null
null
``` import numpy as np import pandas as pd import matplotlib.pyplot as plt ``` # 1. Decision trees for classification (continued) In the previous session we went over the idea of decision trees: ![DecisionTree](tree1.png) Let us now look at **how the split in each node is made**, i.e. how the **training stage** of the model works. There are at least two reasons to understand this: first, it will let us solve classification problems with 3 or more classes; second, it will let us compute feature *importances* for a trained model. To begin, let us see what kinds of decision trees there are. ---- A decision tree, generally speaking, **does not have to be binary**; in practice, however, binary trees are used, because for any non-binary decision tree **an equivalent binary one can be built** (at the cost of a deeper tree). ### 1. Decision trees use a simple one-dimensional predicate to split the objects This means that in each node the objects are split (and two new nodes are created) **by a single** feature: *all objects whose value of some feature is below a threshold go to one node, and those above it go to the other:* $$ [x_j < t] $$ Generally speaking, this is not required; for example, one could fit any model in each individual node (say, logistic regression or KNN) and look at several features at once. ### 2. Split quality We previously discussed a simple quality functional for a split (**for choosing the threshold**): the number of errors (1 - accuracy). In practice two criteria are used: the Gini impurity index and information gain. **Gini index** $$ I_{Gini} = 1 - \sum_i^K p_i^2 $$ where $K$ is the number of classes and $p_i = \frac{|n_i|}{n}$ is the share of objects of class $i$ in the node. **Entropy** $$ H(p) = - \sum_i^K p_i\log(p_i) $$ **Information gain** $$ IG = H(\text{parent}) - \sum_{\text{children}} \frac{n_{\text{child}}}{n} H(\text{child}) $$ #### The split is made on the threshold and the feature for which the weighted average of the quality functional over the child nodes is smallest. ### 3. Stopping criteria We have already talked about decision-tree parameters such as the minimum number of objects in a leaf and the minimum number of objects a node must contain for it to be split in two. Another criterion is the depth of the tree. Others are possible as well. * A limit on the number of objects in a leaf * A limit on the number of objects in a node required for it to be split * A limit on the depth of the tree * A limit on the minimum gain in entropy or information criterion from a split * Stopping when all objects in a leaf belong to the same class In the previous lecture we discussed a technique called **pruning**; it is an alternative to stopping criteria in which an overfitted tree is built first and then simplified in some way. In practice, for a number of reasons, stopping criteria are used more often than pruning. For details see https://github.com/esokolov/ml-course-hse/blob/master/2018-fall/lecture-notes/lecture07-trees.pdf On splitting continuous features: * http://kevinmeurer.com/a-simple-guide-to-entropy-based-discretization/ * http://clear-lines.com/blog/post/Discretizing-a-continuous-variable-using-Entropy.aspx --- ## 1.1. Evaluating split quality in a node
``` def gini_impurity(y_current): n = y_current.shape[0] val, count = np.unique(y_current, return_counts=True) gini = 1 - ((count/n)**2).sum() return gini def entropy(y_current): # returns sum(p*log(p)), i.e. minus the entropy, hence the sign flip when plotting below n = y_current.shape[0] val, count = np.unique(y_current, return_counts=True) p = count/n igain = p.dot(np.log(p)) return igain n = 100 Y_example = np.zeros((100,100)) for i in range(100): for j in range(i, 100): Y_example[i, j] = 1 gini = [gini_impurity(y) for y in Y_example] ig = [-entropy(y) for y in Y_example] plt.figure(figsize=(7,7)) plt.plot(np.linspace(0,1,100), gini, label='Gini index'); plt.plot(np.linspace(0,1,100), ig, label='Entropy'); plt.legend() plt.xlabel('Share of examples\n of the positive class') plt.ylabel('Value of the optimized\n functional'); ``` ## 1.2. A decision tree at work The **Gini index** and the **information criterion** are measures of how balanced a vector is (how homogeneous the objects in the set are). Heterogeneity is maximal when the classes are represented in equal proportion; homogeneity is maximal when the set contains objects of a single class. By splitting the set of objects into two subsets, we try to reduce the heterogeneity in each subset. Let us look at the example of Fisher's irises. ### Fisher's irises ``` from sklearn.datasets import load_iris from sklearn.tree import DecisionTreeClassifier iris = load_iris() model = DecisionTreeClassifier() model = model.fit(iris.data, iris.target) feature_names = ['sepal length', 'sepal width', 'petal length', 'petal width'] target_names = ['setosa', 'versicolor', 'virginica'] model.feature_importances_ np.array(model.decision_path(iris.data).todense())[0] np.array(model.decision_path(iris.data).todense())[90] iris.data[0] model.predict(iris.data) model.tree_.node_count ``` ### Digits. Interpretability ``` from sklearn.datasets import load_digits X, y = load_digits(n_class=2, return_X_y=True) plt.figure(figsize=(12,12)) for i in range(9): ax = plt.subplot(3,3,i+1) ax.imshow(X[i].reshape(8,8), cmap='gray') from sklearn.metrics import accuracy_score model = DecisionTreeClassifier() model.fit(X, y) y_pred = model.predict(X) print(accuracy_score(y, y_pred)) print(X.shape) np.array(model.decision_path(X).todense())[0] model.feature_importances_ plt.imshow(model.feature_importances_.reshape(8,8)); from sklearn.tree import export_graphviz export_graphviz(model, out_file='tree.dot', filled=True) # #sudo apt-get install graphviz # !dot -Tpng 'tree.dot' -o 'tree.png' # ![Iris_tree](tree.png) np.array(model.decision_path(X).todense())[0] plt.imshow(X[0].reshape(8,8)) ``` ## 2.3. Decision trees generalize easily to multiclass classification ### Example with handwritten digits ``` X, y = load_digits(n_class=10, return_X_y=True) plt.figure(figsize=(12,12)) for i in range(9): ax = plt.subplot(3,3,i+1) ax.imshow(X[i].reshape(8,8), cmap='gray') ax.set_title(y[i]) ax.set_xticks([]) ax.set_yticks([]) model = DecisionTreeClassifier() model.fit(X, y) y_pred = model.predict(X) print(accuracy_score(y, y_pred)) plt.imshow(model.feature_importances_.reshape(8,8)); model.feature_importances_ ``` ### Question: where do the feature importances come from? ## 2.4. An example where a decision tree builds a very complicated decision boundary The example is taken from https://habr.com/ru/company/ods/blog/322534/#slozhnyy-sluchay-dlya-derevev-resheniy . As we remember, trees use a one-dimensional predicate to split the set of objects.
This means that if the data are poorly separable by **each** (individual) feature taken on its own, the resulting decision rule can turn out to be very complicated. ``` from sklearn.tree import DecisionTreeClassifier def form_linearly_separable_data(n=500, x1_min=0, x1_max=30, x2_min=0, x2_max=30): data, target = [], [] for i in range(n): x1, x2 = np.random.randint(x1_min, x1_max), np.random.randint(x2_min, x2_max) if np.abs(x1 - x2) > 0.5: data.append([x1, x2]) target.append(np.sign(x1 - x2)) return np.array(data), np.array(target) X, y = form_linearly_separable_data() plt.figure(figsize=(10,10)) plt.scatter(X[:, 0], X[:, 1], c=y, cmap='autumn'); ``` Let us see what the data look like when projected onto a single axis ``` plt.figure(figsize=(15,5)) ax1 = plt.subplot(1,2,1) ax1.set_title('Projection onto the $X_0$ axis') ax1.hist(X[y==1, 0], alpha=.3); ax1.hist(X[y==-1, 0], alpha=.6); ax2 = plt.subplot(1,2,2) ax2.set_title('Projection onto the $X_1$ axis') ax2.hist(X[y==1, 1], alpha=.3); ax2.hist(X[y==-1, 1], alpha=.6); def get_grid(data, eps=0.01): x_min, x_max = data[:, 0].min() - 1, data[:, 0].max() + 1 y_min, y_max = data[:, 1].min() - 1, data[:, 1].max() + 1 return np.meshgrid(np.arange(x_min, x_max, eps), np.arange(y_min, y_max, eps)) tree = DecisionTreeClassifier(random_state=17).fit(X, y) xx, yy = get_grid(X, eps=.05) predicted = tree.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape) plt.figure(figsize=(10,10)) plt.pcolormesh(xx, yy, predicted, cmap='autumn', alpha=0.3) plt.scatter(X[y==1, 0], X[y==1, 1], marker='x', s=100, cmap='autumn', linewidth=1.5) plt.scatter(X[y==-1, 0], X[y==-1, 1], marker='o', s=100, cmap='autumn', edgecolors='k',linewidth=1.5) plt.title('Easy task. Decision tree complexifies everything'); # export_graphviz(tree, out_file='complex_tree.dot', filled=True) # !dot -Tpng 'complex_tree.dot' -o 'complex_tree.png' ``` ## 2.5. Decision trees for regression (briefly) See sklearn.DecisionTreeRegressor # 3. Ensembles of trees. Random forest. What if we have several classifiers (each of them possibly not very *smart*) that make mistakes on different objects? Then, if we use the *mode* of their predictions as the final prediction, we can expect better predictive power. ### Idea 1 How do we get models that make mistakes in different places? Let us take *weak* trees but train them on **different subsets of the features**! ### Idea 2 How do we get models that make mistakes in different places? Let us take *weak* trees but train them on **different subsets of the objects**! ### The result: the random forest, sklearn.ensemble.RandomForestClassifier (a short example follows below)
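As a minimal illustration of both ideas at once, the sketch below fits sklearn's RandomForestClassifier on the same diagonal dataset from section 2.4: each tree is trained on a bootstrap sample of the objects and considers a random subset of the features at every split. It reuses X and y from the cells above.

```
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# bootstrap=True resamples the objects, max_features controls the feature subsampling per split
forest = RandomForestClassifier(n_estimators=100, max_features=1, bootstrap=True, random_state=17)
print(cross_val_score(forest, X, y, cv=5).mean())
```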
true
code
0.437884
null
null
null
null
# Datasets and Neural Networks This notebook will step through the process of loading an arbitrary dataset in PyTorch, and creating a simple neural network for regression. # Datasets We will first work through loading an arbitrary dataset in PyTorch. For this project, we chose the <a href="http://www.cs.toronto.edu/~delve/data/abalone/desc.html">delve abalone dataset</a>. First, download and unzip the dataset from the link above, then unzip `Dataset.data.gz` and move `Dataset.data` into `hackpack-ml/models/data`. We are given the following attribute information in the spec: ``` Attributes: 1 sex u M F I # Gender or Infant (I) 2 length u (0,Inf] # Longest shell measurement (mm) 3 diameter u (0,Inf] # perpendicular to length (mm) 4 height u (0,Inf] # with meat in shell (mm) 5 whole_weight u (0,Inf] # whole abalone (gr) 6 shucked_weight u (0,Inf] # weight of meat (gr) 7 viscera_weight u (0,Inf] # gut weight (after bleeding) (gr) 8 shell_weight u (0,Inf] # after being dried (gr) 9 rings u 0..29 # +1.5 gives the age in years ``` ``` import math from tqdm import tqdm import torch import torch.nn as nn import torch.optim as optim import torch.utils.data as data import torch.nn.functional as F import pandas as pd from torch.utils.data import Dataset, DataLoader ``` Pandas is a data manipulation library that works really well with structured data. We can use Pandas DataFrames to load the dataset. ``` col_names = ['sex', 'length', 'diameter', 'height', 'whole_weight', 'shucked_weight', 'viscera_weight', 'shell_weight', 'rings'] abalone_df = pd.read_csv('../data/Dataset.data', sep=' ', names=col_names) abalone_df.head(n=3) ``` We define a subclass of PyTorch Dataset for our Abalone dataset. ``` class AbaloneDataset(data.Dataset): """Abalone dataset. Provides quick iteration over rows of data.""" def __init__(self, csv): """ Args: csv (string): Path to the Abalone dataset. """ self.features = ['sex', 'length', 'diameter', 'height', 'whole_weight', 'shucked_weight', 'viscera_weight', 'shell_weight'] self.y = ['rings'] self.abalone_df = pd.read_csv(csv, sep=' ', names=(self.features + self.y)) # Turn categorical data into machine interpretable format (one hot) self.abalone_df['sex'] = pd.get_dummies(self.abalone_df['sex']) def __len__(self): return len(self.abalone_df) def __getitem__(self, idx): """Return (x,y) pair where x are abalone features and y is age.""" features = self.abalone_df.iloc[idx][self.features].values y = self.abalone_df.iloc[idx][self.y] return torch.Tensor(features).float(), torch.Tensor(y).float() ``` # Neural Networks The task is to predict the age (number of rings) of abalone from physical measurements. We build a simple neural network with one hidden layer to model the regression. ``` class Net(nn.Module): def __init__(self, feature_size): super(Net, self).__init__() # feature_size input channels (8), 1 output channels self.fc1 = nn.Linear(feature_size, 4) self.fc2 = nn.Linear(4, 1) def forward(self, x): x = F.relu(self.fc1(x)) x = self.fc2(x) return x ``` We instantiate an Abalone dataset instance and create DataLoaders for train and test sets. 
``` dataset = AbaloneDataset('../data/Dataset.data') train_split, test_split = math.floor(len(dataset) * 0.8), math.ceil(len(dataset) * 0.2) trainset = [dataset[i] for i in range(train_split)] testset = [dataset[train_split + j] for j in range(test_split)] batch_sz = len(trainset) # Compact data allows for big batch size trainloader = data.DataLoader(trainset, batch_size=batch_sz, shuffle=True, num_workers=4) testloader = data.DataLoader(testset, batch_size=batch_sz, shuffle=False, num_workers=4) ``` Now, we can initialize our network and define train and test functions ``` net = Net(len(dataset.features)) loss_fn = nn.MSELoss() optimizer = optim.Adam(net.parameters(), lr=0.1) device = 'cuda' if torch.cuda.is_available() else 'cpu' gpu_ids = [0] # On Colab, we have access to one GPU. Change this value as you see fit def train(epoch): """ Trains our net on data from the trainloader for a single epoch """ net.train() with tqdm(total=len(trainloader.dataset)) as progress_bar: for batch_idx, (inputs, targets) in enumerate(trainloader): inputs, targets = inputs.to(device), targets.to(device) optimizer.zero_grad() # Clear any stored gradients for new step outputs = net(inputs.float()) loss = loss_fn(outputs, targets) # Calculate loss between prediction and label loss.backward() # Backpropagate gradient updates through net based on loss optimizer.step() # Update net weights based on gradients progress_bar.set_postfix(loss=loss.item()) progress_bar.update(inputs.size(0)) def test(epoch): """ Run net in inference mode on test data. """ net.eval() # Ensures the net will not update weights with torch.no_grad(): with tqdm(total=len(testloader.dataset)) as progress_bar: for batch_idx, (inputs, targets) in enumerate(testloader): inputs, targets = inputs.to(device).float(), targets.to(device).float() outputs = net(inputs) loss = loss_fn(outputs, targets) progress_bar.set_postfix(testloss=loss.item()) progress_bar.update(inputs.size(0)) ``` Now that everything is prepared, it's time to train! ``` test_freq = 5 # Frequency to run model on validation data for epoch in range(0, 200): train(epoch) if epoch % test_freq == 0: test(epoch) ``` We use the network's eval mode to do a sample prediction to see how well it does. ``` net.eval() sample = testset[0] predicted_age = net(sample[0]) true_age = sample[1] print(f'Input features: {sample[0]}') print(f'Predicted age: {predicted_age.item()}, True age: {true_age[0]}') ``` Congratulations! You now know how to load your own datasets into PyTorch and run models on it. For an example of Computer Vision, check out the DenseNet notebook. Happy hacking!
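A side note on the split used above: it is positional (the first 80% of rows become the training set), which is fine if the file is already shuffled but can bias the evaluation if the rows are ordered. A sketch of a random split with torch.utils.data.random_split, keeping the same 80/20 proportions, would be:

```
import torch
from torch.utils.data import random_split, DataLoader

n_total = len(dataset)              # dataset from the cell above
n_train = int(0.8 * n_total)
n_test = n_total - n_train

# fixed generator so the split is reproducible across runs
train_ds, test_ds = random_split(dataset, [n_train, n_test],
                                 generator=torch.Generator().manual_seed(42))

trainloader = DataLoader(train_ds, batch_size=n_train, shuffle=True, num_workers=4)
testloader = DataLoader(test_ds, batch_size=n_test, shuffle=False, num_workers=4)
```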
# Optimization with equality constraints ``` import math import numpy as np from scipy import optimize as opt ``` maximize $.4\,\log(x_1)+.6\,\log(x_2)$ s.t. $x_1+3\,x_2=50$. ``` I = 50 p = np.array([1, 3]) U = lambda x: (.4*math.log(x[0])+.6*math.log(x[1])) x0 = (I/len(p))/np.array(p) budget = ({'type': 'eq', 'fun': lambda x: I-np.sum(np.multiply(x, p))}) opt.minimize(lambda x: -U(x), x0, method='SLSQP', constraints=budget, tol=1e-08, options={'disp': True, 'ftol': 1e-08}) def consumer(U, p, I): budget = ({'type': 'eq', 'fun': lambda x: I-np.sum(np.multiply(x, p))}) x0 = (I/len(p))/np.array(p) sol = opt.minimize(lambda x: -U(x), x0, method='SLSQP', constraints=budget, tol=1e-08, options={'disp': False, 'ftol': 1e-08}) if sol.status == 0: return {'x': sol.x, 'V': -sol.fun, 'MgU': -sol.jac, 'mult': -sol.jac[0]/p[0]} else: return 0 consumer(U, p, I) delta=.01 (consumer(U, p, I+delta)['V']-consumer(U, p, I-delta)['V'])/(2*delta) delta=.001 numerador = (consumer(U,p+np.array([delta, 0]), I)['V']-consumer(U,p+np.array([-delta, 0]), I)['V'])/(2*delta) denominador = (consumer(U, p, I+delta)['V']-consumer(U, p, I-delta)['V'])/(2*delta) -numerador/denominador ``` ## Cost function ``` # Production function F = lambda x: (x[0]**.8)*(x[1]**.2) w = np.array([5, 4]) y = 1 constraint = ({'type': 'eq', 'fun': lambda x: y-F(x)}) x0 = np.array([.5, .5]) cost = opt.minimize(lambda x: w@x, x0, method='SLSQP', constraints=constraint, tol=1e-08, options={'disp': True, 'ftol': 1e-08}) F(cost.x) cost ``` ## Exercise ``` a = 2 u = lambda c: -np.exp(-a*c) R = 2 Z2 = np.array([.72, .92, 1.12, 1.32]) Z3 = np.array([.86, .96, 1.06, 1.16]) def U(x): states = len(Z2)*len(Z3) U = u(x[0]) for z2 in Z2: for z3 in Z3: U += (1/states)*u(x[1]*R+x[2]*z2+x[3]*z3) return U p = np.array([1, 1, .5, .5]) I = 4 # a=1 consumer(U, p, I) # a=5 consumer(U, p, I) # a=2 consumer(U, p, I) import matplotlib.pyplot as plt x = np.arange(0.0, 2.0, 0.01) a = 2 u = lambda c: -np.exp(-a*c) plt.plot(x, u(x)) a = -2 plt.plot(x, u(x)) ``` # Optimization with inequality constraints ``` f = lambda x: -x[0]**3+x[1]**2-2*x[0]*(x[2]**2) constraints =({'type': 'eq', 'fun': lambda x: 2*x[0]+x[1]**2+x[2]-5}, {'type': 'ineq', 'fun': lambda x: 5*x[0]**2-x[1]**2-x[2]-2}) constraints =({'type': 'eq', 'fun': lambda x: x[0]**3-x[1]}) x0 = np.array([.5, .5, 2]) opt.minimize(f, x0, method='SLSQP', constraints=constraints, tol=1e-08, options={'disp': True, 'ftol': 1e-08}) ```
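One way to sanity-check an SLSQP result is to keep the returned solution object and evaluate the constraint functions at the reported optimum: residuals should be approximately zero for equality constraints and non-negative for inequalities. The sketch below is only illustrative and assumes the `f`, `constraints` and `x0` objects from the previous cell are still defined.

```
sol = opt.minimize(f, x0, method='SLSQP', constraints=constraints,
                   tol=1e-08, options={'disp': False, 'ftol': 1e-08})

print('converged:', sol.success)
print('x* =', sol.x)
print('f(x*) =', sol.fun)

# Evaluate each constraint at the solution; ~0 for 'eq', >= 0 for 'ineq'
cons = constraints if isinstance(constraints, (list, tuple)) else [constraints]
for i, con in enumerate(cons):
    print('constraint {} ({}): {:.2e}'.format(i, con['type'], con['fun'](sol.x)))
```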
# SAMUR Emergency Frequencies This notebook explores how the frequency of different types of emergency changes with time in relation to different periods (hours of the day, days of the week, months of the year...) and locations in Madrid. This will be useful for constructing a realistic emergency generator in the city simulation. Let's start with some imports and setup, and then read the table. ``` import pandas as pd import datetime import matplotlib.pyplot as plt import yaml %matplotlib inline df = pd.read_csv("../data/emergency_data.csv") df.head() ``` The column for the time of the call is a string, so let's change that into a timestamp. ``` df["time_call"] = pd.to_datetime(df["Solicitud"]) ``` We will also need to assign a numerical code to each district of the city in order to properly vectorize the distribution an make it easier to work along with other parts of the project. ``` district_codes = { 'Centro': 1, 'Arganzuela': 2, 'Retiro': 3, 'Salamanca': 4, 'Chamartín': 5, 'Tetuán': 6, 'Chamberí': 7, 'Fuencarral - El Pardo': 8, 'Moncloa - Aravaca': 9, 'Latina': 10, 'Carabanchel': 11, 'Usera': 12, 'Puente de Vallecas': 13, 'Moratalaz': 14, 'Ciudad Lineal': 15, 'Hortaleza': 16, 'Villaverde': 17, 'Villa de Vallecas': 18, 'Vicálvaro': 19, 'San Blas - Canillejas': 20, 'Barajas': 21, } df["district_code"] = df.Distrito.apply(lambda x: district_codes[x]) ``` Each emergency has already been assigned a severity level, depending on the nature of the reported emergency. ``` df["severity"] = df["Gravedad"] ``` We also need the hour, weekday and month of the event in order to assign it in the various distributions. ``` df["hour"] = df["time_call"].apply(lambda x: x.hour) # From 0 to 23 df["weekday"] = df["time_call"].apply(lambda x: x.weekday()+1) # From 1 (Mon) to 7 (Sun) df["month"] = df["time_call"].apply(lambda x: x.month) ``` Let's also strip down the dataset to just the columns we need right now. ``` df = df[["district_code", "severity", "time_call", "hour", "weekday", "month"]] df.head() ``` We are going to group the distributions by severity. ``` emergencies_per_grav = df.severity.value_counts().sort_index().rename("total_emergencies") emergencies_per_grav ``` We will also need the global frequency of the emergencies: ``` total_seconds = (df.time_call.max()-df.time_call.min()).total_seconds() frequencies_per_grav = (emergencies_per_grav / total_seconds).rename("emergency_frequencies") frequencies_per_grav ``` Each emergency will need to be assigne a district. Assuming independent distribution of emergencies by district and time, each will be assigned to a district according to a global probability based on this dataset, as follows. ``` prob_per_district = (df.district_code.value_counts().sort_index()/df.district_code.value_counts().sum()).rename("distric_weight") prob_per_district ``` In order to be able to simplify the generation of emergencies, we are going to assume that the distributions of emergencies per hour, per weekday and per month are independent, sharing no correlation. This is obiously not fully true, but it is a good approximation for the chosen time-frames. ``` hourly_dist = (df.hour.value_counts()/df.hour.value_counts().mean()).sort_index().rename("hourly_distribution") daily_dist = (df.weekday.value_counts()/df.weekday.value_counts().mean()).sort_index().rename("daily_distribution") monthly_dist = (df.month.value_counts()/df.month.value_counts().mean()).sort_index().rename("monthly_distribution") ``` We will actually make one of these per severity level. 
This will allow us to modify the base emergency density of a given severity as follows: ``` def emergency_density(gravity, hour, weekday, month): base_density = frequencies_per_grav[gravity] density = base_density * hourly_dist[hour] * daily_dist[weekday] * monthly_dist[month] return density emergency_density(3, 12, 4, 5) # Emergency frequency for severity level 3, at 12 hours of a thursday in May ``` In order for the model to read these distributions we will need to store them in a dict-like format, in this case YAML, which is easily readable by human or machine. ``` dists = {} for severity in range(1, 6): sub_df = df[df["severity"] == severity] frequency = float(frequencies_per_grav.round(8)[severity]) hourly_dist = (sub_df.hour. value_counts()/sub_df.hour. value_counts().mean()).sort_index().round(5).to_dict() daily_dist = (sub_df.weekday.value_counts()/sub_df.weekday.value_counts().mean()).sort_index().round(5).to_dict() monthly_dist = (sub_df.month. value_counts()/sub_df.month. value_counts().mean()).sort_index().round(5).to_dict() district_prob = (sub_df.district_code.value_counts()/sub_df.district_code.value_counts().sum()).sort_index().round(5).to_dict() dists[severity] = {"frequency": frequency, "hourly_dist": hourly_dist, "daily_dist": daily_dist, "monthly_dist": monthly_dist, "district_prob": district_prob} f = open("../data/distributions.yaml", "w+") yaml.dump(dists, f, allow_unicode=True) ``` We can now check that the dictionary stored in the YAML file is the same one we have created. ``` with open("../data/distributions.yaml") as dist_file: yaml_dict = yaml.safe_load(dist_file) yaml_dict == dists ```
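To close the loop, here is a rough sketch (not part of the original analysis) of how a simulator might consume the `dists` dictionary we just wrote: it scales the base per-second frequency by the hourly, daily and monthly factors, draws a Poisson count of emergencies for a one-hour window, and assigns each one to a district using the stored probabilities. The function name and interface are illustrative assumptions, not the project's actual generator.

```
import numpy as np

def sample_emergencies(dists, timestamp, window_seconds=3600, rng=None):
    """Draw (severity, district) pairs for a window of `window_seconds` starting at `timestamp`."""
    rng = np.random.default_rng() if rng is None else rng
    emergencies = []
    for severity, d in dists.items():
        # Scale the base frequency (emergencies per second) by the periodic factors
        rate = (d["frequency"]
                * d["hourly_dist"].get(timestamp.hour, 0)
                * d["daily_dist"].get(timestamp.weekday() + 1, 0)
                * d["monthly_dist"].get(timestamp.month, 0)) * window_seconds
        count = rng.poisson(rate)
        districts = list(d["district_prob"].keys())
        probs = np.array(list(d["district_prob"].values()))
        probs = probs / probs.sum()  # re-normalise after rounding
        for district in rng.choice(districts, size=count, p=probs):
            emergencies.append({"severity": severity, "district": int(district)})
    return emergencies

sample_emergencies(dists, datetime.datetime(2021, 5, 6, 12))  # noon on a Thursday in May
```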
# 1 - Sequence to Sequence Learning with Neural Networks In this series we'll be building a machine learning model to go from once sequence to another, using PyTorch and torchtext. This will be done on German to English translations, but the models can be applied to any problem that involves going from one sequence to another, such as summarization, i.e. going from a sequence to a shorter sequence in the same language. In this first notebook, we'll start simple to understand the general concepts by implementing the model from the [Sequence to Sequence Learning with Neural Networks](https://arxiv.org/abs/1409.3215) paper. ## Introduction The most common sequence-to-sequence (seq2seq) models are *encoder-decoder* models, which commonly use a *recurrent neural network* (RNN) to *encode* the source (input) sentence into a single vector. In this notebook, we'll refer to this single vector as a *context vector*. We can think of the context vector as being an abstract representation of the entire input sentence. This vector is then *decoded* by a second RNN which learns to output the target (output) sentence by generating it one word at a time. ![](assets/seq2seq1.png) The above image shows an example translation. The input/source sentence, "guten morgen", is passed through the embedding layer (yellow) and then input into the encoder (green). We also append a *start of sequence* (`<sos>`) and *end of sequence* (`<eos>`) token to the start and end of sentence, respectively. At each time-step, the input to the encoder RNN is both the embedding, $e$, of the current word, $e(x_t)$, as well as the hidden state from the previous time-step, $h_{t-1}$, and the encoder RNN outputs a new hidden state $h_t$. We can think of the hidden state as a vector representation of the sentence so far. The RNN can be represented as a function of both of $e(x_t)$ and $h_{t-1}$: $$h_t = \text{EncoderRNN}(e(x_t), h_{t-1})$$ We're using the term RNN generally here, it could be any recurrent architecture, such as an *LSTM* (Long Short-Term Memory) or a *GRU* (Gated Recurrent Unit). Here, we have $X = \{x_1, x_2, ..., x_T\}$, where $x_1 = \text{<sos>}, x_2 = \text{guten}$, etc. The initial hidden state, $h_0$, is usually either initialized to zeros or a learned parameter. Once the final word, $x_T$, has been passed into the RNN via the embedding layer, we use the final hidden state, $h_T$, as the context vector, i.e. $h_T = z$. This is a vector representation of the entire source sentence. Now we have our context vector, $z$, we can start decoding it to get the output/target sentence, "good morning". Again, we append start and end of sequence tokens to the target sentence. At each time-step, the input to the decoder RNN (blue) is the embedding, $d$, of current word, $d(y_t)$, as well as the hidden state from the previous time-step, $s_{t-1}$, where the initial decoder hidden state, $s_0$, is the context vector, $s_0 = z = h_T$, i.e. the initial decoder hidden state is the final encoder hidden state. Thus, similar to the encoder, we can represent the decoder as: $$s_t = \text{DecoderRNN}(d(y_t), s_{t-1})$$ Although the input/source embedding layer, $e$, and the output/target embedding layer, $d$, are both shown in yellow in the diagram they are two different embedding layers with their own parameters. 
In the decoder, we need to go from the hidden state to an actual word, therefore at each time-step we use $s_t$ to predict (by passing it through a `Linear` layer, shown in purple) what we think is the next word in the sequence, $\hat{y}_t$. $$\hat{y}_t = f(s_t)$$ The words in the decoder are always generated one after another, with one per time-step. We always use `<sos>` for the first input to the decoder, $y_1$, but for subsequent inputs, $y_{t>1}$, we will sometimes use the actual, ground truth next word in the sequence, $y_t$ and sometimes use the word predicted by our decoder, $\hat{y}_{t-1}$. This is called *teacher forcing*, see a bit more info about it [here](https://machinelearningmastery.com/teacher-forcing-for-recurrent-neural-networks/). When training/testing our model, we always know how many words are in our target sentence, so we stop generating words once we hit that many. During inference it is common to keep generating words until the model outputs an `<eos>` token or after a certain amount of words have been generated. Once we have our predicted target sentence, $\hat{Y} = \{ \hat{y}_1, \hat{y}_2, ..., \hat{y}_T \}$, we compare it against our actual target sentence, $Y = \{ y_1, y_2, ..., y_T \}$, to calculate our loss. We then use this loss to update all of the parameters in our model. ## Preparing Data We'll be coding up the models in PyTorch and using torchtext to help us do all of the pre-processing required. We'll also be using spaCy to assist in the tokenization of the data. ``` import torch import torch.nn as nn import torch.optim as optim from torchtext.legacy.datasets import Multi30k from torchtext.legacy.data import Field, BucketIterator import spacy import numpy as np import random import math import time ``` We'll set the random seeds for deterministic results. ``` SEED = 1234 random.seed(SEED) np.random.seed(SEED) torch.manual_seed(SEED) torch.cuda.manual_seed(SEED) torch.backends.cudnn.deterministic = True ``` Next, we'll create the tokenizers. A tokenizer is used to turn a string containing a sentence into a list of individual tokens that make up that string, e.g. "good morning!" becomes ["good", "morning", "!"]. We'll start talking about the sentences being a sequence of tokens from now, instead of saying they're a sequence of words. What's the difference? Well, "good" and "morning" are both words and tokens, but "!" is a token, not a word. spaCy has model for each language ("de_core_news_sm" for German and "en_core_web_sm" for English) which need to be loaded so we can access the tokenizer of each model. **Note**: the models must first be downloaded using the following on the command line: ``` python -m spacy download en_core_web_sm python -m spacy download de_core_news_sm ``` We load the models as such: ``` spacy_de = spacy.load('de_core_news_sm') spacy_en = spacy.load('en_core_web_sm') ``` Next, we create the tokenizer functions. These can be passed to torchtext and will take in the sentence as a string and return the sentence as a list of tokens. In the paper we are implementing, they find it beneficial to reverse the order of the input which they believe "introduces many short term dependencies in the data that make the optimization problem much easier". We copy this by reversing the German sentence after it has been transformed into a list of tokens. 
``` def tokenize_de(text): """ Tokenizes German text from a string into a list of strings (tokens) and reverses it """ return [tok.text for tok in spacy_de.tokenizer(text)][::-1] def tokenize_en(text): """ Tokenizes English text from a string into a list of strings (tokens) """ return [tok.text for tok in spacy_en.tokenizer(text)] ``` torchtext's `Field`s handle how data should be processed. All of the possible arguments are detailed [here](https://github.com/pytorch/text/blob/master/torchtext/data/field.py#L61). We set the `tokenize` argument to the correct tokenization function for each, with German being the `SRC` (source) field and English being the `TRG` (target) field. The field also appends the "start of sequence" and "end of sequence" tokens via the `init_token` and `eos_token` arguments, and converts all words to lowercase. ``` SRC = Field(tokenize = tokenize_de, init_token = '<sos>', eos_token = '<eos>', lower = True) TRG = Field(tokenize = tokenize_en, init_token = '<sos>', eos_token = '<eos>', lower = True) ``` Next, we download and load the train, validation and test data. The dataset we'll be using is the [Multi30k dataset](https://github.com/multi30k/dataset). This is a dataset with ~30,000 parallel English, German and French sentences, each with ~12 words per sentence. `exts` specifies which languages to use as the source and target (source goes first) and `fields` specifies which field to use for the source and target. ``` train_data, valid_data, test_data = Multi30k.splits(exts = ('.de', '.en'), fields = (SRC, TRG)) ``` We can double check that we've loaded the right number of examples: ``` print(f"Number of training examples: {len(train_data.examples)}") print(f"Number of validation examples: {len(valid_data.examples)}") print(f"Number of testing examples: {len(test_data.examples)}") ``` We can also print out an example, making sure the source sentence is reversed: ``` print(vars(train_data.examples[0])) ``` The period is at the beginning of the German (src) sentence, so it looks like the sentence has been correctly reversed. Next, we'll build the *vocabulary* for the source and target languages. The vocabulary is used to associate each unique token with an index (an integer). The vocabularies of the source and target languages are distinct. Using the `min_freq` argument, we only allow tokens that appear at least 2 times to appear in our vocabulary. Tokens that appear only once are converted into an `<unk>` (unknown) token. It is important to note that our vocabulary should only be built from the training set and not the validation/test set. This prevents "information leakage" into our model, giving us artifically inflated validation/test scores. ``` SRC.build_vocab(train_data, min_freq = 2) TRG.build_vocab(train_data, min_freq = 2) print(f"Unique tokens in source (de) vocabulary: {len(SRC.vocab)}") print(f"Unique tokens in target (en) vocabulary: {len(TRG.vocab)}") ``` The final step of preparing the data is to create the iterators. These can be iterated on to return a batch of data which will have a `src` attribute (the PyTorch tensors containing a batch of numericalized source sentences) and a `trg` attribute (the PyTorch tensors containing a batch of numericalized target sentences). Numericalized is just a fancy way of saying they have been converted from a sequence of readable tokens to a sequence of corresponding indexes, using the vocabulary. We also need to define a `torch.device`. This is used to tell torchText to put the tensors on the GPU or not. 
We use the `torch.cuda.is_available()` function, which will return `True` if a GPU is detected on our computer. We pass this `device` to the iterator. When we get a batch of examples using an iterator we need to make sure that all of the source sentences are padded to the same length, the same with the target sentences. Luckily, torchText iterators handle this for us! We use a `BucketIterator` instead of the standard `Iterator` as it creates batches in such a way that it minimizes the amount of padding in both the source and target sentences. ``` device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') BATCH_SIZE = 128 train_iterator, valid_iterator, test_iterator = BucketIterator.splits( (train_data, valid_data, test_data), batch_size = BATCH_SIZE, device = device) ``` ## Building the Seq2Seq Model We'll be building our model in three parts. The encoder, the decoder and a seq2seq model that encapsulates the encoder and decoder and will provide a way to interface with each. ### Encoder First, the encoder, a 2 layer LSTM. The paper we are implementing uses a 4-layer LSTM, but in the interest of training time we cut this down to 2-layers. The concept of multi-layer RNNs is easy to expand from 2 to 4 layers. For a multi-layer RNN, the input sentence, $X$, after being embedded goes into the first (bottom) layer of the RNN and hidden states, $H=\{h_1, h_2, ..., h_T\}$, output by this layer are used as inputs to the RNN in the layer above. Thus, representing each layer with a superscript, the hidden states in the first layer are given by: $$h_t^1 = \text{EncoderRNN}^1(e(x_t), h_{t-1}^1)$$ The hidden states in the second layer are given by: $$h_t^2 = \text{EncoderRNN}^2(h_t^1, h_{t-1}^2)$$ Using a multi-layer RNN also means we'll also need an initial hidden state as input per layer, $h_0^l$, and we will also output a context vector per layer, $z^l$. Without going into too much detail about LSTMs (see [this](https://colah.github.io/posts/2015-08-Understanding-LSTMs/) blog post to learn more about them), all we need to know is that they're a type of RNN which instead of just taking in a hidden state and returning a new hidden state per time-step, also take in and return a *cell state*, $c_t$, per time-step. $$\begin{align*} h_t &= \text{RNN}(e(x_t), h_{t-1})\\ (h_t, c_t) &= \text{LSTM}(e(x_t), h_{t-1}, c_{t-1}) \end{align*}$$ We can just think of $c_t$ as another type of hidden state. Similar to $h_0^l$, $c_0^l$ will be initialized to a tensor of all zeros. Also, our context vector will now be both the final hidden state and the final cell state, i.e. $z^l = (h_T^l, c_T^l)$. Extending our multi-layer equations to LSTMs, we get: $$\begin{align*} (h_t^1, c_t^1) &= \text{EncoderLSTM}^1(e(x_t), (h_{t-1}^1, c_{t-1}^1))\\ (h_t^2, c_t^2) &= \text{EncoderLSTM}^2(h_t^1, (h_{t-1}^2, c_{t-1}^2)) \end{align*}$$ Note how only our hidden state from the first layer is passed as input to the second layer, and not the cell state. So our encoder looks something like this: ![](assets/seq2seq2.png) We create this in code by making an `Encoder` module, which requires we inherit from `torch.nn.Module` and use the `super().__init__()` as some boilerplate code. The encoder takes the following arguments: - `input_dim` is the size/dimensionality of the one-hot vectors that will be input to the encoder. This is equal to the input (source) vocabulary size. - `emb_dim` is the dimensionality of the embedding layer. This layer converts the one-hot vectors into dense vectors with `emb_dim` dimensions. 
- `hid_dim` is the dimensionality of the hidden and cell states. - `n_layers` is the number of layers in the RNN. - `dropout` is the amount of dropout to use. This is a regularization parameter to prevent overfitting. Check out [this](https://www.coursera.org/lecture/deep-neural-network/understanding-dropout-YaGbR) for more details about dropout. We aren't going to discuss the embedding layer in detail during these tutorials. All we need to know is that there is a step before the words - technically, the indexes of the words - are passed into the RNN, where the words are transformed into vectors. To read more about word embeddings, check these articles: [1](https://monkeylearn.com/blog/word-embeddings-transform-text-numbers/), [2](http://p.migdal.pl/2017/01/06/king-man-woman-queen-why.html), [3](http://mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model/), [4](http://mccormickml.com/2017/01/11/word2vec-tutorial-part-2-negative-sampling/). The embedding layer is created using `nn.Embedding`, the LSTM with `nn.LSTM` and a dropout layer with `nn.Dropout`. Check the PyTorch [documentation](https://pytorch.org/docs/stable/nn.html) for more about these. One thing to note is that the `dropout` argument to the LSTM is how much dropout to apply between the layers of a multi-layer RNN, i.e. between the hidden states output from layer $l$ and those same hidden states being used for the input of layer $l+1$. In the `forward` method, we pass in the source sentence, $X$, which is converted into dense vectors using the `embedding` layer, and then dropout is applied. These embeddings are then passed into the RNN. As we pass a whole sequence to the RNN, it will automatically do the recurrent calculation of the hidden states over the whole sequence for us! Notice that we do not pass an initial hidden or cell state to the RNN. This is because, as noted in the [documentation](https://pytorch.org/docs/stable/nn.html#torch.nn.LSTM), that if no hidden/cell state is passed to the RNN, it will automatically create an initial hidden/cell state as a tensor of all zeros. The RNN returns: `outputs` (the top-layer hidden state for each time-step), `hidden` (the final hidden state for each layer, $h_T$, stacked on top of each other) and `cell` (the final cell state for each layer, $c_T$, stacked on top of each other). As we only need the final hidden and cell states (to make our context vector), `forward` only returns `hidden` and `cell`. The sizes of each of the tensors is left as comments in the code. In this implementation `n_directions` will always be 1, however note that bidirectional RNNs (covered in tutorial 3) will have `n_directions` as 2. ``` class Encoder(nn.Module): def __init__(self, input_dim, emb_dim, hid_dim, n_layers, dropout): super().__init__() self.hid_dim = hid_dim self.n_layers = n_layers self.embedding = nn.Embedding(input_dim, emb_dim) self.rnn = nn.LSTM(emb_dim, hid_dim, n_layers, dropout = dropout) self.dropout = nn.Dropout(dropout) def forward(self, src): #src = [src len, batch size] embedded = self.dropout(self.embedding(src)) #embedded = [src len, batch size, emb dim] outputs, (hidden, cell) = self.rnn(embedded) #outputs = [src len, batch size, hid dim * n directions] #hidden = [n layers * n directions, batch size, hid dim] #cell = [n layers * n directions, batch size, hid dim] #outputs are always from the top hidden layer return hidden, cell ``` ### Decoder Next, we'll build our decoder, which will also be a 2-layer (4 in the paper) LSTM. 
![](assets/seq2seq3.png) The `Decoder` class does a single step of decoding, i.e. it ouputs single token per time-step. The first layer will receive a hidden and cell state from the previous time-step, $(s_{t-1}^1, c_{t-1}^1)$, and feeds it through the LSTM with the current embedded token, $y_t$, to produce a new hidden and cell state, $(s_t^1, c_t^1)$. The subsequent layers will use the hidden state from the layer below, $s_t^{l-1}$, and the previous hidden and cell states from their layer, $(s_{t-1}^l, c_{t-1}^l)$. This provides equations very similar to those in the encoder. $$\begin{align*} (s_t^1, c_t^1) = \text{DecoderLSTM}^1(d(y_t), (s_{t-1}^1, c_{t-1}^1))\\ (s_t^2, c_t^2) = \text{DecoderLSTM}^2(s_t^1, (s_{t-1}^2, c_{t-1}^2)) \end{align*}$$ Remember that the initial hidden and cell states to our decoder are our context vectors, which are the final hidden and cell states of our encoder from the same layer, i.e. $(s_0^l,c_0^l)=z^l=(h_T^l,c_T^l)$. We then pass the hidden state from the top layer of the RNN, $s_t^L$, through a linear layer, $f$, to make a prediction of what the next token in the target (output) sequence should be, $\hat{y}_{t+1}$. $$\hat{y}_{t+1} = f(s_t^L)$$ The arguments and initialization are similar to the `Encoder` class, except we now have an `output_dim` which is the size of the vocabulary for the output/target. There is also the addition of the `Linear` layer, used to make the predictions from the top layer hidden state. Within the `forward` method, we accept a batch of input tokens, previous hidden states and previous cell states. As we are only decoding one token at a time, the input tokens will always have a sequence length of 1. We `unsqueeze` the input tokens to add a sentence length dimension of 1. Then, similar to the encoder, we pass through an embedding layer and apply dropout. This batch of embedded tokens is then passed into the RNN with the previous hidden and cell states. This produces an `output` (hidden state from the top layer of the RNN), a new `hidden` state (one for each layer, stacked on top of each other) and a new `cell` state (also one per layer, stacked on top of each other). We then pass the `output` (after getting rid of the sentence length dimension) through the linear layer to receive our `prediction`. We then return the `prediction`, the new `hidden` state and the new `cell` state. **Note**: as we always have a sequence length of 1, we could use `nn.LSTMCell`, instead of `nn.LSTM`, as it is designed to handle a batch of inputs that aren't necessarily in a sequence. `nn.LSTMCell` is just a single cell and `nn.LSTM` is a wrapper around potentially multiple cells. Using the `nn.LSTMCell` in this case would mean we don't have to `unsqueeze` to add a fake sequence length dimension, but we would need one `nn.LSTMCell` per layer in the decoder and to ensure each `nn.LSTMCell` receives the correct initial hidden state from the encoder. All of this makes the code less concise - hence the decision to stick with the regular `nn.LSTM`. 
``` class Decoder(nn.Module): def __init__(self, output_dim, emb_dim, hid_dim, n_layers, dropout): super().__init__() self.output_dim = output_dim self.hid_dim = hid_dim self.n_layers = n_layers self.embedding = nn.Embedding(output_dim, emb_dim) self.rnn = nn.LSTM(emb_dim, hid_dim, n_layers, dropout = dropout) self.fc_out = nn.Linear(hid_dim, output_dim) self.dropout = nn.Dropout(dropout) def forward(self, input, hidden, cell): #input = [batch size] #hidden = [n layers * n directions, batch size, hid dim] #cell = [n layers * n directions, batch size, hid dim] #n directions in the decoder will both always be 1, therefore: #hidden = [n layers, batch size, hid dim] #context = [n layers, batch size, hid dim] input = input.unsqueeze(0) #input = [1, batch size] embedded = self.dropout(self.embedding(input)) #embedded = [1, batch size, emb dim] output, (hidden, cell) = self.rnn(embedded, (hidden, cell)) #output = [seq len, batch size, hid dim * n directions] #hidden = [n layers * n directions, batch size, hid dim] #cell = [n layers * n directions, batch size, hid dim] #seq len and n directions will always be 1 in the decoder, therefore: #output = [1, batch size, hid dim] #hidden = [n layers, batch size, hid dim] #cell = [n layers, batch size, hid dim] prediction = self.fc_out(output.squeeze(0)) #prediction = [batch size, output dim] return prediction, hidden, cell ``` ### Seq2Seq For the final part of the implemenetation, we'll implement the seq2seq model. This will handle: - receiving the input/source sentence - using the encoder to produce the context vectors - using the decoder to produce the predicted output/target sentence Our full model will look like this: ![](assets/seq2seq4.png) The `Seq2Seq` model takes in an `Encoder`, `Decoder`, and a `device` (used to place tensors on the GPU, if it exists). For this implementation, we have to ensure that the number of layers and the hidden (and cell) dimensions are equal in the `Encoder` and `Decoder`. This is not always the case, we do not necessarily need the same number of layers or the same hidden dimension sizes in a sequence-to-sequence model. However, if we did something like having a different number of layers then we would need to make decisions about how this is handled. For example, if our encoder has 2 layers and our decoder only has 1, how is this handled? Do we average the two context vectors output by the decoder? Do we pass both through a linear layer? Do we only use the context vector from the highest layer? Etc. Our `forward` method takes the source sentence, target sentence and a teacher-forcing ratio. The teacher forcing ratio is used when training our model. When decoding, at each time-step we will predict what the next token in the target sequence will be from the previous tokens decoded, $\hat{y}_{t+1}=f(s_t^L)$. With probability equal to the teaching forcing ratio (`teacher_forcing_ratio`) we will use the actual ground-truth next token in the sequence as the input to the decoder during the next time-step. However, with probability `1 - teacher_forcing_ratio`, we will use the token that the model predicted as the next input to the model, even if it doesn't match the actual next token in the sequence. The first thing we do in the `forward` method is to create an `outputs` tensor that will store all of our predictions, $\hat{Y}$. We then feed the input/source sentence, `src`, into the encoder and receive out final hidden and cell states. The first input to the decoder is the start of sequence (`<sos>`) token. 
As our `trg` tensor already has the `<sos>` token appended (all the way back when we defined the `init_token` in our `TRG` field) we get our $y_1$ by slicing into it. We know how long our target sentences should be (`max_len`), so we loop that many times. The last token input into the decoder is the one **before** the `<eos>` token - the `<eos>` token is never input into the decoder. During each iteration of the loop, we: - pass the input, previous hidden and previous cell states ($y_t, s_{t-1}, c_{t-1}$) into the decoder - receive a prediction, next hidden state and next cell state ($\hat{y}_{t+1}, s_{t}, c_{t}$) from the decoder - place our prediction, $\hat{y}_{t+1}$/`output` in our tensor of predictions, $\hat{Y}$/`outputs` - decide if we are going to "teacher force" or not - if we do, the next `input` is the ground-truth next token in the sequence, $y_{t+1}$/`trg[t]` - if we don't, the next `input` is the predicted next token in the sequence, $\hat{y}_{t+1}$/`top1`, which we get by doing an `argmax` over the output tensor Once we've made all of our predictions, we return our tensor full of predictions, $\hat{Y}$/`outputs`. **Note**: our decoder loop starts at 1, not 0. This means the 0th element of our `outputs` tensor remains all zeros. So our `trg` and `outputs` look something like: $$\begin{align*} \text{trg} = [<sos>, &y_1, y_2, y_3, <eos>]\\ \text{outputs} = [0, &\hat{y}_1, \hat{y}_2, \hat{y}_3, <eos>] \end{align*}$$ Later on when we calculate the loss, we cut off the first element of each tensor to get: $$\begin{align*} \text{trg} = [&y_1, y_2, y_3, <eos>]\\ \text{outputs} = [&\hat{y}_1, \hat{y}_2, \hat{y}_3, <eos>] \end{align*}$$ ``` class Seq2Seq(nn.Module): def __init__(self, encoder, decoder, device): super().__init__() self.encoder = encoder self.decoder = decoder self.device = device assert encoder.hid_dim == decoder.hid_dim, \ "Hidden dimensions of encoder and decoder must be equal!" assert encoder.n_layers == decoder.n_layers, \ "Encoder and decoder must have equal number of layers!" def forward(self, src, trg, teacher_forcing_ratio = 0.5): #src = [src len, batch size] #trg = [trg len, batch size] #teacher_forcing_ratio is probability to use teacher forcing #e.g. if teacher_forcing_ratio is 0.75 we use ground-truth inputs 75% of the time batch_size = trg.shape[1] trg_len = trg.shape[0] trg_vocab_size = self.decoder.output_dim #tensor to store decoder outputs outputs = torch.zeros(trg_len, batch_size, trg_vocab_size).to(self.device) #last hidden state of the encoder is used as the initial hidden state of the decoder hidden, cell = self.encoder(src) #first input to the decoder is the <sos> tokens input = trg[0,:] for t in range(1, trg_len): #insert input token embedding, previous hidden and previous cell states #receive output tensor (predictions) and new hidden and cell states output, hidden, cell = self.decoder(input, hidden, cell) #place predictions in a tensor holding predictions for each token outputs[t] = output #decide if we are going to use teacher forcing or not teacher_force = random.random() < teacher_forcing_ratio #get the highest predicted token from our predictions top1 = output.argmax(1) #if teacher forcing, use actual next token as next input #if not, use predicted token input = trg[t] if teacher_force else top1 return outputs ``` # Training the Seq2Seq Model Now we have our model implemented, we can begin training it. First, we'll initialize our model. As mentioned before, the input and output dimensions are defined by the size of the vocabulary. 
The embedding dimesions and dropout for the encoder and decoder can be different, but the number of layers and the size of the hidden/cell states must be the same. We then define the encoder, decoder and then our Seq2Seq model, which we place on the `device`. ``` INPUT_DIM = len(SRC.vocab) OUTPUT_DIM = len(TRG.vocab) ENC_EMB_DIM = 256 DEC_EMB_DIM = 256 HID_DIM = 512 N_LAYERS = 2 ENC_DROPOUT = 0.5 DEC_DROPOUT = 0.5 enc = Encoder(INPUT_DIM, ENC_EMB_DIM, HID_DIM, N_LAYERS, ENC_DROPOUT) dec = Decoder(OUTPUT_DIM, DEC_EMB_DIM, HID_DIM, N_LAYERS, DEC_DROPOUT) model = Seq2Seq(enc, dec, device).to(device) ``` Next up is initializing the weights of our model. In the paper they state they initialize all weights from a uniform distribution between -0.08 and +0.08, i.e. $\mathcal{U}(-0.08, 0.08)$. We initialize weights in PyTorch by creating a function which we `apply` to our model. When using `apply`, the `init_weights` function will be called on every module and sub-module within our model. For each module we loop through all of the parameters and sample them from a uniform distribution with `nn.init.uniform_`. ``` def init_weights(m): for name, param in m.named_parameters(): nn.init.uniform_(param.data, -0.08, 0.08) model.apply(init_weights) ``` We also define a function that will calculate the number of trainable parameters in the model. ``` def count_parameters(model): return sum(p.numel() for p in model.parameters() if p.requires_grad) print(f'The model has {count_parameters(model):,} trainable parameters') ``` We define our optimizer, which we use to update our parameters in the training loop. Check out [this](http://ruder.io/optimizing-gradient-descent/) post for information about different optimizers. Here, we'll use Adam. ``` optimizer = optim.Adam(model.parameters()) ``` Next, we define our loss function. The `CrossEntropyLoss` function calculates both the log softmax as well as the negative log-likelihood of our predictions. Our loss function calculates the average loss per token, however by passing the index of the `<pad>` token as the `ignore_index` argument we ignore the loss whenever the target token is a padding token. ``` TRG_PAD_IDX = TRG.vocab.stoi[TRG.pad_token] criterion = nn.CrossEntropyLoss(ignore_index = TRG_PAD_IDX) ``` Next, we'll define our training loop. First, we'll set the model into "training mode" with `model.train()`. This will turn on dropout (and batch normalization, which we aren't using) and then iterate through our data iterator. As stated before, our decoder loop starts at 1, not 0. This means the 0th element of our `outputs` tensor remains all zeros. 
So our `trg` and `outputs` look something like: $$\begin{align*} \text{trg} = [<sos>, &y_1, y_2, y_3, <eos>]\\ \text{outputs} = [0, &\hat{y}_1, \hat{y}_2, \hat{y}_3, <eos>] \end{align*}$$ Here, when we calculate the loss, we cut off the first element of each tensor to get: $$\begin{align*} \text{trg} = [&y_1, y_2, y_3, <eos>]\\ \text{outputs} = [&\hat{y}_1, \hat{y}_2, \hat{y}_3, <eos>] \end{align*}$$ At each iteration: - get the source and target sentences from the batch, $X$ and $Y$ - zero the gradients calculated from the last batch - feed the source and target into the model to get the output, $\hat{Y}$ - as the loss function only works on 2d inputs with 1d targets we need to flatten each of them with `.view` - we slice off the first column of the output and target tensors as mentioned above - calculate the gradients with `loss.backward()` - clip the gradients to prevent them from exploding (a common issue in RNNs) - update the parameters of our model by doing an optimizer step - sum the loss value to a running total Finally, we return the loss that is averaged over all batches. ``` def train(model, iterator, optimizer, criterion, clip): model.train() epoch_loss = 0 for i, batch in enumerate(iterator): src = batch.src trg = batch.trg optimizer.zero_grad() output = model(src, trg) #trg = [trg len, batch size] #output = [trg len, batch size, output dim] output_dim = output.shape[-1] output = output[1:].view(-1, output_dim) trg = trg[1:].view(-1) #trg = [(trg len - 1) * batch size] #output = [(trg len - 1) * batch size, output dim] loss = criterion(output, trg) loss.backward() torch.nn.utils.clip_grad_norm_(model.parameters(), clip) optimizer.step() epoch_loss += loss.item() return epoch_loss / len(iterator) ``` Our evaluation loop is similar to our training loop, however as we aren't updating any parameters we don't need to pass an optimizer or a clip value. We must remember to set the model to evaluation mode with `model.eval()`. This will turn off dropout (and batch normalization, if used). We use the `with torch.no_grad()` block to ensure no gradients are calculated within the block. This reduces memory consumption and speeds things up. The iteration loop is similar (without the parameter updates), however we must ensure we turn teacher forcing off for evaluation. This will cause the model to only use it's own predictions to make further predictions within a sentence, which mirrors how it would be used in deployment. ``` def evaluate(model, iterator, criterion): model.eval() epoch_loss = 0 with torch.no_grad(): for i, batch in enumerate(iterator): src = batch.src trg = batch.trg output = model(src, trg, 0) #turn off teacher forcing #trg = [trg len, batch size] #output = [trg len, batch size, output dim] output_dim = output.shape[-1] output = output[1:].view(-1, output_dim) trg = trg[1:].view(-1) #trg = [(trg len - 1) * batch size] #output = [(trg len - 1) * batch size, output dim] loss = criterion(output, trg) epoch_loss += loss.item() return epoch_loss / len(iterator) ``` Next, we'll create a function that we'll use to tell us how long an epoch takes. ``` def epoch_time(start_time, end_time): elapsed_time = end_time - start_time elapsed_mins = int(elapsed_time / 60) elapsed_secs = int(elapsed_time - (elapsed_mins * 60)) return elapsed_mins, elapsed_secs ``` We can finally start training our model! At each epoch, we'll be checking if our model has achieved the best validation loss so far. 
If it has, we'll update our best validation loss and save the parameters of our model (called `state_dict` in PyTorch). Then, when we come to test our model, we'll use the saved parameters used to achieve the best validation loss.

We'll be printing out both the loss and the perplexity at each epoch. It is easier to see a change in perplexity than a change in loss as the numbers are much bigger.

```
N_EPOCHS = 10
CLIP = 1

best_valid_loss = float('inf')

for epoch in range(N_EPOCHS):

    start_time = time.time()

    train_loss = train(model, train_iterator, optimizer, criterion, CLIP)
    valid_loss = evaluate(model, valid_iterator, criterion)

    end_time = time.time()

    epoch_mins, epoch_secs = epoch_time(start_time, end_time)

    if valid_loss < best_valid_loss:
        best_valid_loss = valid_loss
        torch.save(model.state_dict(), 'tut1-model.pt')

    print(f'Epoch: {epoch+1:02} | Time: {epoch_mins}m {epoch_secs}s')
    print(f'\tTrain Loss: {train_loss:.3f} | Train PPL: {math.exp(train_loss):7.3f}')
    print(f'\t Val. Loss: {valid_loss:.3f} | Val. PPL: {math.exp(valid_loss):7.3f}')
```

We'll load the parameters (`state_dict`) that gave our model the best validation loss and run the model on the test set.

```
model.load_state_dict(torch.load('tut1-model.pt'))

test_loss = evaluate(model, test_iterator, criterion)

print(f'| Test Loss: {test_loss:.3f} | Test PPL: {math.exp(test_loss):7.3f} |')
```

In the following notebook we'll implement a model that achieves improved test perplexity, but only uses a single layer in the encoder and the decoder.
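Before moving on, it can be instructive to look at an actual translation rather than just the perplexity. The sketch below is an optional extra: it greedily decodes a single German sentence using the trained `model`, the `spacy_de` tokenizer and the `SRC`/`TRG` fields defined above. The example sentence is arbitrary and the function is meant as an illustration rather than a polished inference routine.

```
def translate_sentence(sentence, src_field, trg_field, model, device, max_len=50):
    model.eval()
    # Tokenize, lowercase and reverse the source (as was done during training), then add <sos>/<eos>
    tokens = [tok.text.lower() for tok in spacy_de.tokenizer(sentence)][::-1]
    tokens = [src_field.init_token] + tokens + [src_field.eos_token]
    src_indexes = [src_field.vocab.stoi[token] for token in tokens]
    src_tensor = torch.LongTensor(src_indexes).unsqueeze(1).to(device)  # [src len, 1]
    with torch.no_grad():
        hidden, cell = model.encoder(src_tensor)
    trg_indexes = [trg_field.vocab.stoi[trg_field.init_token]]
    for _ in range(max_len):
        trg_tensor = torch.LongTensor([trg_indexes[-1]]).to(device)  # [1]
        with torch.no_grad():
            output, hidden, cell = model.decoder(trg_tensor, hidden, cell)
        pred_token = output.argmax(1).item()
        trg_indexes.append(pred_token)
        if pred_token == trg_field.vocab.stoi[trg_field.eos_token]:
            break
    return [trg_field.vocab.itos[i] for i in trg_indexes[1:]]

translate_sentence('ein mann geht die straße entlang.', SRC, TRG, model, device)
```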
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. # Training Pipeline - Custom Script _**Training many models using a custom script**_ ---- This notebook demonstrates how to create a pipeline that trains and registers many models using a custom script. We utilize the [ParallelRunStep](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-use-parallel-run-step) to parallelize the process of training the models to make the process more efficient. For this solution accelerator we are using the [OJ Sales Dataset](https://azure.microsoft.com/en-us/services/open-datasets/catalog/sample-oj-sales-simulated/) to train individual models that predict sales for each store and brand of orange juice. The model we use here is a simple, regression-based forecaster built on scikit-learn and pandas utilities. See the [training script](scripts/train.py) to see how the forecaster is constructed. This forecaster is intended for demonstration purposes, so it does not handle the large variety of special cases that one encounters in time-series modeling. For instance, the model here assumes that all time-series are comprised of regularly sampled observations on a contiguous interval with no missing values. The model does not include any handling of categorical variables. For a more general-use forecaster that handles missing data, advanced featurization, and automatic model selection, see the [AutoML Forecasting task](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-auto-train-forecast). Also, see the notebooks demonstrating [AutoML forecasting in a many models scenario](../Automated_ML). ### Prerequisites At this point, you should have already: 1. Created your AML Workspace using the [00_Setup_AML_Workspace notebook](../00_Setup_AML_Workspace.ipynb) 2. Run [01_Data_Preparation.ipynb](../01_Data_Preparation.ipynb) to setup your compute and create the dataset #### Please ensure you have the latest version of the Azure ML SDK and also install Pipeline Steps Package ``` #!pip install --upgrade azureml-sdk # !pip install azureml-pipeline-steps ``` ## 1.0 Connect to workspace and datastore ``` from azureml.core import Workspace # set up workspace ws = Workspace.from_config() # set up datastores dstore = ws.get_default_datastore() print('Workspace Name: ' + ws.name, 'Azure Region: ' + ws.location, 'Subscription Id: ' + ws.subscription_id, 'Resource Group: ' + ws.resource_group, sep = '\n') ``` ## 2.0 Create an experiment ``` from azureml.core import Experiment experiment = Experiment(ws, 'oj_training_pipeline') print('Experiment name: ' + experiment.name) ``` ## 3.0 Get the training Dataset Next, we get the training Dataset using the [Dataset.get_by_name()](https://docs.microsoft.com/python/api/azureml-core/azureml.core.dataset.dataset#get-by-name-workspace--name--version--latest--) method. This is the training dataset we created and registered in the [data preparation notebook](../01_Data_Preparation.ipynb). If you chose to use only a subset of the files, the training dataset name will be `oj_data_small_train`. Otherwise, the name you'll have to use is `oj_data_train`. We recommend to start with the small dataset and make sure everything runs successfully, then scale up to the full dataset. 
``` dataset_name = 'oj_data_small_train' from azureml.core.dataset import Dataset dataset = Dataset.get_by_name(ws, name=dataset_name) dataset_input = dataset.as_named_input(dataset_name) ``` ## 4.0 Create the training pipeline Now that the workspace, experiment, and dataset are set up, we can put together a pipeline for training. ### 4.1 Configure environment for ParallelRunStep An [environment](https://docs.microsoft.com/en-us/azure/machine-learning/concept-environments) defines a collection of resources that we will need to run our pipelines. We configure a reproducible Python environment for our training script including the [scikit-learn](https://scikit-learn.org/stable/index.html) python library. ``` from azureml.core import Environment from azureml.core.conda_dependencies import CondaDependencies train_env = Environment(name="many_models_environment") train_conda_deps = CondaDependencies.create(pip_packages=['sklearn', 'pandas', 'joblib', 'azureml-defaults', 'azureml-core', 'azureml-dataprep[fuse]']) train_env.python.conda_dependencies = train_conda_deps ``` ### 4.2 Choose a compute target Currently ParallelRunConfig only supports AMLCompute. This is the compute cluster you created in the [setup notebook](../00_Setup_AML_Workspace.ipynb#3.0-Create-compute-cluster). ``` cpu_cluster_name = "cpucluster" from azureml.core.compute import AmlCompute compute = AmlCompute(ws, cpu_cluster_name) ``` ### 4.3 Set up ParallelRunConfig [ParallelRunConfig](https://docs.microsoft.com/en-us/python/api/azureml-pipeline-steps/azureml.pipeline.steps.parallel_run_config.parallelrunconfig?view=azure-ml-py) provides the configuration for the ParallelRunStep we'll be creating next. Here we specify the environment and compute target we created above along with the entry script that will be for each batch. There's a number of important parameters to configure including: - **mini_batch_size**: The number of files per batch. If you have 500 files and mini_batch_size is 10, 50 batches would be created containing 10 files each. Batches are split across the various nodes. - **node_count**: The number of compute nodes to be used for running the user script. For the small sample of OJ datasets, we only need a single node, but you will likely need to increase this number for larger datasets composed of more files. If you increase the node count beyond five here, you may need to increase the max_nodes for the compute cluster as well. - **process_count_per_node**: The number of processes per node. The compute cluster we are using has 8 cores so we set this parameter to 8. - **run_invocation_timeout**: The run() method invocation timeout in seconds. The timeout should be set to be higher than the maximum training time of one model (in seconds), by default it's 60. Since the batches that takes the longest to train are about 120 seconds, we set it to be 180 to ensure the method has adequate time to run. We also added tags to preserve the information about our training cluster's node count, process count per node, and dataset name. You can find the 'Tags' column in Azure Machine Learning Studio. 
``` from azureml.pipeline.steps import ParallelRunConfig processes_per_node = 8 node_count = 1 timeout = 180 parallel_run_config = ParallelRunConfig( source_directory='./scripts', entry_script='train.py', mini_batch_size="1", run_invocation_timeout=timeout, error_threshold=-1, output_action="append_row", environment=train_env, process_count_per_node=processes_per_node, compute_target=compute, node_count=node_count) ``` ### 4.4 Set up ParallelRunStep This [ParallelRunStep](https://docs.microsoft.com/en-us/python/api/azureml-pipeline-steps/azureml.pipeline.steps.parallel_run_step.parallelrunstep?view=azure-ml-py) is the main step in our training pipeline. First, we set up the output directory and define the pipeline's output name. The datastore that stores the pipeline's output data is Workspace's default datastore. ``` from azureml.pipeline.core import PipelineData output_dir = PipelineData(name="training_output", datastore=dstore) ``` We provide our ParallelRunStep with a name, the ParallelRunConfig created above and several other parameters: - **inputs**: A list of input datasets. Here we'll use the dataset created in the previous notebook. The number of files in that path determines the number of models will be trained in the ParallelRunStep. - **output**: A PipelineData object that corresponds to the output directory. We'll use the output directory we just defined. - **arguments**: A list of arguments required for the train.py entry script. Here, we provide the schema for the timeseries data - i.e. the names of target, timestamp, and id columns - as well as columns that should be dropped prior to modeling, a string identifying the model type, and the number of observations we want to leave aside for testing. ``` from azureml.pipeline.steps import ParallelRunStep parallel_run_step = ParallelRunStep( name="many-models-training", parallel_run_config=parallel_run_config, inputs=[dataset_input], output=output_dir, allow_reuse=False, arguments=['--target_column', 'Quantity', '--timestamp_column', 'WeekStarting', '--timeseries_id_columns', 'Store', 'Brand', '--drop_columns', 'Revenue', 'Store', 'Brand', '--model_type', 'lr', '--test_size', 20] ) ``` ## 5.0 Run the pipeline Next, we submit our pipeline to run. The run will train models for each dataset using a train set, compute accuracy metrics for the fits using a test set, and finally re-train models with all the data available. With 10 files, this should only take a few minutes but with the full dataset this can take over an hour. ``` from azureml.pipeline.core import Pipeline pipeline = Pipeline(workspace=ws, steps=[parallel_run_step]) run = experiment.submit(pipeline) #Wait for the run to complete run.wait_for_completion(show_output=False, raise_on_error=True) ``` ## 6.0 View results of training pipeline The dataframe we return in the run method of train.py is outputted to *parallel_run_step.txt*. To see the results of our training pipeline, we'll download that file, read in the data to a DataFrame, and then visualize the results, including the in-sample metrics. The run submitted to the Azure Machine Learning Training Compute Cluster may take a while. The output is not generated until the run is complete. 
You can monitor the status of the run in Azure Portal https://ml.azure.com ### 6.1 Download parallel_run_step.txt locally ``` import os def download_results(run, target_dir=None, step_name='many-models-training', output_name='training_output'): stitch_run = run.find_step_run(step_name)[0] port_data = stitch_run.get_output_data(output_name) port_data.download(target_dir, show_progress=True) return os.path.join(target_dir, 'azureml', stitch_run.id, output_name) file_path = download_results(run, 'output') file_path ``` ### 6.2 Convert the file to a dataframe ``` import pandas as pd df = pd.read_csv(file_path + '/parallel_run_step.txt', sep=" ", header=None) df.columns = ['Store', 'Brand', 'Model', 'File Name', 'ModelName', 'StartTime', 'EndTime', 'Duration', 'MSE', 'RMSE', 'MAE', 'MAPE', 'Index', 'Number of Models', 'Status'] df['StartTime'] = pd.to_datetime(df['StartTime']) df['EndTime'] = pd.to_datetime(df['EndTime']) df['Duration'] = df['EndTime'] - df['StartTime'] df.head() ``` ### 6.3 Review Results ``` total = df['EndTime'].max() - df['StartTime'].min() print('Number of Models: ' + str(len(df))) print('Total Duration: ' + str(total)[6:]) print('Average MAPE: ' + str(round(df['MAPE'].mean(), 5))) print('Average MSE: ' + str(round(df['MSE'].mean(), 5))) print('Average RMSE: ' + str(round(df['RMSE'].mean(), 5))) print('Average MAE: '+ str(round(df['MAE'].mean(), 5))) print('Maximum Duration: '+ str(df['Duration'].max())[7:]) print('Minimum Duration: ' + str(df['Duration'].min())[7:]) print('Average Duration: ' + str(df['Duration'].mean())[7:]) ``` ### 6.4 Visualize Performance across models Here, we produce some charts from the errors metrics calculated during the run using a subset put aside for testing. First, we examine the distribution of mean absolute percentage error (MAPE) over all the models: ``` import seaborn as sns import matplotlib.pyplot as plt fig = sns.boxplot(y='MAPE', data=df) fig.set_title('MAPE across all models') ``` Next, we can break that down by Brand or Store to see variations in error across our models ``` fig = sns.boxplot(x='Brand', y='MAPE', data=df) fig.set_title('MAPE by Brand') ``` We can also look at how long models for different brands took to train ``` brand = df.groupby('Brand') brand = brand['Duration'].sum() brand = pd.DataFrame(brand) brand['time_in_seconds'] = [time.total_seconds() for time in brand['Duration']] brand.drop(columns=['Duration']).plot(kind='bar') plt.xlabel('Brand') plt.ylabel('Seconds') plt.title('Total Training Time by Brand') plt.show() ``` ## 7.0 Publish and schedule the pipeline (Optional) ### 7.1 Publish the pipeline Once you have a pipeline you're happy with, you can publish a pipeline so you can call it programatically later on. See this [tutorial](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-create-your-first-pipeline#publish-a-pipeline) for additional information on publishing and calling pipelines. ``` # published_pipeline = pipeline.publish(name = 'train_many_models', # description = 'train many models', # version = '1', # continue_on_step_failure = False) ``` ### 7.2 Schedule the pipeline You can also [schedule the pipeline](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-schedule-pipelines) to run on a time-based or change-based schedule. This could be used to automatically retrain models every month or based on another trigger such as data drift. 
``` # from azureml.pipeline.core import Schedule, ScheduleRecurrence # training_pipeline_id = published_pipeline.id # recurrence = ScheduleRecurrence(frequency="Month", interval=1, start_time="2020-01-01T09:00:00") # recurring_schedule = Schedule.create(ws, name="training_pipeline_recurring_schedule", # description="Schedule Training Pipeline to run on the first day of every month", # pipeline_id=training_pipeline_id, # experiment_name=experiment.name, # recurrence=recurrence) ``` ## Next Steps Now that you've trained and scored the models, move on to [03_CustomScript_Forecasting_Pipeline.ipynb](03_CustomScript_Forecasting_Pipeline.ipynb) to make forecasts with your models.
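As a follow-up to section 7.1: once a pipeline has been published, it can be triggered later from any Python session without rebuilding the steps. The snippet below is a rough sketch, kept commented out like the other optional cells in section 7; the pipeline id is a placeholder you would replace with the id printed by `pipeline.publish()`, and the experiment name is just an example.

```
# from azureml.core import Workspace
# from azureml.pipeline.core import PublishedPipeline

# ws = Workspace.from_config()
# published_pipeline = PublishedPipeline.get(ws, id='<your-published-pipeline-id>')

# # Submit the published pipeline to launch a new run of the training steps
# run = published_pipeline.submit(ws, 'oj_training_pipeline')
# run.wait_for_completion(show_output=False)
```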
# Repertoire classification subsampling

When training a classifier to assign repertoires to the subject from which they were obtained, we need a set of subsampled sequences. The sequences have been condensed to just the V- and J-gene assignments and the CDR3 length (VJ-CDR3len). Subsample sizes range from 10 to 10,000 sequences per biological replicate.

The [`abutils`](https://www.github.com/briney/abutils) Python package is required for this notebook, and can be installed by running `pip install abutils`.

*NOTE: this notebook requires the use of the Unix command line tool `shuf`. Thus, it requires a Unix-based operating system to run correctly (MacOS and most flavors of Linux should be fine). Running this notebook on Windows 10 may be possible using the [Windows Subsystem for Linux](https://docs.microsoft.com/en-us/windows/wsl/about) but we have not tested this.*

```
from __future__ import print_function, division

from collections import Counter
import os
import subprocess as sp
import sys
import tempfile

from abutils.utils.pipeline import list_files, make_dir
```

## Subjects, subsample sizes, and directories

The `input_dir` should contain deduplicated clonotype sequences. The datafiles are too large to be included in the Github repository, but may be downloaded [**here**](http://burtonlab.s3.amazonaws.com/GRP_github_data/techrep-merged_vj-cdr3len_no-header.tar.gz). If downloading the data (which will be downloaded as a compressed archive), decompress the archive in the `data` directory (in the same parent directory as this notebook) and you should be ready to go. If you want to store the downloaded data in some other location, adjust the `input_dir` path below as needed.

By default, subsample sizes increase by 10 from 10 to 100, by 100 from 100 to 1,000, and by 1,000 from 1,000 to 10,000.

```
with open('./data/subjects.txt') as f:
    subjects = sorted(f.read().split())

subsample_sizes = list(range(10, 100, 10)) + list(range(100, 1000, 100)) + list(range(1000, 11000, 1000))

input_dir = './data/techrep-merged_vj-cdr3len_no-header/'
subsample_dir = './data/repertoire_classification/user-created_subsamples_vj-cdr3len'
make_dir(subsample_dir)
```

## Subsampling

```
def subsample(infile, outfile, n_seqs, iterations):
    # Truncate any existing output file
    with open(outfile, 'w') as f:
        f.write('')
    # shuf randomly selects n_seqs lines from the input file;
    # universal_newlines=True returns text rather than bytes
    shuf_cmd = 'shuf -n {} {}'.format(n_seqs, infile)
    p = sp.Popen(shuf_cmd, stdout=sp.PIPE, stderr=sp.PIPE, shell=True, universal_newlines=True)
    stdout, stderr = p.communicate()
    with open(outfile, 'a') as f:
        for iteration in range(iterations):
            # Each output line is a comma-separated list of 'VJ-CDR3len:count' entries
            seqs = ['_'.join(s.strip().split()) for s in stdout.strip().split('\n') if s.strip()]
            counts = Counter(seqs)
            count_strings = []
            for k, v in counts.items():
                count_strings.append('{}:{}'.format(k, v))
            f.write(','.join(count_strings) + '\n')


for subject in subjects:
    print(subject)
    files = list_files(os.path.join(input_dir, subject))
    for file_ in files:
        for subsample_size in subsample_sizes:
            num = os.path.basename(file_).split('_')[0]
            ofile = os.path.join(subsample_dir, '{}_{}-{}'.format(subject, subsample_size, num))
            subsample(file_, ofile, subsample_size, 50)
```
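As a quick illustration of how these files might be consumed downstream (this is an assumption about the next step, not part of the original pipeline), each line of a subsample file can be parsed back into a dictionary of VJ-CDR3len counts, which is the kind of sparse feature representation a scikit-learn classifier can ingest via `DictVectorizer`. The file name in the commented example is a placeholder following the naming pattern used in the subsampling loop above.

```
from sklearn.feature_extraction import DictVectorizer

def read_subsample_file(path):
    """Parse one subsample file into a list of {feature: count} dicts (one per line)."""
    rows = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            counts = {}
            for entry in line.split(','):
                feature, count = entry.rsplit(':', 1)
                counts[feature] = int(count)
            rows.append(counts)
    return rows

# Example usage:
# rows = read_subsample_file(os.path.join(subsample_dir, '<subject>_100-1'))
# X = DictVectorizer(sparse=True).fit_transform(rows)
```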
# Scenario Analysis: Pop Up Shop

![](https://upload.wikimedia.org/wikipedia/commons/thumb/c/c5/Weich_Couture_Alpaca%2C_D%C3%BCsseldorf%2C_December_2020_%2809%29.jpg/300px-Weich_Couture_Alpaca%2C_D%C3%BCsseldorf%2C_December_2020_%2809%29.jpg)

Kürschner (talk) 17:51, 1 December 2020 (UTC), CC0, via Wikimedia Commons

```
# install Pyomo and solvers for Google Colab
import sys
if "google.colab" in sys.modules:
    !wget -N -q https://raw.githubusercontent.com/jckantor/MO-book/main/tools/install_on_colab.py
    %run install_on_colab.py
```

## The problem

There is an opportunity to operate a pop-up shop to sell a unique commemorative item for events held at a famous location. The items cost 12 &euro; each and will sell for 40 &euro;. Unsold items can be returned to the supplier at a value of only 2 &euro; due to their commemorative nature.

| Parameter | Symbol | Value |
| :---: | :---: | :---: |
| sales price | $r$ | 40 &euro; |
| unit cost | $c$ | 12 &euro; |
| salvage value | $w$ | 2 &euro; |

Profit will increase with sales. Demand for these items, however, will be high only if the weather is good. Historical data suggests the following scenarios.

| Scenario ($s$) | Demand ($d_s$) | Probability ($p_s$) |
| :---: | :-----: | :----------: |
| Sunny Skies | 650 | 0.10 |
| Good Weather | 400 | 0.60 |
| Poor Weather | 200 | 0.30 |

The problem is to determine how many items to order for the pop-up shop. The dilemma is that the weather won't be known until after the order is placed. Ordering enough items to meet demand for a good weather day results in a financial penalty on returned goods if the weather is poor. But ordering just enough items to satisfy demand on a poor weather day leaves "money on the table" if the weather is good. How many items should be ordered for sale?

## Expected value for the mean scenario (EVM)

A naive solution to this problem is to place an order equal to the expected demand. The expected demand is given by

$$
\begin{align*}
\mathbb E[D] & = \sum_{s\in S} p_s d_s
\end{align*}
$$

Choosing an order size $x = \mathbb E[D]$ results in an expected profit we call the **expected value of the mean scenario (EVM)**.

Variable $y_s$ is the actual number of items sold if scenario $s$ should occur. The number sold is the lesser of the demand $d_s$ and the order size $x$.

$$
\begin{align*}
y_s & = \min(d_s, x) & \forall s \in S
\end{align*}
$$

Any unsold inventory $x - y_s$ remaining after the event will be sold at the salvage price $w$. Taking into account the revenue from sales $r y_s$, the salvage value of the unsold inventory $w(x - y_s)$, and the cost of the order $c x$, the profit $f_s$ for scenario $s$ is given by

$$
\begin{align*}
f_s & = r y_s + w (x - y_s) - c x & \forall s \in S
\end{align*}
$$

The average or expected profit is given by

$$
\begin{align*}
\text{EVM} = \mathbb E[f] & = \sum_{s\in S} p_s f_s
\end{align*}
$$

These calculations can be executed using operations on the pandas dataframe. Let's begin by calculating the expected demand. Below we create a pandas DataFrame object to store the scenario data.
```
import numpy as np
import pandas as pd

# price information
r = 40
c = 12
w = 2

# scenario information
scenarios = {
    "sunny skies" : {"probability": 0.10, "demand": 650},
    "good weather": {"probability": 0.60, "demand": 400},
    "poor weather": {"probability": 0.30, "demand": 200},
}

df = pd.DataFrame.from_dict(scenarios).T
display(df)

expected_demand = sum(df["probability"] * df["demand"])
print(f"Expected demand = {expected_demand}")
```

Subsequent calculations can be done directly with the pandas dataframe holding the scenario data.

```
df["order"] = expected_demand
df["sold"] = df[["demand", "order"]].min(axis=1)
df["salvage"] = df["order"] - df["sold"]
df["profit"] = r * df["sold"] + w * df["salvage"] - c * df["order"]

EVM = sum(df["probability"] * df["profit"])

print(f"Mean demand = {expected_demand}")
print(f"Expected value of the mean demand (EVM) = {EVM}")
display(df)
```

## Expected value of the stochastic solution (EVSS)

The optimization problem is to find the order size $x$ that maximizes expected profit subject to operational constraints on the decision variables. The variables $x$ and $y_s$ are non-negative integers, while $f_s$ is a real number that can take either positive or negative values. The number of goods sold in scenario $s$ can be no larger than the order size $x$ or the customer demand $d_s$. The problem to be solved is

$$
\begin{align*}
\text{EV} = & \max_{x, y_s} \mathbb E[F] = \sum_{s\in S} p_s f_s \\
\text{subject to:} \\
f_s & = r y_s + w(x - y_s) - c x & \forall s \in S\\
y_s & \leq x & \forall s \in S \\
y_s & \leq d_s & \forall s \in S
\end{align*}
$$

where $S$ is the set of all scenarios under consideration.

```
import pyomo.environ as pyo
import pandas as pd

# price information
r = 40
c = 12
w = 2

# scenario information
scenarios = {
    "sunny skies" : {"demand": 650, "probability": 0.1},
    "good weather": {"demand": 400, "probability": 0.6},
    "poor weather": {"demand": 200, "probability": 0.3},
}

# create model instance
m = pyo.ConcreteModel('Pop-up Shop')

# set of scenarios
m.S = pyo.Set(initialize=scenarios.keys())

# decision variables
m.x = pyo.Var(domain=pyo.NonNegativeIntegers)
m.y = pyo.Var(m.S, domain=pyo.NonNegativeIntegers)
m.f = pyo.Var(m.S, domain=pyo.Reals)

# objective
@m.Objective(sense=pyo.maximize)
def EV(m):
    return sum([scenarios[s]["probability"]*m.f[s] for s in m.S])

# constraints
@m.Constraint(m.S)
def profit(m, s):
    return m.f[s] == r*m.y[s] + w*(m.x - m.y[s]) - c*m.x

@m.Constraint(m.S)
def sales_less_than_order(m, s):
    return m.y[s] <= m.x

@m.Constraint(m.S)
def sales_less_than_demand(m, s):
    return m.y[s] <= scenarios[s]["demand"]

# solve
solver = pyo.SolverFactory('glpk')
results = solver.solve(m)

# display solution using Pandas
print("Solver Termination Condition:", results.solver.termination_condition)
print("Expected Profit:", m.EV())
print()
for s in m.S:
    scenarios[s]["order"] = m.x()
    scenarios[s]["sold"] = m.y[s]()
    scenarios[s]["salvage"] = m.x() - m.y[s]()
    scenarios[s]["profit"] = m.f[s]()

df = pd.DataFrame.from_dict(scenarios).T
display(df)
```

Optimizing over all scenarios provides an expected profit of 8,920 &euro;, an increase of 581 &euro; over the base case of simply ordering the expected number of items sold. The new solution places a larger order. In poor weather conditions there will be more returns and lower profit, which is more than compensated by the increased profits in good weather conditions.

The additional value that results from solving this planning problem is called the **Value of the Stochastic Solution (VSS)**.
The value of the stochastic solution is the additional profit compared to ordering to meet expected demand. In this case,

$$\text{VSS} = \text{EV} - \text{EVM} = 8,920 - 8,339 = 581$$

## Expected value with perfect information (EVPI)

Maximizing expected profit requires that the size of the order be decided before knowing what scenario will unfold. The decision for $x$ has to be made "here and now" with probabilistic information about the future, but without specific information on which future will actually transpire.

Nevertheless, we can perform the hypothetical calculation of what profit would be realized if we could know the future. We are still subject to the variability of the weather; what is different is that we know what the weather will be at the time the order is placed. The resulting value for the expected profit is called the **Expected Value of Perfect Information (EVPI)**. The difference EVPI - EV is the extra profit due to having perfect knowledge of the future.

To compute the expected profit with perfect information, we let the order variable $x$ be indexed by the subsequent scenario that will unfold. Given decision variable $x_s$, the model for EVPI becomes

$$
\begin{align*}
\text{EVPI} = & \max_{x_s, y_s} \mathbb E[f] = \sum_{s\in S} p_s f_s \\
\text{subject to:} \\
f_s & = r y_s + w(x_s - y_s) - c x_s & \forall s \in S\\
y_s & \leq x_s & \forall s \in S \\
y_s & \leq d_s & \forall s \in S
\end{align*}
$$

The following implementation is a variation of the prior cell.

```
import pyomo.environ as pyo
import pandas as pd

# price information
r = 40
c = 12
w = 2

# scenario information
scenarios = {
    "sunny skies" : {"demand": 650, "probability": 0.1},
    "good weather": {"demand": 400, "probability": 0.6},
    "poor weather": {"demand": 200, "probability": 0.3},
}

# create model instance
m = pyo.ConcreteModel('Pop-up Shop')

# set of scenarios
m.S = pyo.Set(initialize=scenarios.keys())

# decision variables
m.x = pyo.Var(m.S, domain=pyo.NonNegativeIntegers)
m.y = pyo.Var(m.S, domain=pyo.NonNegativeIntegers)
m.f = pyo.Var(m.S, domain=pyo.Reals)

# objective
@m.Objective(sense=pyo.maximize)
def EV(m):
    return sum([scenarios[s]["probability"]*m.f[s] for s in m.S])

# constraints
@m.Constraint(m.S)
def profit(m, s):
    return m.f[s] == r*m.y[s] + w*(m.x[s] - m.y[s]) - c*m.x[s]

@m.Constraint(m.S)
def sales_less_than_order(m, s):
    return m.y[s] <= m.x[s]

@m.Constraint(m.S)
def sales_less_than_demand(m, s):
    return m.y[s] <= scenarios[s]["demand"]

# solve
solver = pyo.SolverFactory('glpk')
results = solver.solve(m)

# display solution using Pandas
print("Solver Termination Condition:", results.solver.termination_condition)
print("Expected Profit:", m.EV())
print()
for s in m.S:
    scenarios[s]["order"] = m.x[s]()
    scenarios[s]["sold"] = m.y[s]()
    scenarios[s]["salvage"] = m.x[s]() - m.y[s]()
    scenarios[s]["profit"] = m.f[s]()

df = pd.DataFrame.from_dict(scenarios).T
display(df)
```

## Summary

To summarize, we have computed three different solutions to the problem of order size:

* The expected value of the mean solution (EVM) is the expected profit resulting from ordering the number of items expected to be sold under all scenarios.

* The expected value of the stochastic solution (EVSS) is the expected profit found by solving a two-stage optimization problem where the order size was the "here and now" decision without specific knowledge of which future scenario would transpire.
* The expected value of perfect information (EVPI) is the result of a hypothetical case where knowledge of the future scenario was somehow available when the order had to be placed.

For this example we found

| Solution | Value (&euro;) |
| :------ | ----: |
| Expected Value of the Mean Solution (EVM) | 8,339.0 |
| Expected Value of the Stochastic Solution (EVSS) | 8,920.0 |
| Expected Value of Perfect Information (EVPI) | 10,220.0 |

These results verify our expectation that

$$
\begin{align*}
EVM \leq EVSS \leq EVPI
\end{align*}
$$

The value of the stochastic solution is

$$
\begin{align*}
VSS = EVSS - EVM = 581
\end{align*}
$$

The value of perfect information is

$$
\begin{align*}
VPI = EVPI - EVSS = 1,300
\end{align*}
$$

As one might expect, there is a cost that results from lack of knowledge about an uncertain future.
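As a quick sanity check (not part of the original notebook), the reported values can be plugged directly into these definitions:

```
# verify the reported relationships between EVM, EVSS and EVPI (values in euro)
EVM, EVSS, EVPI = 8339.0, 8920.0, 10220.0

VSS = EVSS - EVM    # value of the stochastic solution
VPI = EVPI - EVSS   # value of perfect information

print(f"VSS = {VSS:.0f}, VPI = {VPI:.0f}")
assert EVM <= EVSS <= EVPI
```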
``` !wget --no-check-certificate \ https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip \ -O cats_and_dogs_filtered.zip ! unzip cats_and_dogs_filtered.zip import keras,os from keras.models import Sequential from keras.layers import Dense, Conv2D, MaxPool2D , Flatten from keras.preprocessing.image import ImageDataGenerator import numpy as np trdata = ImageDataGenerator() traindata = trdata.flow_from_directory(directory="cats_and_dogs_filtered/train",target_size=(224,224)) tsdata = ImageDataGenerator() testdata = tsdata.flow_from_directory(directory="cats_and_dogs_filtered/validation", target_size=(224,224)) model = Sequential() model.add(Conv2D(input_shape=(224,224,3),filters=64,kernel_size=(3,3),padding="same", activation="relu")) model.add(Conv2D(filters=64,kernel_size=(3,3),padding="same", activation="relu")) model.add(MaxPool2D(pool_size=(2,2),strides=(2,2))) model.add(Conv2D(filters=128, kernel_size=(3,3), padding="same", activation="relu")) model.add(Conv2D(filters=128, kernel_size=(3,3), padding="same", activation="relu")) model.add(MaxPool2D(pool_size=(2,2),strides=(2,2))) model.add(Conv2D(filters=256, kernel_size=(3,3), padding="same", activation="relu")) model.add(Conv2D(filters=256, kernel_size=(3,3), padding="same", activation="relu")) model.add(Conv2D(filters=256, kernel_size=(3,3), padding="same", activation="relu")) model.add(MaxPool2D(pool_size=(2,2),strides=(2,2))) model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu")) model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu")) model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu")) model.add(MaxPool2D(pool_size=(2,2),strides=(2,2))) model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu")) model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu")) model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu")) model.add(MaxPool2D(pool_size=(2,2),strides=(2,2))) model.add(Flatten()) model.add(Dense(units=4096,activation="relu")) model.add(Dense(units=4096,activation="relu")) model.add(Dense(units=2, activation="softmax")) from keras.optimizers import Adam opt = Adam(lr=0.001) model.compile(optimizer=opt, loss=keras.losses.categorical_crossentropy, metrics=['accuracy']) model.summary() from keras.callbacks import ModelCheckpoint, EarlyStopping checkpoint = ModelCheckpoint("vgg16_1.h5", monitor='val_acc', verbose=1, save_best_only=True, save_weights_only=False, mode='auto', period=1) early = EarlyStopping(monitor='val_acc', min_delta=0, patience=20, verbose=1, mode='auto') hist = model.fit_generator(steps_per_epoch=100,generator=traindata, validation_data= testdata, validation_steps=10,epochs=100,callbacks=[checkpoint,early]) import matplotlib.pyplot as plt plt.plot(hist.history["acc"]) plt.plot(hist.history['val_acc']) plt.plot(hist.history['loss']) plt.plot(hist.history['val_loss']) plt.title("model accuracy") plt.ylabel("Accuracy") plt.xlabel("Epoch") plt.legend(["Accuracy","Validation Accuracy","loss","Validation Loss"]) plt.show() from keras.preprocessing import image img = image.load_img("Pomeranian_01.jpeg",target_size=(224,224)) img = np.asarray(img) plt.imshow(img) img = np.expand_dims(img, axis=0) from keras.models import load_model saved_model = load_model("vgg16_1.h5") output = saved_model.predict(img) if output[0][0] > output[0][1]: print("cat") else: print('dog') ```
# Classification with Neural Network for Yoga poses detection ## Import Dependencies ``` import numpy as np import pandas as pd import os import matplotlib.pyplot as plt import tensorflow as tf from tensorflow.keras.utils import to_categorical from tensorflow.keras.preprocessing.image import load_img, img_to_array from tensorflow.python.keras.preprocessing.image import ImageDataGenerator from sklearn.metrics import classification_report, log_loss, accuracy_score from sklearn.model_selection import train_test_split ``` ## Getting the data (images) and labels ``` # Data path train_dir = 'pose_recognition_data/dataset' # Getting the folders name to be able to labelize the data Name=[] for file in os.listdir(train_dir): Name+=[file] print(Name) print(len(Name)) N=[] for i in range(len(Name)): N+=[i] normal_mapping=dict(zip(Name,N)) reverse_mapping=dict(zip(N,Name)) def mapper(value): return reverse_mapping[value] dataset=[] testset=[] count=0 for file in os.listdir(train_dir): t=0 path=os.path.join(train_dir,file) for im in os.listdir(path): image=load_img(os.path.join(path,im), grayscale=False, color_mode='rgb', target_size=(40,40)) image=img_to_array(image) image=image/255.0 if t<60: dataset+=[[image,count]] else: testset+=[[image,count]] t+=1 count=count+1 data,labels0=zip(*dataset) test,testlabels0=zip(*testset) labels1=to_categorical(labels0) labels=np.array(labels1) # Transforming the into Numerical Data data=np.array(data) test=np.array(test) trainx,testx,trainy,testy=train_test_split(data,labels,test_size=0.2,random_state=44) print(trainx.shape) print(testx.shape) print(trainy.shape) print(testy.shape) # Data augmentation datagen = ImageDataGenerator(horizontal_flip=True,vertical_flip=True,rotation_range=20,zoom_range=0.2, width_shift_range=0.2,height_shift_range=0.2,shear_range=0.1,fill_mode="nearest") # Loading the pretrained model , here DenseNet201 pretrained_model3 = tf.keras.applications.DenseNet201(input_shape=(40,40,3),include_top=False,weights='imagenet',pooling='avg') pretrained_model3.trainable = False inputs3 = pretrained_model3.input x3 = tf.keras.layers.Dense(128, activation='relu')(pretrained_model3.output) outputs3 = tf.keras.layers.Dense(107, activation='softmax')(x3) model = tf.keras.Model(inputs=inputs3, outputs=outputs3) model.compile(optimizer='adam',loss='categorical_crossentropy',metrics=['accuracy']) his=model.fit(datagen.flow(trainx,trainy,batch_size=32),validation_data=(testx,testy),epochs=50) y_pred=model.predict(testx) pred=np.argmax(y_pred,axis=1) ground = np.argmax(testy,axis=1) print(classification_report(ground,pred)) #Checking accuracy of our model get_acc = his.history['accuracy'] value_acc = his.history['val_accuracy'] get_loss = his.history['loss'] validation_loss = his.history['val_loss'] epochs = range(len(get_acc)) plt.plot(epochs, get_acc, 'r', label='Accuracy of Training data') plt.plot(epochs, value_acc, 'b', label='Accuracy of Validation data') plt.title('Training vs validation accuracy') plt.legend(loc=0) plt.figure() plt.show() # Checking the loss of data epochs = range(len(get_loss)) plt.plot(epochs, get_loss, 'r', label='Loss of Training data') plt.plot(epochs, validation_loss, 'b', label='Loss of Validation data') plt.title('Training vs validation loss') plt.legend(loc=0) plt.figure() plt.show() load_img("pose_recognition_data/dataset/adho mukha svanasana/95. downward-facing-dog-pose.png",target_size=(40,40)) image = load_img("pose_recognition_data/dataset/adho mukha svanasana/95. 
downward-facing-dog-pose.png",target_size=(40,40)) image=img_to_array(image) image=image/255.0 prediction_image=np.array(image) prediction_image= np.expand_dims(image, axis=0) prediction=model.predict(prediction_image) value=np.argmax(prediction) move_name=mapper(value) print("Prediction is {}.".format(move_name)) print(test.shape) pred2=model.predict(test) print(pred2.shape) PRED=[] for item in pred2: value2=np.argmax(item) PRED+=[value2] ANS=testlabels0 accuracy=accuracy_score(ANS,PRED) print(accuracy) ```
## _*H2 ground state energy computation using Iterative QPE*_ This notebook demonstrates using Qiskit Chemistry to plot graphs of the ground state energy of the Hydrogen (H2) molecule over a range of inter-atomic distances using IQPE (Iterative Quantum Phase Estimation) algorithm. It is compared to the same energies as computed by the ExactEigensolver This notebook populates a dictionary, that is a progammatic representation of an input file, in order to drive the qiskit_chemistry stack. Such a dictionary can be manipulated programmatically and this is indeed the case here where we alter the molecule supplied to the driver in each loop. This notebook has been written to use the PYSCF chemistry driver. See the PYSCF chemistry driver readme if you need to install the external PySCF library that this driver requires. ``` import numpy as np import pylab from qiskit import LegacySimulators from qiskit_chemistry import QiskitChemistry import time # Input dictionary to configure Qiskit Chemistry for the chemistry problem. qiskit_chemistry_dict = { 'driver': {'name': 'PYSCF'}, 'PYSCF': {'atom': '', 'basis': 'sto3g'}, 'operator': {'name': 'hamiltonian', 'transformation': 'full', 'qubit_mapping': 'parity'}, 'algorithm': {'name': ''}, 'initial_state': {'name': 'HartreeFock'}, } molecule = 'H .0 .0 -{0}; H .0 .0 {0}' algorithms = [ { 'name': 'IQPE', 'num_iterations': 16, 'num_time_slices': 3000, 'expansion_mode': 'trotter', 'expansion_order': 1, }, { 'name': 'ExactEigensolver' } ] backends = [ LegacySimulators.get_backend('qasm_simulator'), None ] start = 0.5 # Start distance by = 0.5 # How much to increase distance by steps = 20 # Number of steps to increase by energies = np.empty([len(algorithms), steps+1]) hf_energies = np.empty(steps+1) distances = np.empty(steps+1) import concurrent.futures import multiprocessing as mp import copy def subrountine(i, qiskit_chemistry_dict, d, backend, algorithm): solver = QiskitChemistry() qiskit_chemistry_dict['PYSCF']['atom'] = molecule.format(d/2) qiskit_chemistry_dict['algorithm'] = algorithm result = solver.run(qiskit_chemistry_dict, backend=backend) return i, d, result['energy'], result['hf_energy'] start_time = time.time() max_workers = max(4, mp.cpu_count()) with concurrent.futures.ProcessPoolExecutor(max_workers=max_workers) as executor: futures = [] for j in range(len(algorithms)): algorithm = algorithms[j] backend = backends[j] for i in range(steps+1): d = start + i*by/steps future = executor.submit( subrountine, i, copy.deepcopy(qiskit_chemistry_dict), d, backend, algorithm ) futures.append(future) for future in concurrent.futures.as_completed(futures): i, d, energy, hf_energy = future.result() energies[j][i] = energy hf_energies[i] = hf_energy distances[i] = d print(' --- complete') print('Distances: ', distances) print('Energies:', energies) print('Hartree-Fock energies:', hf_energies) print("--- %s seconds ---" % (time.time() - start_time)) pylab.plot(distances, hf_energies, label='Hartree-Fock') for j in range(len(algorithms)): pylab.plot(distances, energies[j], label=algorithms[j]['name']) pylab.xlabel('Interatomic distance') pylab.ylabel('Energy') pylab.title('H2 Ground State Energy') pylab.legend(loc='upper right') pylab.show() pylab.plot(distances, np.subtract(hf_energies, energies[1]), label='Hartree-Fock') pylab.plot(distances, np.subtract(energies[0], energies[1]), label='IQPE') pylab.xlabel('Interatomic distance') pylab.ylabel('Energy') pylab.title('Energy difference from ExactEigensolver') pylab.legend(loc='upper right') pylab.show() ```
# ML Pipeline Preparation Follow the instructions below to help you create your ML pipeline. ### 1. Import libraries and load data from database. - Import Python libraries - Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html) - Define feature and target variables X and Y ``` # import necessary libraries import pandas as pd import numpy as np import os import pickle import nltk import re from sqlalchemy import create_engine import sqlite3 from nltk.tokenize import word_tokenize, RegexpTokenizer from nltk.stem import WordNetLemmatizer from sklearn.metrics import confusion_matrix from sklearn.model_selection import train_test_split from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer from sklearn.multioutput import MultiOutputClassifier from sklearn.pipeline import Pipeline, FeatureUnion from sklearn.ensemble import RandomForestClassifier from sklearn.model_selection import GridSearchCV from sklearn.metrics import classification_report from sklearn.naive_bayes import MultinomialNB from sklearn.tree import DecisionTreeClassifier from sklearn.base import BaseEstimator, TransformerMixin from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier,AdaBoostClassifier from sklearn.pipeline import Pipeline, FeatureUnion from sklearn.model_selection import GridSearchCV from sklearn.metrics import make_scorer, accuracy_score, f1_score, fbeta_score, classification_report from sklearn.metrics import precision_recall_fscore_support from scipy.stats import hmean from scipy.stats.mstats import gmean from nltk.corpus import stopwords nltk.download(['punkt', 'wordnet', 'averaged_perceptron_tagger', 'stopwords']) import matplotlib.pyplot as plt %matplotlib inline # load data from database engine = create_engine('sqlite:///InsertDatabaseName.db') df = pd.read_sql("SELECT * FROM InsertTableName", engine) df.head() # View types of unque 'genre' attribute genre_types = df.genre.value_counts() genre_types # check for attributes with missing values/elements df.isnull().mean().head() # drops attributes with missing values df.dropna() df.head() # load data from database with 'X' as attributes for message column X = df["message"] # load data from database with 'Y' attributes for the last 36 columns Y = df.drop(['id', 'message', 'original', 'genre'], axis = 1) ``` ### 2. Write a tokenization function to process your text data ``` # Proprocess text by removing unwanted properties def tokenize(text): ''' input: text: input text data containing attributes output: clean_tokens: cleaned text without unwanted texts ''' url_regex = 'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+' detected_urls = re.findall(url_regex, text) for url in detected_urls: text = text.replace(url, "urlplaceholder") # take out all punctuation while tokenizing tokenizer = RegexpTokenizer(r'\w+') tokens = tokenizer.tokenize(text) # lemmatize as shown in the lesson lemmatizer = WordNetLemmatizer() clean_tokens = [] for tok in tokens: clean_tok = lemmatizer.lemmatize(tok).lower().strip() clean_tokens.append(clean_tok) return clean_tokens ``` ### 3. Build a machine learning pipeline This machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. 
You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables. ``` pipeline = Pipeline([ ('vect', CountVectorizer(tokenizer=tokenize)), ('tfidf', TfidfTransformer()), ('clf', MultiOutputClassifier(RandomForestClassifier())), ]) # Visualize model parameters pipeline.get_params() ``` ### 4. Train pipeline - Split data into train and test sets - Train pipeline ``` # use sklearn split function to split dataset into train and 20% test sets X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size = 0.2) # Train pipeline using RandomForest Classifier algorithm pipeline.fit(X_train, y_train) ``` ### 5. Test your model Report the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's classification_report on each. ``` # Output result metrics of trained RandomForest Classifier algorithm def evaluate_model(model, X_test, y_test): ''' Input: model: RandomForest Classifier trained model X_test: Test training features Y_test: Test training response variable Output: None: Display model precision, recall, f1-score, support ''' y_pred = model.predict(X_test) for item, col in enumerate(y_test): print(col) print(classification_report(y_test[col], y_pred[:, item])) # classification_report to display model precision, recall, f1-score, support evaluate_model(pipeline, X_test, y_test) ``` ### 6. Improve your model Use grid search to find better parameters. ``` parameters = {'clf__estimator__max_depth': [10, 50, None], 'clf__estimator__min_samples_leaf':[2, 5, 10]} cv = GridSearchCV(pipeline, parameters) ``` ### 7. Test your model Show the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio! ``` # Train pipeline using the improved model cv.fit(X_train, y_train) # # classification_report to display model precision, recall, f1-score, support evaluate_model(cv, X_test, y_test) cv.best_estimator_ ``` ### 8. Try improving your model further. Here are a few ideas: * try other machine learning algorithms * add other features besides the TF-IDF ``` # Improve model using DecisionTree Classifier new_pipeline = Pipeline([ ('vect', CountVectorizer(tokenizer=tokenize)), ('tfidf', TfidfTransformer()), ('clf', MultiOutputClassifier(DecisionTreeClassifier())) ]) # Train improved model new_pipeline.fit(X_train, y_train) # Run result metric score display function evaluate_model(new_pipeline, X_test, y_test) ``` ### 9. Export your model as a pickle file ``` # save a copy file of the the trained model to disk trained_model_file = 'trained_model.sav' pickle.dump(cv, open(trained_model_file, 'wb')) ``` ### 10. Use this notebook to complete `train.py` Use the template file attached in the Resources folder to write a script that runs the steps above to create a database and export a model based on a new dataset specified by the user.
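A possible skeleton for that script is sketched below. The file layout, function names, and command-line arguments are illustrative assumptions rather than the project template itself, and the sketch presumes the `tokenize` and `evaluate_model` functions defined earlier in this notebook are copied into the script.

```
# train.py (sketch) -- builds, evaluates, and pickles the classification pipeline
import sys
import pickle

import pandas as pd
from sqlalchemy import create_engine
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.multioutput import MultiOutputClassifier
from sklearn.ensemble import RandomForestClassifier


def load_data(database_filepath):
    # read the cleaned messages table written by the ETL step
    engine = create_engine('sqlite:///{}'.format(database_filepath))
    df = pd.read_sql("SELECT * FROM InsertTableName", engine)
    X = df["message"]
    Y = df.drop(['id', 'message', 'original', 'genre'], axis=1)
    return X, Y


def build_model():
    # same pipeline as above, wrapped in a small grid search
    pipeline = Pipeline([
        ('vect', CountVectorizer(tokenizer=tokenize)),
        ('tfidf', TfidfTransformer()),
        ('clf', MultiOutputClassifier(RandomForestClassifier())),
    ])
    parameters = {'clf__estimator__min_samples_leaf': [2, 5, 10]}
    return GridSearchCV(pipeline, parameters)


def main():
    if len(sys.argv) != 3:
        print("Usage: python train.py <database_filepath> <model_filepath>")
        return
    database_filepath, model_filepath = sys.argv[1], sys.argv[2]
    X, Y = load_data(database_filepath)
    X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.2)
    model = build_model()
    model.fit(X_train, y_train)
    evaluate_model(model, X_test, y_test)
    with open(model_filepath, 'wb') as f:
        pickle.dump(model, f)


if __name__ == '__main__':
    main()
```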
# Random Signals *This jupyter notebook is part of a [collection of notebooks](../index.ipynb) on various topics of Digital Signal Processing. Please direct questions and suggestions to [[email protected]](mailto:[email protected]).* ## Auto-Power Spectral Density The (auto-) [power spectral density](https://en.wikipedia.org/wiki/Spectral_density#Power_spectral_density) (PSD) is defined as the Fourier transformation of the [auto-correlation function](correlation_functions.ipynb) (ACF). ### Definition For a continuous-amplitude, real-valued, wide-sense stationary (WSS) random signal $x[k]$ the PSD is given as \begin{equation} \Phi_{xx}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = \mathcal{F}_* \{ \varphi_{xx}[\kappa] \}, \end{equation} where $\mathcal{F}_* \{ \cdot \}$ denotes the [discrete-time Fourier transformation](https://en.wikipedia.org/wiki/Discrete-time_Fourier_transform) (DTFT) and $\varphi_{xx}[\kappa]$ the ACF of $x[k]$. Note that the DTFT is performed with respect to $\kappa$. The ACF of a random signal of finite length $N$ can be expressed by way of a linear convolution \begin{equation} \varphi_{xx}[\kappa] = \frac{1}{N} \cdot x_N[k] * x_N[-k]. \end{equation} Taking the DTFT of the left- and right-hand side results in \begin{equation} \Phi_{xx}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = \frac{1}{N} \, X_N(\mathrm{e}^{\,\mathrm{j}\,\Omega})\, X_N(\mathrm{e}^{-\,\mathrm{j}\,\Omega}) = \frac{1}{N} \, | X_N(\mathrm{e}^{\,\mathrm{j}\,\Omega}) |^2. \end{equation} The last equality results from the definition of the magnitude and the symmetry of the DTFT for real-valued signals. The spectrum $X_N(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ quantifies the amplitude density of the signal $x_N[k]$. It can be concluded from above result that the PSD quantifies the squared amplitude or power density of a random signal. This explains the term power spectral density. ### Properties The properties of the PSD can be deduced from the properties of the ACF and the DTFT as: 1. From the link between the PSD $\Phi_{xx}(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ and the spectrum $X_N(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ derived above it can be concluded that the PSD is real valued $$\Phi_{xx}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) \in \mathbb{R}$$ 2. From the even symmetry $\varphi_{xx}[\kappa] = \varphi_{xx}[-\kappa]$ of the ACF it follows that $$ \Phi_{xx}(\mathrm{e}^{\,\mathrm{j} \, \Omega}) = \Phi_{xx}(\mathrm{e}^{\,-\mathrm{j}\, \Omega}) $$ 3. The PSD of an uncorrelated random signal is given as $$ \Phi_{xx}(\mathrm{e}^{\,\mathrm{j} \, \Omega}) = \sigma_x^2 + \mu_x^2 \cdot {\bot \!\! \bot \!\! \bot}\left( \frac{\Omega}{2 \pi} \right) ,$$ which can be deduced from the [ACF of an uncorrelated signal](correlation_functions.ipynb#Properties). 4. The quadratic mean of a random signal is given as $$ E\{ x[k]^2 \} = \varphi_{xx}[\kappa=0] = \frac{1}{2\pi} \int\limits_{-\pi}^{\pi} \Phi_{xx}(\mathrm{e}^{\,\mathrm{j}\, \Omega}) \,\mathrm{d} \Omega $$ The last relation can be found by expressing the ACF via the inverse DTFT of $\Phi_{xx}$ and considering that $\mathrm{e}^{\mathrm{j} \Omega \kappa} = 1$ when evaluating the integral for $\kappa=0$. ### Example - Power Spectral Density of a Speech Signal In this example the PSD $\Phi_{xx}(\mathrm{e}^{\,\mathrm{j} \,\Omega})$ of a speech signal of length $N$ is estimated by applying a discrete Fourier transformation (DFT) to its ACF. 
For a better interpretation of the PSD, the frequency axis $f = \frac{\Omega}{2 \pi} \cdot f_s$ has been chosen for illustration, where $f_s$ denotes the sampling frequency of the signal. The speech signal constitutes a recording of the vowel 'o' spoken from a German male, loaded into variable `x`. In Python the ACF is stored in a vector with indices $0, 1, \dots, 2N - 2$ corresponding to the lags $\kappa = (0, 1, \dots, 2N - 2)^\mathrm{T} - (N-1)$. When computing the discrete Fourier transform (DFT) of the ACF numerically by the fast Fourier transform (FFT) one has to take this shift into account. For instance, by multiplying the DFT $\Phi_{xx}[\mu]$ by $\mathrm{e}^{\mathrm{j} \mu \frac{2 \pi}{2N - 1} (N-1)}$. ``` import numpy as np import matplotlib.pyplot as plt from scipy.io import wavfile # read audio file fs, x = wavfile.read('../data/vocal_o_8k.wav') x = np.asarray(x, dtype=float) N = len(x) # compute ACF acf = 1/N * np.correlate(x, x, mode='full') # compute PSD psd = np.fft.fft(acf) psd = psd * np.exp(1j*np.arange(2*N-1)*2*np.pi*(N-1)/(2*N-1)) f = np.fft.fftfreq(2*N-1, d=1/fs) # plot PSD plt.figure(figsize=(10, 4)) plt.plot(f, np.real(psd)) plt.title('Estimated power spectral density') plt.ylabel(r'$\hat{\Phi}_{xx}(e^{j \Omega})$') plt.xlabel(r'$f / Hz$') plt.axis([0, 500, 0, 1.1*max(np.abs(psd))]) plt.grid() ``` **Exercise** * What does the PSD tell you about the average spectral contents of a speech signal? Solution: The speech signal exhibits a harmonic structure with the dominant fundamental frequency $f_0 \approx 100$ Hz and a number of harmonics $f_n \approx n \cdot f_0$ for $n > 0$. This due to the fact that vowels generate random signals which are in good approximation periodic. To generate vowels, the sound produced by the periodically vibrating vowel folds is filtered by the resonance volumes and articulators above the voice box. The spectrum of periodic signals is a line spectrum. ## Cross-Power Spectral Density The cross-power spectral density is defined as the Fourier transformation of the [cross-correlation function](correlation_functions.ipynb#Cross-Correlation-Function) (CCF). ### Definition For two continuous-amplitude, real-valued, wide-sense stationary (WSS) random signals $x[k]$ and $y[k]$, the cross-power spectral density is given as \begin{equation} \Phi_{xy}(\mathrm{e}^{\,\mathrm{j} \, \Omega}) = \mathcal{F}_* \{ \varphi_{xy}[\kappa] \}, \end{equation} where $\varphi_{xy}[\kappa]$ denotes the CCF of $x[k]$ and $y[k]$. Note again, that the DTFT is performed with respect to $\kappa$. The CCF of two random signals of finite length $N$ and $M$ can be expressed by way of a linear convolution \begin{equation} \varphi_{xy}[\kappa] = \frac{1}{N} \cdot x_N[k] * y_M[-k]. \end{equation} Note the chosen $\frac{1}{N}$-averaging convention corresponds to the length of signal $x$. If $N \neq M$, care should be taken on the interpretation of this normalization. In case of $N=M$ the $\frac{1}{N}$-averaging yields a [biased estimator](https://en.wikipedia.org/wiki/Bias_of_an_estimator) of the CCF, which consistently should be denoted with $\hat{\varphi}_{xy,\mathrm{biased}}[\kappa]$. Taking the DTFT of the left- and right-hand side from above cross-correlation results in \begin{equation} \Phi_{xy}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = \frac{1}{N} \, X_N(\mathrm{e}^{\,\mathrm{j}\,\Omega})\, Y_M(\mathrm{e}^{-\,\mathrm{j}\,\Omega}). \end{equation} ### Properties 1. 
The symmetries of $\Phi_{xy}(\mathrm{e}^{\,\mathrm{j}\, \Omega})$ can be derived from the symmetries of the CCF and the DTFT as $$ \underbrace {\Phi_{xy}(\mathrm{e}^{\,\mathrm{j}\, \Omega}) = \Phi_{xy}^*(\mathrm{e}^{-\,\mathrm{j}\, \Omega})}_{\varphi_{xy}[\kappa] \in \mathbb{R}} = \underbrace {\Phi_{yx}(\mathrm{e}^{\,- \mathrm{j}\, \Omega}) = \Phi_{yx}^*(\mathrm{e}^{\,\mathrm{j}\, \Omega})}_{\varphi_{yx}[-\kappa] \in \mathbb{R}},$$ from which $|\Phi_{xy}(\mathrm{e}^{\,\mathrm{j}\, \Omega})| = |\Phi_{yx}(\mathrm{e}^{\,\mathrm{j}\, \Omega})|$ can be concluded. 2. The cross PSD of two uncorrelated random signals is given as $$ \Phi_{xy}(\mathrm{e}^{\,\mathrm{j} \, \Omega}) = \mu_x^2 \mu_y^2 \cdot {\bot \!\! \bot \!\! \bot}\left( \frac{\Omega}{2 \pi} \right) $$ which can be deduced from the CCF of an uncorrelated signal. ### Example - Cross-Power Spectral Density The following example estimates and plots the cross PSD $\Phi_{xy}(\mathrm{e}^{\,\mathrm{j}\, \Omega})$ of two random signals $x_N[k]$ and $y_M[k]$ of finite lengths $N = 64$ and $M = 512$. ``` N = 64 # length of x M = 512 # length of y # generate two uncorrelated random signals np.random.seed(1) x = 2 + np.random.normal(size=N) y = 3 + np.random.normal(size=M) N = len(x) M = len(y) # compute cross PSD via CCF acf = 1/N * np.correlate(x, y, mode='full') psd = np.fft.fft(acf) psd = psd * np.exp(1j*np.arange(N+M-1)*2*np.pi*(M-1)/(2*M-1)) psd = np.fft.fftshift(psd) Om = 2*np.pi * np.arange(0, N+M-1) / (N+M-1) Om = Om - np.pi # plot results plt.figure(figsize=(10, 4)) plt.stem(Om, np.abs(psd), basefmt='C0:', use_line_collection=True) plt.title('Biased estimator of cross power spectral density') plt.ylabel(r'$|\hat{\Phi}_{xy}(e^{j \Omega})|$') plt.xlabel(r'$\Omega$') plt.grid() ``` **Exercise** * What does the cross PSD $\Phi_{xy}(\mathrm{e}^{\,\mathrm{j} \, \Omega})$ tell you about the statistical properties of the two random signals? Solution: The cross PSD $\Phi_{xy}(\mathrm{e}^{\,\mathrm{j} \, \Omega})$ is essential only non-zero for $\Omega=0$. It hence can be concluded that the two random signals are not mean-free and uncorrelated to each other. **Copyright** This notebook is provided as [Open Educational Resource](https://en.wikipedia.org/wiki/Open_educational_resources). Feel free to use the notebook for your own purposes. The text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT). Please attribute the work as follows: *Sascha Spors, Digital Signal Processing - Lecture notes featuring computational examples*.
# Implementation of VGG16 > In this notebook I have implemented VGG16 on CIFAR10 dataset using Pytorch ``` #importing libraries import torch import torch.nn as nn import torch.nn.functional as F from torchvision import transforms import torch.optim as optim import tqdm import matplotlib.pyplot as plt from torchvision.datasets import CIFAR10 from torch.utils.data import random_split from torch.utils.data.dataloader import DataLoader ``` Load the data and do standard preprocessing steps,such as resizing and converting the images into tensor ``` transform = transforms.Compose([transforms.Resize(224), transforms.ToTensor(), transforms.Normalize(mean=[0.485,0.456,0.406], std=[0.229,0.224,0.225])]) train_ds = CIFAR10(root='data/',train = True,download=True,transform = transform) val_ds = CIFAR10(root='data/',train = False,download=True,transform = transform) batch_size = 128 train_loader = DataLoader(train_ds,batch_size,shuffle=True,num_workers=4,pin_memory=True) val_loader = DataLoader(val_ds,batch_size,num_workers=4,pin_memory=True) ``` A custom utility class to print out the accuracy and losses during training and testing ``` def accuracy(outputs,labels): _,preds = torch.max(outputs,dim=1) return torch.tensor(torch.sum(preds==labels).item()/len(preds)) class ImageClassificationBase(nn.Module): def training_step(self,batch): images, labels = batch out = self(images) loss = F.cross_entropy(out,labels) return loss def validation_step(self,batch): images, labels = batch out = self(images) loss = F.cross_entropy(out,labels) acc = accuracy(out,labels) return {'val_loss': loss.detach(),'val_acc': acc} def validation_epoch_end(self,outputs): batch_losses = [x['val_loss'] for x in outputs] epoch_loss = torch.stack(batch_losses).mean() batch_accs = [x['val_acc'] for x in outputs] epoch_acc = torch.stack(batch_accs).mean() return {'val_loss': epoch_loss.item(), 'val_acc': epoch_acc.item()} def epoch_end(self, epoch, result): print("Epoch [{}], train_loss: {:.4f}, val_loss: {:.4f}, val_acc: {:.4f}".format( epoch, result['train_loss'], result['val_loss'], result['val_acc'])) ``` ### Creating a network ``` VGG_types = { 'VGG11': [64, 'M', 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'], 'VGG13': [64, 64, 'M', 128, 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'], 'VGG16': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M', 512, 512, 512, 'M', 512, 512, 512, 'M'], 'VGG19': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 256, 'M', 512, 512, 512, 512, 'M', 512, 512, 512, 512, 'M'], } class VGG_net(ImageClassificationBase): def __init__(self, in_channels=3, num_classes=1000): super(VGG_net, self).__init__() self.in_channels = in_channels self.conv_layers = self.create_conv_layers(VGG_types['VGG16']) self.fcs = nn.Sequential( nn.Linear(512*7*7, 4096), nn.ReLU(), nn.Dropout(p=0.5), nn.Linear(4096, 4096), nn.ReLU(), nn.Dropout(p=0.5), nn.Linear(4096, num_classes) ) def forward(self, x): x = self.conv_layers(x) x = x.reshape(x.shape[0], -1) x = self.fcs(x) return x def create_conv_layers(self, architecture): layers = [] in_channels = self.in_channels for x in architecture: if type(x) == int: out_channels = x layers += [nn.Conv2d(in_channels=in_channels,out_channels=out_channels, kernel_size=(3,3), stride=(1,1), padding=(1,1)), nn.BatchNorm2d(x), nn.ReLU()] in_channels = x elif x == 'M': layers += [nn.MaxPool2d(kernel_size=(2,2), stride=(2,2))] return nn.Sequential(*layers) ``` A custom function to pick a default device ``` def get_default_device(): """Pick GPU if available else CPU""" if 
torch.cuda.is_available(): return torch.device('cuda') else: return torch.device('cpu') device = get_default_device() device def to_device(data,device): """Move tensors to chosen device""" if isinstance(data,(list,tuple)): return [to_device(x,device) for x in data] return data.to(device,non_blocking=True) for images, labels in train_loader: print(images.shape) images = to_device(images,device) print(images.device) break class DeviceDataLoader(): """Wrap a DataLoader to move data to a device""" def __init__(self,dl,device): self.dl = dl self.device = device def __iter__(self): """Yield a batch of data to a dataloader""" for b in self.dl: yield to_device(b, self.device) def __len__(self): """Number of batches""" return len(self.dl) train_loader = DeviceDataLoader(train_loader,device) val_loader = DeviceDataLoader(val_loader,device) model = VGG_net(in_channels=3,num_classes=10) to_device(model,device) ``` ### Training the model ``` @torch.no_grad() def evaluate(model, val_loader): model.eval() outputs = [model.validation_step(batch) for batch in val_loader] return model.validation_epoch_end(outputs) def fit(epochs, lr, model, train_loader, val_loader, opt_func=torch.optim.SGD): history = [] train_losses =[] optimizer = opt_func(model.parameters(), lr) for epoch in range(epochs): # Training Phase model.train() for batch in train_loader: loss = model.training_step(batch) train_losses.append(loss) loss.backward() optimizer.step() optimizer.zero_grad() # Validation phase result = evaluate(model, val_loader) result['train_loss'] = torch.stack(train_losses).mean().item() model.epoch_end(epoch, result) history.append(result) return history history = [evaluate(model, val_loader)] history #history = fit(2,0.1,model,train_loader,val_loader) ```
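Once the commented-out `fit` call above has been run, the `history` list of result dictionaries can be visualized. The helper below is only an illustrative sketch; it uses the `val_acc`, `val_loss`, and `train_loss` keys produced by `fit` and `evaluate` in this notebook.

```
# plot metrics collected in `history` (a list of dicts produced by fit/evaluate)
def plot_history(history):
    val_acc = [r['val_acc'] for r in history if 'val_acc' in r]
    val_loss = [r['val_loss'] for r in history if 'val_loss' in r]
    train_loss = [r['train_loss'] for r in history if 'train_loss' in r]

    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))
    ax1.plot(val_acc, '-x')
    ax1.set_xlabel('epoch')
    ax1.set_ylabel('accuracy')
    ax1.set_title('Validation accuracy')

    ax2.plot(train_loss, '-o', label='train loss')
    ax2.plot(val_loss, '-x', label='val loss')
    ax2.set_xlabel('epoch')
    ax2.legend()
    ax2.set_title('Losses')
    plt.show()

# plot_history(history)
```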
# REINFORCE in PyTorch Just like we did before for Q-learning, this time we'll design a PyTorch network to learn `CartPole-v0` via policy gradient (REINFORCE). Most of the code in this notebook is taken from approximate Q-learning, so you'll find it more or less familiar and even simpler. ``` import sys, os if 'google.colab' in sys.modules and not os.path.exists('.setup_complete'): !wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/master/setup_colab.sh -O- | bash !touch .setup_complete # This code creates a virtual display to draw game images on. # It will have no effect if your machine has a monitor. if type(os.environ.get("DISPLAY")) is not str or len(os.environ.get("DISPLAY")) == 0: !bash ../xvfb start os.environ['DISPLAY'] = ':1' import gym import numpy as np import matplotlib.pyplot as plt %matplotlib inline ``` A caveat: with some versions of `pyglet`, the following cell may crash with `NameError: name 'base' is not defined`. The corresponding bug report is [here](https://github.com/pyglet/pyglet/issues/134). If you see this error, try restarting the kernel. ``` env = gym.make("CartPole-v0") # gym compatibility: unwrap TimeLimit if hasattr(env, '_max_episode_steps'): env = env.env env.reset() n_actions = env.action_space.n state_dim = env.observation_space.shape plt.imshow(env.render("rgb_array")) ``` # Building the network for REINFORCE For REINFORCE algorithm, we'll need a model that predicts action probabilities given states. For numerical stability, please __do not include the softmax layer into your network architecture__. We'll use softmax or log-softmax where appropriate. ``` import torch import torch.nn as nn # Build a simple neural network that predicts policy logits. # Keep it simple: CartPole isn't worth deep architectures. model = nn.Sequential( <YOUR CODE: define a neural network that predicts policy logits> ) ``` #### Predict function Note: output value of this function is not a torch tensor, it's a numpy array. So, here gradient calculation is not needed. <br> Use [no_grad](https://pytorch.org/docs/stable/autograd.html#torch.autograd.no_grad) to suppress gradient calculation. <br> Also, `.detach()` (or legacy `.data` property) can be used instead, but there is a difference: <br> With `.detach()` computational graph is built but then disconnected from a particular tensor, so `.detach()` should be used if that graph is needed for backprop via some other (not detached) tensor; <br> In contrast, no graph is built by any operation in `no_grad()` context, thus it's preferable here. ``` def predict_probs(states): """ Predict action probabilities given states. :param states: numpy array of shape [batch, state_shape] :returns: numpy array of shape [batch, n_actions] """ # convert states, compute logits, use softmax to get probability <YOUR CODE> return <YOUR CODE> test_states = np.array([env.reset() for _ in range(5)]) test_probas = predict_probs(test_states) assert isinstance(test_probas, np.ndarray), \ "you must return np array and not %s" % type(test_probas) assert tuple(test_probas.shape) == (test_states.shape[0], env.action_space.n), \ "wrong output shape: %s" % np.shape(test_probas) assert np.allclose(np.sum(test_probas, axis=1), 1), "probabilities do not sum to 1" ``` ### Play the game We can now use our newly built agent to play the game. ``` def generate_session(env, t_max=1000): """ Play a full session with REINFORCE agent. Returns sequences of states, actions, and rewards. 
""" # arrays to record session states, actions, rewards = [], [], [] s = env.reset() for t in range(t_max): # action probabilities array aka pi(a|s) action_probs = predict_probs(np.array([s]))[0] # Sample action with given probabilities. a = <YOUR CODE> new_s, r, done, info = env.step(a) # record session history to train later states.append(s) actions.append(a) rewards.append(r) s = new_s if done: break return states, actions, rewards # test it states, actions, rewards = generate_session(env) ``` ### Computing cumulative rewards $$ \begin{align*} G_t &= r_t + \gamma r_{t + 1} + \gamma^2 r_{t + 2} + \ldots \\ &= \sum_{i = t}^T \gamma^{i - t} r_i \\ &= r_t + \gamma * G_{t + 1} \end{align*} $$ ``` def get_cumulative_rewards(rewards, # rewards at each step gamma=0.99 # discount for reward ): """ Take a list of immediate rewards r(s,a) for the whole session and compute cumulative returns (a.k.a. G(s,a) in Sutton '16). G_t = r_t + gamma*r_{t+1} + gamma^2*r_{t+2} + ... A simple way to compute cumulative rewards is to iterate from the last to the first timestep and compute G_t = r_t + gamma*G_{t+1} recurrently You must return an array/list of cumulative rewards with as many elements as in the initial rewards. """ <YOUR CODE> return <YOUR CODE: array of cumulative rewards> get_cumulative_rewards(rewards) assert len(get_cumulative_rewards(list(range(100)))) == 100 assert np.allclose( get_cumulative_rewards([0, 0, 1, 0, 0, 1, 0], gamma=0.9), [1.40049, 1.5561, 1.729, 0.81, 0.9, 1.0, 0.0]) assert np.allclose( get_cumulative_rewards([0, 0, 1, -2, 3, -4, 0], gamma=0.5), [0.0625, 0.125, 0.25, -1.5, 1.0, -4.0, 0.0]) assert np.allclose( get_cumulative_rewards([0, 0, 1, 2, 3, 4, 0], gamma=0), [0, 0, 1, 2, 3, 4, 0]) print("looks good!") ``` #### Loss function and updates We now need to define objective and update over policy gradient. Our objective function is $$ J \approx { 1 \over N } \sum_{s_i,a_i} G(s_i,a_i) $$ REINFORCE defines a way to compute the gradient of the expected reward with respect to policy parameters. The formula is as follows: $$ \nabla_\theta \hat J(\theta) \approx { 1 \over N } \sum_{s_i, a_i} \nabla_\theta \log \pi_\theta (a_i \mid s_i) \cdot G_t(s_i, a_i) $$ We can abuse PyTorch's capabilities for automatic differentiation by defining our objective function as follows: $$ \hat J(\theta) \approx { 1 \over N } \sum_{s_i, a_i} \log \pi_\theta (a_i \mid s_i) \cdot G_t(s_i, a_i) $$ When you compute the gradient of that function with respect to network weights $\theta$, it will become exactly the policy gradient. ``` def to_one_hot(y_tensor, ndims): """ helper: take an integer vector and convert it to 1-hot matrix. """ y_tensor = y_tensor.type(torch.LongTensor).view(-1, 1) y_one_hot = torch.zeros( y_tensor.size()[0], ndims).scatter_(1, y_tensor, 1) return y_one_hot # Your code: define optimizers optimizer = torch.optim.Adam(model.parameters(), 1e-3) def train_on_session(states, actions, rewards, gamma=0.99, entropy_coef=1e-2): """ Takes a sequence of states, actions and rewards produced by generate_session. Updates agent's weights by following the policy gradient above. Please use Adam optimizer with default parameters. """ # cast everything into torch tensors states = torch.tensor(states, dtype=torch.float32) actions = torch.tensor(actions, dtype=torch.int32) cumulative_returns = np.array(get_cumulative_rewards(rewards, gamma)) cumulative_returns = torch.tensor(cumulative_returns, dtype=torch.float32) # predict logits, probas and log-probas using an agent. 
logits = model(states) probs = nn.functional.softmax(logits, -1) log_probs = nn.functional.log_softmax(logits, -1) assert all(isinstance(v, torch.Tensor) for v in [logits, probs, log_probs]), \ "please use compute using torch tensors and don't use predict_probs function" # select log-probabilities for chosen actions, log pi(a_i|s_i) log_probs_for_actions = torch.sum( log_probs * to_one_hot(actions, env.action_space.n), dim=1) # Compute loss here. Don't forgen entropy regularization with `entropy_coef` entropy = <YOUR CODE> loss = <YOUR CODE> # Gradient descent step <YOUR CODE> # technical: return session rewards to print them later return np.sum(rewards) ``` ### The actual training ``` for i in range(100): rewards = [train_on_session(*generate_session(env)) for _ in range(100)] # generate new sessions print("mean reward:%.3f" % (np.mean(rewards))) if np.mean(rewards) > 500: print("You Win!") # but you can train even further break ``` ### Results & video ``` # Record sessions import gym.wrappers with gym.wrappers.Monitor(gym.make("CartPole-v0"), directory="videos", force=True) as env_monitor: sessions = [generate_session(env_monitor) for _ in range(100)] # Show video. This may not work in some setups. If it doesn't # work for you, you can download the videos and view them locally. from pathlib import Path from base64 import b64encode from IPython.display import HTML video_paths = sorted([s for s in Path('videos').iterdir() if s.suffix == '.mp4']) video_path = video_paths[-1] # You can also try other indices if 'google.colab' in sys.modules: # https://stackoverflow.com/a/57378660/1214547 with video_path.open('rb') as fp: mp4 = fp.read() data_url = 'data:video/mp4;base64,' + b64encode(mp4).decode() else: data_url = str(video_path) HTML(""" <video width="640" height="480" controls> <source src="{}" type="video/mp4"> </video> """.format(data_url)) ```
# BE 240 Lecture 4 # Sub-SBML ## Modeling diffusion, shared resources, and compartmentalized systems ## _Ayush Pandey_ ``` # This notebook is designed to be converted to a HTML slide show # To do this in the command prompt type (in the folder containing the notebook): # jupyter nbconvert BE240_Lecture4_Sub-SBML.ipynb --to slides ``` ![image.png](attachment:image.png) ![image.png](attachment:image.png) # An example: ### Three different "subsystems" - each with its SBML model ### Another "signal in mixture" subsystem - models signal in the environment / mixture ### Using Sub-SBML we can obtain the combined model for such a system with * transport across membrane * shared resources : ATP, Ribosome etc * resolve naming conflicts (Ribo, Ribosome, RNAP, RNAPolymerase etc.) ![image.png](attachment:image.png) # Installing Sub-SBML ``` git clone https://github.com/BuildACell/subsbml.git ``` cd to `subsbml` directory then run the following command to install the package in your environment: ``` python setup.py install ``` # Dependencies: 1. python-libsbml : Run `pip install python-libsbml`, if you don't have it already. You probably already have this installed as it is also a dependency for bioscrape 1. A simulator: You will need a simulator of your choice to simulate the SBML models that Sub-SBML generates. Bioscrape is an example of a simulator and we will be using that for simulations. # Update your bioscrape installation From the bioscrape directory, run the following if you do not have a remote fork (your own Github fork of the original bioscrape repository - `biocircuits/bioscrape`. To list all remote repositories that your bioscrape directory is connected to you can run `git remote -v`. The `origin` in the next two commands corresponds to the biocircuits/bioscrape github repository (you should change it if your remote has a different name) ``` git pull origin master python setup.py install ``` Update your BioCRNpyler installation as well - if you plan to use your own BioCRNpyler models with Sub-SBML. Run the same commands as for bioscrape from the BioCRNpyler directory. ## Sub-SBML notes: ## On "name" and "identifier": > SBML elements can have a name and an identifier argument. A `name` is supposed to be a human readable name of the particular element in the model. On the other hand, an `identifier` is what the software tool reads. Hence, `identifier` argument in an SBML model is mandatory whereas `name` argument is optional. Sub-SBML works with `name` arguments of various model components to figure out what components interact/get combined/shared etc. Bioscrape/BioCRNpyler and other common software tools generate SBML models with `name` arguments added to various components such as species, parameters. As an example, to combine two species, Sub-SBML looks at the names of the two species and if they are the same - they are combined together and given a new identifier but the name remains the same. ## A simple Sub-SBML use case: A simple example where we have two different models : transcription and translation. Using Sub-SBML, we can combine these two together and run simulations. ``` # Import statements from subsbml.Subsystem import createNewSubsystem, createSubsystem import numpy as np import pylab as plt ``` ## Transcription Model: Consider the following simple transcription-only model where $G$ is a gene, $T$ is a transcript, and $S$ is the signaling molecule. We can write the following reduced order dynamics: 1. 
$G \xrightarrow[]{\rho_{tx}(G, S)} G + T$; \begin{align} \rho_{tx}(G, S) = G K_{X}\frac{S^{2}}{K_{S}^{2}+S^{2}} \\ \end{align} Here, $S$ is the inducer signal that cooperatively activates the transcription of the gene $G$. Since, this is a positive activation of the gene by the inducer, we have a positive proportional Hill function. 1. $T \xrightarrow[]{\delta} \varnothing$; massaction kinetics at rate $\delta$. ## Translation model: 1. $T \xrightarrow[]{\rho_{tl}(T)} T+X$; \begin{align} \rho_{tl}(T) = K_{TR} \frac{T}{K_{R} + T} \\ \end{align} Here $X$ is the protein species. The lumped parameters $K_{TR}$ and $K_R$ model effects due to ribosome saturation. This is the similar Hill function as derived in the enzymatic reaction example. 1. $X \xrightarrow[]{\delta} \varnothing$; massaction kinetics at rate $\delta$. ``` # Import SBML models by creating Subsystem class objects ss1 = createSubsystem('transcription_SBML_model.xml') ss2 = createSubsystem('translation_SBML_model.xml') ss1.renameSName('mRNA_T', 'T') # Combine the two subsystems together tx_tl_subsystem = ss1 + ss2 # The longer way to do the same thing: # tx_tl_subsystem = createNewSubsystem() # tx_tl_subsystem.combineSubsystems([ss1,ss2], verbose = True) # Set signal concentration (input) - manually and get ID for protein X X_id = tx_tl_subsystem.getSpeciesByName('X').getId() # Writing a Subsystem to an SBML file (Export SBML) _ = tx_tl_subsystem.writeSBML('txtl_ss.xml') tx_tl_subsystem.setSpeciesAmount('S',10) try: # Simulate with Bioscrape and plot the result timepoints = np.linspace(0,100,100) results, _ = tx_tl_subsystem.simulateWithBioscrape(timepoints) plt.plot(timepoints, results[X_id], linewidth = 3, label = 'S = 10') tx_tl_subsystem.setSpeciesAmount('S',5) results, _ = tx_tl_subsystem.simulateWithBioscrape(timepoints) plt.plot(timepoints, results[X_id], linewidth = 3, label = 'S = 5') plt.title('Protein X dynamics') plt.ylabel('[X]') plt.xlabel('Time') plt.legend() plt.show() except: print('Simulator not found') # Viewing the change log for the changes that Sub-SBML made # print(ss1.changeLog) # print(ss2.changeLog) print(tx_tl_subsystem.changeLog) ``` ## Signal induction model: 1. $\varnothing \xrightarrow[]{\rho(I)} S$; \begin{align} \rho(S) = K_{0} \frac{I^2}{K_{I} + I^2} \\ \end{align} Here $S$ is the signal produced on induction by an inducer $I$. The lumped parameters $K_{0}$ and $K_S$ model effects of cooperative production of the signal by the inducer. This is the similar Hill function as derived in the enzymatic reaction example. 
``` ss3 = createSubsystem('signal_in_mixture.xml') # Signal subsystem (production of signal molecule) combined_ss = ss1 + ss2 + ss3 # Alternatively combined_ss = createNewSubsystem() combined_ss.combineSubsystems([ss1,ss2,ss3]) # Writing a Subsystem to an SBML file (Export SBML) combined_ss.writeSBML('txtl_combined.xml') # Set signal concentration (input) - manually and get ID for protein X combined_ss.setSpeciesAmount('I',10) X_id = combined_ss.getSpeciesByName('X').getId() try: # Simulate with Bioscrape and plot the result timepoints = np.linspace(0,100,100) results, _ = combined_ss.simulateWithBioscrape(timepoints) plt.plot(timepoints, results[X_id], linewidth = 3, label = 'I = 10') combined_ss.setSpeciesAmount('I',2) results, _ = combined_ss.simulateWithBioscrape(timepoints) plt.plot(timepoints, results[X_id], linewidth = 3, label = 'I = 5') plt.title('Protein X dynamics') plt.ylabel('[X]') plt.xlabel('Time') plt.legend() plt.show() except: print('Simulator not found') combined_ss.changeLog ``` ## What does Sub-SBML look for? 1. For compartments: if two compartments have the same `name` and the same `size` attributes => they are combined together. 1. For species: if two species have the same `name` attribute => they are combined together. If initial amount is not the same, the first amount is set. It is easy to set species amounts later. 1. For parameters: if two paraemters have the same `name` attribute **and** the same `value` => they are combined together. 1. For reactions: if two reactions have the same `name` **and** the same reaction string (reactants -> products) => they are combined together. 1. Other SBML components are also merged. # Utility functions for Subsystems 1. Set `verbose` keyword argument to `True` to get a list of detailed warning messages that describe the changes being made to the models. Helpful in debugging and creating clean models when combining multiple models. 1. Use `renameSName` method for a `Subsystem` to rename any species' names throughout a model and `renameSIdRefs` to rename identifiers. 1. Use `createBasicSubsystem()` function to get a basic "empty" subsystem model. 1. Use `getSpeciesByName` to get all species with a given name in a Subsystem model. 1. use `shareSubsystems` method similar to `combineSubsystems` method if you are only interested in getting a model with shared resource species combined together. 1. Set `combineNames` keyword argument to `False` in `combineSubsystems` method to combine the Subsystem objects but treating the elements with the same `name` as different. 
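To make these utilities concrete, here is a minimal, hedged sketch of the "shared resources only" and "keep same names separate" workflows. The file names are hypothetical, and the exact signatures of `shareSubsystems` and `combineSubsystems` may differ slightly in your subsbml version, so treat this as an illustration rather than a reference:

```
# Hypothetical SBML files - replace with your own models
ss_a = createSubsystem('module_A.xml')
ss_b = createSubsystem('module_B.xml')

# Resolve a naming conflict before merging (e.g. 'Ribo' vs 'Ribosome')
ss_a.renameSName('Ribo', 'Ribosome')

# Only merge the shared resource species (ATP, Ribosome, RNAP, ...)
shared_ss = createNewSubsystem()
shared_ss.shareSubsystems([ss_a, ss_b], verbose=True)

# Full combine, but treat elements that share a name as distinct
separate_ss = createNewSubsystem()
separate_ss.combineSubsystems([ss_a, ss_b], combineNames=False, verbose=True)
```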
# Modeling transport across membranes ![image.png](attachment:image.png) ## System 1 : TX-TL with IPTG reservoir and no membrane ``` from subsbml.System import System, combineSystems cell_1 = System('cell_1') ss1 = createSubsystem('txtl_ss.xml') ss1.renameSName('S', 'IPTG') ss2 = createSubsystem('IPTG_reservoir.xml') IPTG_external_conc = ss2.getSpeciesByName('IPTG').getInitialConcentration() cell_1.setInternal([ss1]) cell_1.setExternal([ss2]) # cell_1.setMembrane() # Membrane-less system ss1.setSpeciesAmount('IPTG', IPTG_external_conc) cell_1_model = cell_1.getModel() # Get a Subsystem object that represents the combined model for cell_1 cell_1_model.writeSBML('cell_1_model.xml') ``` ## System 2 : TX-TL with IPTG reservoir and a simple membrane ### Membrane : IPTG external and internal diffusion in a one step reversible reaction ``` from subsbml import System, createSubsystem, combineSystems, createNewSubsystem ss1 = createSubsystem('txtl_ss.xml') ss1.renameSName('S','IPTG') ss2 = createSubsystem('IPTG_reservoir.xml') # Create a simple IPTG membrane where IPTG goes in an out of the membrane via a reversible reaction mb2 = createSubsystem('membrane_IPTG.xml', membrane = True) # cell_2 = System('cell_2',ListOfInternalSubsystems = [ss1], # ListOfExternalSubsystems = [ss2], # ListOfMembraneSubsystems = [mb2]) cell_2 = System('cell_2') cell_2.setInternal(ss1) cell_2.setExternal(ss2) cell_2.setMembrane(mb2) cell_2_model = cell_2.getModel() cell_2_model.setSpeciesAmount('IPTG', 1e4, compartment = 'cell_2_external') cell_2_model.writeSBML('cell_2_model.xml') ``` ## System 3 : TX-TL with IPTG reservoir and a detailed membrane diffusion ### Membrane : IPTG external binds to a transport protein and forms a complex. This complex causes the diffusion of IPTG in the internal of the cell. 
``` # Create a more detailed IPTG membrane where IPTG binds to an intermediate transporter protein, forms a complex # then transports out of the cell system to the external environment mb3 = createSubsystem('membrane_IPTG_detailed.xml', membrane = True) cell_3 = System('cell_3',ListOfInternalSubsystems = [ss1], ListOfExternalSubsystems = [ss2], ListOfMembraneSubsystems = [mb3]) cell_3_model = cell_3.getModel() cell_3_model.setSpeciesAmount('IPTG', 1e4, compartment = 'cell_3_external') cell_3_model.writeSBML('cell_3_model.xml') combined_model = combineSystems([cell_1, cell_2, cell_3]) try: import numpy as np import matplotlib.pyplot as plt timepoints = np.linspace(0,2,100) results_1, _ = cell_1_model.simulateWithBioscrape(timepoints) results_2, _ = cell_2_model.simulateWithBioscrape(timepoints) results_3, _ = cell_3_model.simulateWithBioscrape(timepoints) X_id1 = cell_1_model.getSpeciesByName('X').getId() X_id2 = cell_2_model.getSpeciesByName('X', compartment = 'cell_2_internal').getId() X_id3 = cell_3_model.getSpeciesByName('X', compartment = 'cell_3_internal').getId() plt.plot(timepoints, results_1[X_id1], linewidth = 3, label = 'No membrane') plt.plot(timepoints, results_2[X_id2], linewidth = 3, label = 'Simple membrane') plt.plot(timepoints, results_3[X_id3], linewidth = 3, label = 'Advanced membrane') plt.xlabel('Time') plt.ylabel('[X]') plt.legend() plt.show() timepoints = np.linspace(0,200,100) results_1, _ = cell_1_model.simulateWithBioscrape(timepoints) results_2, _ = cell_2_model.simulateWithBioscrape(timepoints) results_3, _ = cell_3_model.simulateWithBioscrape(timepoints) X_id1 = cell_1_model.getSpeciesByName('X').getId() X_id2 = cell_2_model.getSpeciesByName('X', compartment = 'cell_2_internal').getId() X_id3 = cell_3_model.getSpeciesByName('X', compartment = 'cell_3_internal').getId() plt.plot(timepoints, results_1[X_id1], linewidth = 3, label = 'No membrane') plt.plot(timepoints, results_2[X_id2], linewidth = 3, label = 'Simple membrane') plt.plot(timepoints, results_3[X_id3], linewidth = 3, label = 'Advanced membrane') plt.xlabel('Time') plt.ylabel('[X]') plt.legend() plt.show() except: print('Simulator not found') ``` # Additional Sub-SBML Tools: * Create SBML models directly using `SimpleModel` class * Simulate directly using `bioscrape` or `libRoadRunner` with various simulation options * Various utility functions to edit SBML models: 1. Change species names/identifiers throughout an SBML model. 1. Edit parameter values or species initial conditions easily (directly in an SBML model). * `combineSystems` function can be used to combine multiple `System` objects together as shown in the previous cell. Also, a special use case interaction modeling function is available : `connectSubsystems`. Refer to the tutorial_interconnetion.ipynb notebook in the tutorials directory for more information about this. # Things to Try: 1. Compartmentalize your own SBML model - generate more than 1 model each with a different compartment names. Using tools in this notebook, try to combine your models together and regenerate the expected simulation. 1. Implement a diffusion model and use it as a membrane model for a `System` of your choice. 1. Implement an even more complicated diffusion model for the above example and run the simulation. 1. **The package has not been tested extensively. So, it would be really great if you could raise [issues](https://github.com/BuildACell/subsbml/issues) on Github if you face any errors with your models. 
Also, feel free to send a message on Slack channel or DM.**
# Examples of usage of Gate Angle Placeholder The word "Placeholder" is used in Qubiter (we are in good company, Tensorflow uses this word in the same way) to mean a variable for which we delay/postpone assigning a numerical value (evaluating it) until a later time. In the case of Qubiter, it is useful to define gates with placeholders standing for angles. One can postpone evaluating those placeholders until one is ready to call the circuit simulator, and then pass the values of the placeholders as an argument to the simulator’s constructor. Placeholders of this type can be useful, for example, with quantum neural nets (QNNs). In some QNN algorithms, the circuit gate structure is fixed but the angles of the gates are varied many times, gradually, trying to lower a cost function each time. > In Qubiter, legal variable names must be of form `#3` or `-#3` or `#3*.5` or `-#3*.5` where 3 can be replaced by any non-negative int, and .5 can be replaced by anything that can be an argument of float() without throwing an exception. In this example, the 3 that follows the hash character is called the variable number >NEW! (functional placeholder variables) Now legal variable names can ALSO be of the form `my_fun#1#2` or `-my_fun#1#2`, where * the 1 and 2 can be replaced by any non-negative integers and there might be any number > 0 of hash variables. Thus, there need not always be precisely 2 hash variables as in the example. * `my_fun` can be replaced by the name of any function with one or more input floats (2 inputs in the example), as long as the first character of the function's name is a lower case letter. >The strings `my_fun#1#2` or `-my_fun#1#2` indicate than one wants to use for the angle being replaced, the values of `my_fun(#1, #2)` or `-my_fun(#1, #2)`, respectively, where the inputs #1 and #2 are floats standing for radians and the output is also a float standing for radians. ``` import os import sys print(os.getcwd()) os.chdir('../../') print(os.getcwd()) sys.path.insert(0,os.getcwd()) ``` We begin by writing a simple circuit with 4 qubits. As usual, the following code will write an English and a Picture file in the `io_folder` directory. Note that some angles have been entered into the write() Python functions as legal variable names instead of floats. In the English file, you will see those legal names where the numerical values of those angles would have been. ``` from qubiter.SEO_writer import * from qubiter.SEO_reader import * from qubiter.EchoingSEO_reader import * from qubiter.SEO_simulator import * num_bits = 4 file_prefix = 'placeholder_test' emb = CktEmbedder(num_bits, num_bits) wr = SEO_writer(file_prefix, emb) wr.write_Rx(2, rads=np.pi/7) wr.write_Rx(1, rads='#2*.5') wr.write_Rx(1, rads='my_fun1#2') wr.write_Rn(3, rads_list=['#1', '-#1*3', '#3']) wr.write_Rx(1, rads='-my_fun2#2#1') wr.write_cnot(2, 3) wr.close_files() ``` The following 2 files were just written: 1. <a href='../io_folder/placeholder_test_4_eng.txt'>../io_folder/placeholder_test_4_eng.txt</a> 2. 
<a href='../io_folder/placeholder_test_4_ZLpic.txt'>../io_folder/placeholder_test_4_ZLpic.txt</a> Simply by creating an object of the class SEO_reader with the flag `write_log` set equal to True, you can create a log file which contains * a list of distinct variable numbers * a list of distinct function names encountered in the English file ``` rdr = SEO_reader(file_prefix, num_bits, write_log=True) ``` The following log file was just written: <a href='../io_folder/placeholder_test_4_log.txt'>../io_folder/placeholder_test_4_log.txt</a> Next, let us create two functions that will be used for the functional placeholders ``` def my_fun1(x): return x*.5 def my_fun2(x, y): return x + y ``` **Partial Substitution** This creates new files with `#1=30`, `#2=60`, `'my_fun1'->my_fun1`, but `#3` and `'my_fun2'` still undecided ``` vman = PlaceholderManager(eval_all_vars=False, var_num_to_rads={1: np.pi/6, 2: np.pi/3}, fun_name_to_fun={'my_fun1': my_fun1}) wr = SEO_writer(file_prefix + '_eval01', emb) EchoingSEO_reader(file_prefix, num_bits, wr, vars_manager=vman) ``` The following 2 files were just written: 1. <a href='../io_folder/placeholder_test_eval01_4_eng.txt'>../io_folder/placeholder_test_eval01_4_eng.txt</a> 2. <a href='../io_folder/placeholder_test_eval01_4_ZLpic.txt'>../io_folder/placeholder_test_eval01_4_ZLpic.txt</a> The following code runs the simulator after substituting `#1=30`, `#2=60`, `#3=90`, `'my_fun1'->my_fun1`, `'my_fun2'->my_fun2` ``` vman = PlaceholderManager( var_num_to_rads={1: np.pi/6, 2: np.pi/3, 3: np.pi/2}, fun_name_to_fun={'my_fun1': my_fun1, 'my_fun2': my_fun2} ) sim = SEO_simulator(file_prefix, num_bits, verbose=False, vars_manager=vman) StateVec.describe_st_vec_dict(sim.cur_st_vec_dict) ```
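As a sanity check on what the functional placeholders resolved to in the run above, the substitutions can be reproduced with plain Python. This is just the arithmetic implied by the definitions of `my_fun1` and `my_fun2` and the values passed to `PlaceholderManager` (#1 = pi/6, #2 = pi/3); it is not additional Qubiter functionality:

```
import numpy as np

# 'my_fun1#2'    ->  my_fun1(#2)     =  my_fun1(pi/3)        =  pi/6 radians
# '-my_fun2#2#1' -> -my_fun2(#2, #1) = -my_fun2(pi/3, pi/6)  = -pi/2 radians
print(my_fun1(np.pi/3))             # ~0.5236
print(-my_fun2(np.pi/3, np.pi/6))   # ~-1.5708
```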
# The art of using pipelines Pipelines are a natural way to think about a machine learning system. Indeed with some practice a data scientist can visualise data "flowing" through a series of steps. The input is typically some raw data which has to be processed in some manner. The goal is to represent the data in such a way that is can be ingested by a machine learning algorithm. Along the way some steps will extract features, while others will normalize the data and remove undesirable elements. Pipelines are simple, and yet they are a powerful way of designing sophisticated machine learning systems. Both [scikit-learn](https://stackoverflow.com/questions/33091376/python-what-is-exactly-sklearn-pipeline-pipeline) and [pandas](https://tomaugspurger.github.io/method-chaining) make it possible to use pipelines. However it's quite rare to see pipelines being used in practice (at least on Kaggle). Sometimes you get to see people using scikit-learn's `pipeline` module, however the `pipe` method from `pandas` is sadly underappreciated. A big reason why pipelines are not given much love is that it's easier to think of batch learning in terms of a script or a notebook. Indeed many people doing data science seem to prefer a procedural style to a declarative style. Moreover in practice pipelines can be a bit rigid if one wishes to do non-orthodox operations. Although pipelines may be a bit of an odd fit for batch learning, they make complete sense when they are used for online learning. Indeed the UNIX philosophy has advocated the use of pipelines for data processing for many decades. If you can visualise data as a stream of observations then using pipelines should make a lot of sense to you. We'll attempt to convince you by writing a machine learning algorithm in a procedural way and then converting it to a declarative pipeline in small steps. Hopefully by the end you'll be convinced, or not! In this notebook we'll manipulate data from the [Kaggle Recruit Restaurants Visitor Forecasting competition](https://www.kaggle.com/c/recruit-restaurant-visitor-forecasting). The data is directly available through `river`'s `datasets` module. ``` from pprint import pprint from river import datasets for x, y in datasets.Restaurants(): pprint(x) pprint(y) break ``` We'll start by building and running a model using a procedural coding style. The performance of the model doesn't matter, we're simply interested in the design of the model. 
``` from river import feature_extraction from river import linear_model from river import metrics from river import preprocessing from river import stats means = ( feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(7)), feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(14)), feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(21)) ) scaler = preprocessing.StandardScaler() lin_reg = linear_model.LinearRegression() metric = metrics.MAE() for x, y in datasets.Restaurants(): # Derive date features x['weekday'] = x['date'].weekday() x['is_weekend'] = x['date'].weekday() in (5, 6) # Process the rolling means of the target for mean in means: x = {**x, **mean.transform_one(x)} mean.learn_one(x, y) # Remove the key/value pairs that aren't features for key in ['store_id', 'date', 'genre_name', 'area_name', 'latitude', 'longitude']: x.pop(key) # Rescale the data x = scaler.learn_one(x).transform_one(x) # Fit the linear regression y_pred = lin_reg.predict_one(x) lin_reg.learn_one(x, y) # Update the metric using the out-of-fold prediction metric.update(y, y_pred) print(metric) ``` We're not using many features. We can print the last `x` to get an idea of the features (don't forget they've been scaled!) ``` pprint(x) ``` The above chunk of code is quite explicit but it's a bit verbose. The whole point of libraries such as `river` is to make life easier for users. Moreover there's too much space for users to mess up the order in which things are done, which increases the chance of there being target leakage. We'll now rewrite our model in a declarative fashion using a pipeline *à la sklearn*. ``` from river import compose def get_date_features(x): weekday = x['date'].weekday() return {'weekday': weekday, 'is_weekend': weekday in (5, 6)} model = compose.Pipeline( ('features', compose.TransformerUnion( ('date_features', compose.FuncTransformer(get_date_features)), ('last_7_mean', feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(7))), ('last_14_mean', feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(14))), ('last_21_mean', feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(21))) )), ('drop_non_features', compose.Discard('store_id', 'date', 'genre_name', 'area_name', 'latitude', 'longitude')), ('scale', preprocessing.StandardScaler()), ('lin_reg', linear_model.LinearRegression()) ) metric = metrics.MAE() for x, y in datasets.Restaurants(): # Make a prediction without using the target y_pred = model.predict_one(x) # Update the model using the target model.learn_one(x, y) # Update the metric using the out-of-fold prediction metric.update(y, y_pred) print(metric) ``` We use a `Pipeline` to arrange each step in a sequential order. A `TransformerUnion` is used to merge multiple feature extractors into a single transformer. The `for` loop is now much shorter and is thus easier to grok: we get the out-of-fold prediction, we fit the model, and finally we update the metric. This way of evaluating a model is typical of online learning, and so we put it wrapped it inside a function called `progressive_val_score` part of the `evaluate` module. We can use it to replace the `for` loop. 
``` from river import evaluate model = compose.Pipeline( ('features', compose.TransformerUnion( ('date_features', compose.FuncTransformer(get_date_features)), ('last_7_mean', feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(7))), ('last_14_mean', feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(14))), ('last_21_mean', feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(21))) )), ('drop_non_features', compose.Discard('store_id', 'date', 'genre_name', 'area_name', 'latitude', 'longitude')), ('scale', preprocessing.StandardScaler()), ('lin_reg', linear_model.LinearRegression()) ) evaluate.progressive_val_score(dataset=datasets.Restaurants(), model=model, metric=metrics.MAE()) ``` Notice that you couldn't have used the `progressive_val_score` method if you wrote the model in a procedural manner. Our code is getting shorter, but it's still a bit difficult on the eyes. Indeed there is a lot of boilerplate code associated with pipelines that can get tedious to write. However `river` has some special tricks up it's sleeve to save you from a lot of pain. The first trick is that the name of each step in the pipeline can be omitted. If no name is given for a step then `river` automatically infers one. ``` model = compose.Pipeline( compose.TransformerUnion( compose.FuncTransformer(get_date_features), feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(7)), feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(14)), feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(21)) ), compose.Discard('store_id', 'date', 'genre_name', 'area_name', 'latitude', 'longitude'), preprocessing.StandardScaler(), linear_model.LinearRegression() ) evaluate.progressive_val_score(datasets.Restaurants(), model, metrics.MAE()) ``` Under the hood a `Pipeline` inherits from `collections.OrderedDict`. Indeed this makes sense because if you think about it a `Pipeline` is simply a sequence of steps where each step has a name. The reason we mention this is because it means you can manipulate a `Pipeline` the same way you would manipulate an ordinary `dict`. For instance we can print the name of each step by using the `keys` method. ``` for name in model.steps: print(name) ``` The first step is a `FeatureUnion` and it's string representation contains the string representation of each of it's elements. Not having to write names saves up some time and space and is certainly less tedious. The next trick is that we can use mathematical operators to compose our pipeline. For example we can use the `+` operator to merge `Transformer`s into a `TransformerUnion`. ``` model = compose.Pipeline( compose.FuncTransformer(get_date_features) + \ feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(7)) + \ feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(14)) + \ feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(21)), compose.Discard('store_id', 'date', 'genre_name', 'area_name', 'latitude', 'longitude'), preprocessing.StandardScaler(), linear_model.LinearRegression() ) evaluate.progressive_val_score(datasets.Restaurants(), model, metrics.MAE()) ``` Likewhise we can use the `|` operator to assemble steps into a `Pipeline`. 
``` model = ( compose.FuncTransformer(get_date_features) + feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(7)) + feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(14)) + feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(21)) ) to_discard = ['store_id', 'date', 'genre_name', 'area_name', 'latitude', 'longitude'] model = model | compose.Discard(*to_discard) | preprocessing.StandardScaler() model |= linear_model.LinearRegression() evaluate.progressive_val_score(datasets.Restaurants(), model, metrics.MAE()) ``` Hopefully you'll agree that this is a powerful way to express machine learning pipelines. For some people this should be quite remeniscent of the UNIX pipe operator. One final trick we want to mention is that functions are automatically wrapped with a `FuncTransformer`, which can be quite handy. ``` model = get_date_features for n in [7, 14, 21]: model += feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(n)) model |= compose.Discard(*to_discard) model |= preprocessing.StandardScaler() model |= linear_model.LinearRegression() evaluate.progressive_val_score(datasets.Restaurants(), model, metrics.MAE()) ``` Naturally some may prefer the procedural style we first used because they find it easier to work with. It all depends on your style and you should use what you feel comfortable with. However we encourage you to use operators because we believe that this will increase the readability of your code, which is very important. To each their own! Before finishing we can take an interactive look at our pipeline. ``` model ```
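One last debugging trick worth mentioning: it can help to peek at the feature dictionary that actually reaches the final estimator. A hedged sketch, assuming the pipeline exposes `transform_one` to run just its transformer part (check your river version if this raises an error):

```
# Grab a single observation and run it through the feature-extraction/scaling steps only
x, y = next(iter(datasets.Restaurants()))
pprint(model.transform_one(x))
```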
# Tutorial - Time Series Forecasting - Autoregression (AR) The goal is to forecast time series with the Autoregression (AR) Approach. 1) JetRail Commuter, 2) Air Passengers, 3) Function Autoregression with Air Passengers, and 5) Function Autoregression with Wine Sales. References Jason Brownlee - https://machinelearningmastery.com/time-series-forecasting-methods-in-python-cheat-sheet/ ``` import pandas as pd import numpy as np import matplotlib.pyplot as plt import datetime import warnings warnings.filterwarnings("ignore") # Load File url = 'https://raw.githubusercontent.com/tristanga/Machine-Learning/master/Data/JetRail%20Avg%20Hourly%20Traffic%20Data%20-%202012-2013.csv' df = pd.read_csv(url) df.info() df.Datetime = pd.to_datetime(df.Datetime,format='%Y-%m-%d %H:%M') df.index = df.Datetime ``` # Autoregression (AR) Approach with JetRail The autoregression (AR) method models the next step in the sequence as a linear function of the observations at prior time steps. The notation for the model involves specifying the order of the model p as a parameter to the AR function, e.g. AR(p). For example, AR(1) is a first-order autoregression model. The method is suitable for univariate time series without trend and seasonal components. ``` #Split Train Test import math total_size=len(df) split = 10392 / 11856 train_size=math.floor(split*total_size) train=df.head(train_size) test=df.tail(len(df) -train_size) from statsmodels.tsa.ar_model import AR model = AR(train.Count) fit1 = model.fit() y_hat = test.copy() y_hat['AR'] = fit1.predict(start=len(train), end=len(train)+len(test)-1, dynamic=False) #Plotting data plt.figure(figsize=(12,8)) plt.plot(train.index, train['Count'], label='Train') plt.plot(test.index,test['Count'], label='Test') plt.plot(y_hat.index,y_hat['AR'], label='AR') plt.legend(loc='best') plt.title("Autoregression (AR) Forecast") plt.show() ``` # RMSE Calculation ``` from sklearn.metrics import mean_squared_error from math import sqrt rms = sqrt(mean_squared_error(test.Count, y_hat.AR)) print('RMSE = '+str(rms)) ``` # Autoregression (AR) Approach with Air Passagers ``` # Subsetting url = 'https://raw.githubusercontent.com/tristanga/Machine-Learning/master/Data/International%20Airline%20Passengers.csv' df = pd.read_csv(url, sep =";") df.info() df.Month = pd.to_datetime(df.Month,format='%Y-%m') df.index = df.Month #df.head() #Creating train and test set import math total_size=len(df) train_size=math.floor(0.7*total_size) #(70% Dataset) train=df.head(train_size) test=df.tail(len(df) -train_size) #train.info() #test.info() from statsmodels.tsa.ar_model import AR # Create prediction table y_hat = test.copy() model = AR(train['Passengers']) fit1 = model.fit() y_hat['AR'] = fit1.predict(start=len(train), end=len(train)+len(test)-1, dynamic=False) y_hat.describe() plt.figure(figsize=(12,8)) plt.plot(train.index, train['Passengers'], label='Train') plt.plot(test.index,test['Passengers'], label='Test') plt.plot(y_hat.index,y_hat['AR'], label='AR') plt.legend(loc='best') plt.title("Autoregression (AR)") plt.show() from sklearn.metrics import mean_squared_error from math import sqrt rms = sqrt(mean_squared_error(test.Passengers, y_hat.AR)) print('RMSE = '+str(rms)) ``` # Function Autoregression (AR) Approach with variables ``` def AR_forecasting(mydf,colval,split): #print(split) import math from statsmodels.tsa.api import Holt from sklearn.metrics import mean_squared_error from math import sqrt global y_hat, train, test total_size=len(mydf) train_size=math.floor(split*total_size) #(70% Dataset) 
train=mydf.head(train_size) test=mydf.tail(len(mydf) -train_size) y_hat = test.copy() model = AR(train[colval]) fit1 = model.fit() y_hat['AR'] = fit1.predict(start=len(train), end=len(train)+len(test)-1, dynamic=False) plt.figure(figsize=(12,8)) plt.plot(train.index, train[colval], label='Train') plt.plot(test.index,test[colval], label='Test') plt.plot(y_hat.index,y_hat['AR'], label='AR') plt.legend(loc='best') plt.title("Autoregression (AR) Forecast") plt.show() rms = sqrt(mean_squared_error(test[colval], y_hat.AR)) print('RMSE = '+str(rms)) AR_forecasting(df,'Passengers',0.7) ``` # Testing Function Autoregression (AR) Approach with Wine Dataset ``` url = 'https://raw.githubusercontent.com/tristanga/Data-Cleaning/master/Converting%20Time%20Series/Wine_Sales_R_Dataset.csv' df = pd.read_csv(url) df.info() df.Date = pd.to_datetime(df.Date,format='%Y-%m-%d') df.index = df.Date AR_forecasting(df,'Sales',0.7) ```
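A note on the API used above: in recent statsmodels releases, `statsmodels.tsa.ar_model.AR` is deprecated in favour of `AutoReg`. Here is a hedged sketch of the equivalent out-of-sample forecast (assuming statsmodels >= 0.11, the same `train`/`test`/`y_hat` globals set by `AR_forecasting`, and an illustrative lag order that you should tune for your data):

```
from statsmodels.tsa.ar_model import AutoReg

# Equivalent forecast with the newer AutoReg API; lags=12 is an assumption for monthly data
model = AutoReg(train['Sales'], lags=12)
fit1 = model.fit()
y_hat['AR'] = fit1.predict(start=len(train), end=len(train) + len(test) - 1, dynamic=False)
```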
# Tune TensorFlow Serving

## Guidelines

### CPU-only
If your system is CPU-only (no GPU), then consider the following values:

* `num_batch_threads` equal to the number of CPU cores
* `max_batch_size` to infinity (i.e. MAX_INT)
* `batch_timeout_micros` to 0.

Then experiment with `batch_timeout_micros` values in the 1-10 millisecond (1000-10000 microsecond) range, while keeping in mind that 0 may be the optimal value.

### GPU

If your model uses a GPU device for part or all of its inference work, consider the following values:

* `num_batch_threads` to the number of CPU cores.
* `batch_timeout_micros` to infinity while tuning `max_batch_size` to achieve the desired balance between throughput and average latency. Consider values in the hundreds or thousands.

For online serving, tune `batch_timeout_micros` to rein in tail latency. The idea is that batches normally get filled to `max_batch_size`, but occasionally, when there is a lapse in incoming requests, it makes sense to process whatever is in the queue even if it represents an underfull batch, rather than introduce a latency spike. The best value for `batch_timeout_micros` is typically a few milliseconds, and depends on your context and goals. Zero is a value to consider, as it works well for some workloads. For bulk-processing batch jobs, choose a large value, perhaps a few seconds, to ensure good throughput without waiting too long for the final (and likely underfull) batch.

## Close TensorFlow Serving and Load Test Terminals

## Open a Terminal through Jupyter Notebook
### (Menu Bar -> File -> New...)

![Jupyter Terminal](http://pipeline.io/img/jupyter-terminal.png)

## Enable Request Batching
## Start TensorFlow Serving in Separate Terminal

The params are as follows:
* `port` for TensorFlow Serving (int)
* `model_name` (anything)
* `model_base_path` (/path/to/model/ above all versioned sub-directories)
* `enable_batching` (true|false)

```
tensorflow_model_server \
  --port=9000 \
  --model_name=linear \
  --model_base_path=/root/models/linear_fully_optimized/cpu \
  --batching_parameters_file=/root/config/tf_serving/batch_config.txt \
  --enable_batching=true
```

### `batch_config.txt`
* `num_batch_threads` (usually equal to the number of CPU cores or a multiple thereof)
* `max_batch_size` (# of requests - start with infinity, tune down to find the right balance between latency and throughput)
* `batch_timeout_micros` (minimum batch window duration)

```
num_batch_threads { value: 100 }
max_batch_size { value: 99999999 }
batch_timeout_micros { value: 100000 }
```

## Start Load Test in the Terminal
```
loadtest high
```

Notice the throughput and avg/min/max latencies:
```
summary ... = 301.1/s Avg: 227 Min: 3 Max: 456 Err: 0 (0.00%)
```

## Modify Request Batching Parameters, Repeat Load Test

Gain intuition on the performance impact of changing the request batching parameters.
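For example, following the CPU-only guidance at the top of this notebook, a starting `batch_config.txt` for a hypothetical 8-core machine might look like the snippet below (8 is an assumption - substitute your actual core count - and `batch_timeout_micros` would then be swept through roughly 0-10000 while re-running the load test):

```
num_batch_threads { value: 8 }
max_batch_size { value: 99999999 }
batch_timeout_micros { value: 0 }
```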
# Bayesian Optimization [Bayesian optimization](https://en.wikipedia.org/wiki/Bayesian_optimization) is a powerful strategy for minimizing (or maximizing) objective functions that are costly to evaluate. It is an important component of [automated machine learning](https://en.wikipedia.org/wiki/Automated_machine_learning) toolboxes such as [auto-sklearn](https://automl.github.io/auto-sklearn/stable/), [auto-weka](http://www.cs.ubc.ca/labs/beta/Projects/autoweka/), and [scikit-optimize](https://scikit-optimize.github.io/), where Bayesian optimization is used to select model hyperparameters. Bayesian optimization is used for a wide range of other applications as well; as cataloged in the review [2], these include interactive user-interfaces, robotics, environmental monitoring, information extraction, combinatorial optimization, sensor networks, adaptive Monte Carlo, experimental design, and reinforcement learning. ## Problem Setup We are given a minimization problem $$ x^* = \text{arg}\min \ f(x), $$ where $f$ is a fixed objective function that we can evaluate pointwise. Here we assume that we do _not_ have access to the gradient of $f$. We also allow for the possibility that evaluations of $f$ are noisy. To solve the minimization problem, we will construct a sequence of points $\{x_n\}$ that converge to $x^*$. Since we implicitly assume that we have a fixed budget (say 100 evaluations), we do not expect to find the exact minumum $x^*$: the goal is to get the best approximate solution we can given the allocated budget. The Bayesian optimization strategy works as follows: 1. Place a prior on the objective function $f$. Each time we evaluate $f$ at a new point $x_n$, we update our model for $f(x)$. This model serves as a surrogate objective function and reflects our beliefs about $f$ (in particular it reflects our beliefs about where we expect $f(x)$ to be close to $f(x^*)$). Since we are being Bayesian, our beliefs are encoded in a posterior that allows us to systematically reason about the uncertainty of our model predictions. 2. Use the posterior to derive an "acquisition" function $\alpha(x)$ that is easy to evaluate and differentiate (so that optimizing $\alpha(x)$ is easy). In contrast to $f(x)$, we will generally evaluate $\alpha(x)$ at many points $x$, since doing so will be cheap. 3. Repeat until convergence: + Use the acquisition function to derive the next query point according to $$ x_{n+1} = \text{arg}\min \ \alpha(x). $$ + Evaluate $f(x_{n+1})$ and update the posterior. A good acquisition function should make use of the uncertainty encoded in the posterior to encourage a balance between exploration&mdash;querying points where we know little about $f$&mdash;and exploitation&mdash;querying points in regions we have good reason to think $x^*$ may lie. As the iterative procedure progresses our model for $f$ evolves and so does the acquisition function. If our model is good and we've chosen a reasonable acquisition function, we expect that the acquisition function will guide the query points $x_n$ towards $x^*$. In this tutorial, our model for $f$ will be a Gaussian process. In particular we will see how to use the [Gaussian Process module](http://docs.pyro.ai/en/0.3.1/contrib.gp.html) in Pyro to implement a simple Bayesian optimization procedure. 
``` import matplotlib.gridspec as gridspec import matplotlib.pyplot as plt import torch import torch.autograd as autograd import torch.optim as optim from torch.distributions import constraints, transform_to import pyro import pyro.contrib.gp as gp assert pyro.__version__.startswith('1.5.2') pyro.set_rng_seed(1) ``` ## Define an objective function For the purposes of demonstration, the objective function we are going to consider is the [Forrester et al. (2008) function](https://www.sfu.ca/~ssurjano/forretal08.html): $$f(x) = (6x-2)^2 \sin(12x-4), \quad x\in [0, 1].$$ This function has both a local minimum and a global minimum. The global minimum is at $x^* = 0.75725$. ``` def f(x): return (6 * x - 2)**2 * torch.sin(12 * x - 4) ``` Let's begin by plotting $f$. ``` x = torch.linspace(0, 1) plt.figure(figsize=(8, 4)) plt.plot(x.numpy(), f(x).numpy()) plt.show() ``` ## Setting a Gaussian Process prior [Gaussian processes](https://en.wikipedia.org/wiki/Gaussian_process) are a popular choice for a function priors due to their power and flexibility. The core of a Gaussian Process is its covariance function $k$, which governs the similarity of $f(x)$ for pairs of input points. Here we will use a Gaussian Process as our prior for the objective function $f$. Given inputs $X$ and the corresponding noisy observations $y$, the model takes the form $$f\sim\mathrm{MultivariateNormal}(0,k(X,X)),$$ $$y\sim f+\epsilon,$$ where $\epsilon$ is i.i.d. Gaussian noise and $k(X,X)$ is a covariance matrix whose entries are given by $k(x,x^\prime)$ for each pair of inputs $(x,x^\prime)$. We choose the [Matern](https://en.wikipedia.org/wiki/Mat%C3%A9rn_covariance_function) kernel with $\nu = \frac{5}{2}$ (as suggested in reference [1]). Note that the popular [RBF](https://en.wikipedia.org/wiki/Radial_basis_function_kernel) kernel, which is used in many regression tasks, results in a function prior whose samples are infinitely differentiable; this is probably an unrealistic assumption for most 'black-box' objective functions. ``` # initialize the model with four input points: 0.0, 0.33, 0.66, 1.0 X = torch.tensor([0.0, 0.33, 0.66, 1.0]) y = f(X) gpmodel = gp.models.GPRegression(X, y, gp.kernels.Matern52(input_dim=1), noise=torch.tensor(0.1), jitter=1.0e-4) ``` The following helper function `update_posterior` will take care of updating our `gpmodel` each time we evaluate $f$ at a new value $x$. ``` def update_posterior(x_new): y = f(x_new) # evaluate f at new point. X = torch.cat([gpmodel.X, x_new]) # incorporate new evaluation y = torch.cat([gpmodel.y, y]) gpmodel.set_data(X, y) # optimize the GP hyperparameters using Adam with lr=0.001 optimizer = torch.optim.Adam(gpmodel.parameters(), lr=0.001) gp.util.train(gpmodel, optimizer) ``` ## Define an acquisition function There are many reasonable options for the acquisition function (see references [1] and [2] for a list of popular choices and a discussion of their properties). Here we will use one that is 'simple to implement and interpret,' namely the 'Lower Confidence Bound' acquisition function. It is given by $$ \alpha(x) = \mu(x) - \kappa \sigma(x) $$ where $\mu(x)$ and $\sigma(x)$ are the mean and square root variance of the posterior at the point $x$, and the arbitrary constant $\kappa>0$ controls the trade-off between exploitation and exploration. This acquisition function will be minimized for choices of $x$ where either: i) $\mu(x)$ is small (exploitation); or ii) where $\sigma(x)$ is large (exploration). 
A large value of $\kappa$ means that we place more weight on exploration because we prefer candidates $x$ in areas of high uncertainty. A small value of $\kappa$ encourages exploitation because we prefer candidates $x$ that minimize $\mu(x)$, which is the mean of our surrogate objective function. We will use $\kappa=2$. ``` def lower_confidence_bound(x, kappa=2): mu, variance = gpmodel(x, full_cov=False, noiseless=False) sigma = variance.sqrt() return mu - kappa * sigma ``` The final component we need is a way to find (approximate) minimizing points $x_{\rm min}$ of the acquisition function. There are several ways to proceed, including gradient-based and non-gradient-based techniques. Here we will follow the gradient-based approach. One of the possible drawbacks of gradient descent methods is that the minimization algorithm can get stuck at a local minimum. In this tutorial, we adopt a (very) simple approach to address this issue: - First, we seed our minimization algorithm with 5 different values: i) one is chosen to be $x_{n-1}$, i.e. the candidate $x$ used in the previous step; and ii) four are chosen uniformly at random from the domain of the objective function. - We then run the minimization algorithm to approximate convergence for each seed value. - Finally, from the five candidate $x$s identified by the minimization algorithm, we select the one that minimizes the acquisition function. Please refer to reference [2] for a more detailed discussion of this problem in Bayesian Optimization. ``` def find_a_candidate(x_init, lower_bound=0, upper_bound=1): # transform x to an unconstrained domain constraint = constraints.interval(lower_bound, upper_bound) unconstrained_x_init = transform_to(constraint).inv(x_init) unconstrained_x = unconstrained_x_init.clone().detach().requires_grad_(True) minimizer = optim.LBFGS([unconstrained_x], line_search_fn='strong_wolfe') def closure(): minimizer.zero_grad() x = transform_to(constraint)(unconstrained_x) y = lower_confidence_bound(x) autograd.backward(unconstrained_x, autograd.grad(y, unconstrained_x)) return y minimizer.step(closure) # after finding a candidate in the unconstrained domain, # convert it back to original domain. x = transform_to(constraint)(unconstrained_x) return x.detach() ``` ## The inner loop of Bayesian Optimization With the various helper functions defined above, we can now encapsulate the main logic of a single step of Bayesian Optimization in the function `next_x`: ``` def next_x(lower_bound=0, upper_bound=1, num_candidates=5): candidates = [] values = [] x_init = gpmodel.X[-1:] for i in range(num_candidates): x = find_a_candidate(x_init, lower_bound, upper_bound) y = lower_confidence_bound(x) candidates.append(x) values.append(y) x_init = x.new_empty(1).uniform_(lower_bound, upper_bound) argmin = torch.min(torch.cat(values), dim=0)[1].item() return candidates[argmin] ``` ## Running the algorithm To illustrate how Bayesian Optimization works, we make a convenient plotting function that will help us visualize our algorithm's progress. 
``` def plot(gs, xmin, xlabel=None, with_title=True): xlabel = "xmin" if xlabel is None else "x{}".format(xlabel) Xnew = torch.linspace(-0.1, 1.1) ax1 = plt.subplot(gs[0]) ax1.plot(gpmodel.X.numpy(), gpmodel.y.numpy(), "kx") # plot all observed data with torch.no_grad(): loc, var = gpmodel(Xnew, full_cov=False, noiseless=False) sd = var.sqrt() ax1.plot(Xnew.numpy(), loc.numpy(), "r", lw=2) # plot predictive mean ax1.fill_between(Xnew.numpy(), loc.numpy() - 2*sd.numpy(), loc.numpy() + 2*sd.numpy(), color="C0", alpha=0.3) # plot uncertainty intervals ax1.set_xlim(-0.1, 1.1) ax1.set_title("Find {}".format(xlabel)) if with_title: ax1.set_ylabel("Gaussian Process Regression") ax2 = plt.subplot(gs[1]) with torch.no_grad(): # plot the acquisition function ax2.plot(Xnew.numpy(), lower_confidence_bound(Xnew).numpy()) # plot the new candidate point ax2.plot(xmin.numpy(), lower_confidence_bound(xmin).numpy(), "^", markersize=10, label="{} = {:.5f}".format(xlabel, xmin.item())) ax2.set_xlim(-0.1, 1.1) if with_title: ax2.set_ylabel("Acquisition Function") ax2.legend(loc=1) ``` Our surrogate model `gpmodel` already has 4 function evaluations at its disposal; however, we have yet to optimize the GP hyperparameters. So we do that first. Then in a loop we call the `next_x` and `update_posterior` functions repeatedly. The following plot illustrates how Gaussian Process posteriors and the corresponding acquisition functions change at each step in the algorith. Note how query points are chosen both for exploration and exploitation. ``` plt.figure(figsize=(12, 30)) outer_gs = gridspec.GridSpec(5, 2) optimizer = torch.optim.Adam(gpmodel.parameters(), lr=0.001) gp.util.train(gpmodel, optimizer) for i in range(8): xmin = next_x() gs = gridspec.GridSpecFromSubplotSpec(2, 1, subplot_spec=outer_gs[i]) plot(gs, xmin, xlabel=i+1, with_title=(i % 2 == 0)) update_posterior(xmin) plt.show() ``` Because we have assumed that our observations contain noise, it is improbable that we will find the exact minimizer of the function $f$. Still, with a relatively small budget of evaluations (8) we see that the algorithm has converged to very close to the global minimum at $x^* = 0.75725$. While this tutorial is only intended to be a brief introduction to Bayesian Optimization, we hope that we have been able to convey the basic underlying ideas. Consider watching the lecture by Nando de Freitas [3] for an excellent exposition of the basic theory. Finally, the reference paper [2] gives a review of recent research on Bayesian Optimization, together with many discussions about important technical details. ## References [1] `Practical bayesian optimization of machine learning algorithms`,<br />&nbsp;&nbsp;&nbsp;&nbsp; Jasper Snoek, Hugo Larochelle, and Ryan P. Adams [2] `Taking the human out of the loop: A review of bayesian optimization`,<br />&nbsp;&nbsp;&nbsp;&nbsp; Bobak Shahriari, Kevin Swersky, Ziyu Wang, Ryan P. Adams, and Nando De Freitas [3] [Machine learning - Bayesian optimization and multi-armed bandits](https://www.youtube.com/watch?v=vz3D36VXefI)
# Exploratory Data Analysis In this notebook, I have illuminated some of the strategies that one can use to explore the data and gain some insights about it. We will start from finding metadata about the data, to determining what techniques to use, to getting some important insights about the data. This is based on the IBM's Data Analysis with Python course on Coursera. ## The Problem The problem is to find the variables that impact the car price. For this problem, we will use a real-world dataset that details information about cars. The dataset used is an open-source dataset made available by Jeffrey C. Schlimmer. The one used in this notebook is hosted on the IBM Cloud. The dataset provides details of some cars. It includes properties like make, horse-power, price, wheel-type and so on. ## Loading data and finding the metadata Import libraries ``` import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns from scipy import stats %matplotlib inline ``` Load the data as pandas dataframe ``` path='https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-DA0101EN-SkillsNetwork/labs/Data%20files/automobileEDA.csv' df = pd.read_csv(path) df.head() ``` ### Metadata: The columns's types Finding column's types is an important step. It serves two purposes: 1. See if we need to convert some data. For example, price may be in string instead of numbers. This is very important as it could throw everything that we do afterwards off. 2. Find out what type of analysis we need to do with what column. After fixing the problems given above, the type of the object is often a great indicator of whether the data is categorical or numerical. This is important as it would determine what kind of exploratory analysis we can and want to do. To find out the type, we can simply use `.dtypes` property of the dataframe. Here's an example using the dataframe we loaded above. ``` df.dtypes ``` From the results above, we can see that we can roughly divide the types into two categories: numeric (int64 and float64) and object. Although object type can contain lots of things, it's used often to store string variables. A quick glance at the table tells us that there's no glaring errors in object types. Now we divide them into two categories: numerical variables and categorical variables. Numerical, as the name states, are the variables that hold numerical data. Categorical variables hold string that describes a certain property of the data (such as Audi as the make). Make a special note that our target variable, price, is numerical. So the relationships we would be exploring would be between numerical-and-numerical data and numerical-and-categorical data. ## Relationship between Numerical Data First we will explore the relationship between two numerical data and see if we can learn some insights out of it. In the beginning, it's helpful to get the correlation between the variables. For this, we can use the `corr()` method to find out the correlation between all the variables. Do note that the method finds out the Pearson correlation. Natively, pandas also support Spearman and the Kendall Tau correlation. You can also pass in a custom callable if you want. Check out the docs for more info. Here's how to do it with the dataframe that we have: ``` df.corr() ``` Note that the diagonal elements are always one; because correlation with itself is always one. 
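As a quick illustration of the other methods pandas supports natively, we can pass the `method` argument to get a rank-based correlation instead (same `df` as above):

```
# Spearman rank correlation captures monotonic, not just linear, relationships
df.corr(method='spearman')

# Kendall's Tau is available the same way:
# df.corr(method='kendall')
```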
Whichever method we choose, it seems somewhat daunting, and frankly unnecessary, to have such a big table full of correlations between variables we don't care about (say, bore and stroke). If we only want the correlations with price, the `corrwith()` method is helpful. Here's how to do it:

```
corr = df.corrwith(df['price'])

# Prettify
pd.DataFrame(data=corr.values, index=corr.index, columns=['Correlation'])
```

From the table above, we get some idea of what to expect each relationship to look like. As a refresher, Pearson correlation values range in [-1, 1], with -1 and 1 implying a perfect linear relationship and 0 implying none. A positive value implies a positive relationship (the value increases in response to an increment) and a negative value implies a negative relationship (the value decreases in response to an increment). The next step is to have a more visual look at the relationships.

### Visualizing Relationships

Continuous numerical variables are variables that may contain any value within some range. In pandas dtypes, continuous numerical variables can have the type "int64" or "float64". Scatterplots are a great way to visualize these variables. To take it further, it's better to use a scatter plot with a regression line, which also gives us a preliminary way to test our hypothesis about the relationship. In this notebook, we will use the `regplot()` function from the `seaborn` package. Below are some examples.

<h4>Positive linear relationship</h4>

Let's plot "engine-size" vs "price", since the correlation between them seems strong.

```
plt.figure(figsize=(5,5))
sns.regplot(x="engine-size", y="price", data=df);
```

As the engine size goes up, the price goes up. This indicates a decent positive direct correlation between these two variables, so engine size looks like a good predictor of price: the regression line is almost a perfect diagonal. We can also check this against the Pearson correlation we got above. It's 0.87, which makes sense.

Let's also try highway mpg, since its correlation with price is -0.7.

```
sns.regplot(x="highway-mpg", y="price", data=df);
```

The graph shows a decent negative relationship, so it could be a potential indicator. However, the relationship doesn't look exactly linear, given the curve of the points. Let's try a higher-order regression line.

```
sns.regplot(x="highway-mpg", y="price", data=df, order=2);
```

There. It seems much better.

### Weak Linear Relationship

Not all variables have to be correlated. Let's check out the graph of "peak-rpm" as a predictor variable for "price".

```
sns.regplot(x="peak-rpm", y="price", data=df);
```

From the graph, it's clear that peak rpm is a bad indicator of price: there seems to be no relationship between them, and the points look almost random. A quick check of the correlation value confirms this. The value is -0.1, which is very close to zero, implying no relationship. A low value can be misleading when the relationship is non-linear (for example, when the value goes down and then back up), but the graph confirms there is no such pattern here.

## Relationship between Numerical and Categorical data

Categorical variables, as their name implies, divide the data into certain categories. They essentially describe a 'characteristic' of the data unit, and are often selected from a small group of categories.
Although they commonly have "object" type, it's possible to have them has "int64" too (for example 'Level of happiness'). ### Visualizing with Boxplots Boxplots are a great way to visualize such relationships. Boxplots essentially show the spread of the data. You can use the `boxplot()` function in the seaborn package. Alternatively, you can use boxen or violin plots too. Here's an example by plotting relationship between "body-style" and "price" ``` sns.boxplot(x="body-style", y="price", data=df); ``` We can infer that there is likely to be no significant relationship as there is a decent over lap. Let's examine engine "engine-location" and "price" ``` sns.boxplot(x="engine-location", y="price", data=df); ``` Although there are a lot of outliers for the front, the distribution of price between these two engine-location categories is distinct enough to take engine-location as a potential good predictor of price. Let's examine "drive-wheels" and "price". ``` sns.boxplot(x="drive-wheels", y="price", data=df); ``` <p>Here we see that the distribution of price between the different drive-wheels categories differs; as such drive-wheels could potentially be a predictor of price.</p> ### Statistical method to checking for a significant realtionship - ANOVA Although visualisation is helpful, it does not give us a concrete and certain vision in this (and often in others) case. So, it follows that we would want a metric to evaluate it by. For correlation between categorical and continuous variable, there are various tests. ANOVA family of tests is a common one to use. The Analysis of Variance (ANOVA) is a statistical method used to test whether there are significant differences between the means of two or more groups. Do note that ANOVA is an _omnibus_ test statistic and it can't tell you what groups are the ones that have correlation among them. Only that there are at least two groups with a significant difference. In python, we can calculate the ANOVA statistic fairly easily using the `scipy.stats` module. The function `f_oneway()` calculates and returns: __F-test score__: ANOVA assumes the means of all groups are the same, calculates how much the actual means deviate from the assumption, and reports it as the F-test score. A larger score means there is a larger difference between the means. Although the degree of the 'largeneess' differs from data to data. You can use the F-table to find out the critical F-value by using the significance level and degrees of freedom for numerator and denominator and compare it with the calculated F-test score. __P-value__: P-value tells how statistically significant is our calculated score value. If the variables are strongly correlated, the expectation is to have ANOVA to return a sizeable F-test score and a small p-value. #### Drive Wheels Since ANOVA analyzes the difference between different groups of the same variable, the `groupby()` function will come in handy. With this, we can easily and concisely seperate the dataset into groups of drive-wheels. Essentially, the function allows us to split the dataset into groups and perform calculations on groups moving forward. Check out Grouping below for more explanation. Let's see if different types 'drive-wheels' impact 'price', we group the data. 
``` grouped_anova = df[['drive-wheels', 'price']].groupby(['drive-wheels']) grouped_anova.head(2) ``` We can obtain the values of the method group using the method `get_group()` ``` grouped_anova.get_group('4wd')['price'] ``` Finally, we use the function `f_oneway()` to obtain the F-test score and P-value. ``` # ANOVA f_val, p_val = stats.f_oneway(grouped_anova.get_group('fwd')['price'], grouped_anova.get_group('rwd')['price'], grouped_anova.get_group('4wd')['price']) print( "ANOVA results: F=", f_val, ", P =", p_val) ``` From the result, we can see that we have a large F-test score and a very small p-value. Still, we need to check if all three tested groups are highly correlated? #### Separately: fwd and rwd ``` f_val, p_val = stats.f_oneway(grouped_anova.get_group('fwd')['price'], grouped_anova.get_group('rwd')['price']) print( "ANOVA results: F=", f_val, ", P =", p_val ) ``` Seems like the result is significant and they are correlated. Let's examine the other groups #### 4wd and rwd ``` f_val, p_val = stats.f_oneway(grouped_anova.get_group('4wd')['price'], grouped_anova.get_group('rwd')['price']) print( "ANOVA results: F=", f_val, ", P =", p_val) ``` <h4>4wd and fwd</h4> ``` f_val, p_val = stats.f_oneway(grouped_anova.get_group('4wd')['price'], grouped_anova.get_group('fwd')['price']) print("ANOVA results: F=", f_val, ", P =", p_val) ``` ## Relationship between Categorical Data: Corrected Cramer's V A good way to test relation between two categorical variable is Corrected Cramer's V. **Note:** A p-value close to zero means that our variables are very unlikely to be completely unassociated in some population. However, this does not mean the variables are strongly associated; a weak association in a large sample size may also result in p = 0.000. **General Rule of Thumb:** * V ∈ [0.1,0.3]: weak association * V ∈ [0.4,0.5]: medium association * V > 0.5: strong association Here's how to do it in python: ```python import scipy.stats as ss import pandas as pd import numpy as np def cramers_corrected_stat(x, y): """ calculate Cramers V statistic for categorial-categorial association. uses correction from Bergsma and Wicher, Journal of the Korean Statistical Society 42 (2013): 323-328 """ result = -1 if len(x.value_counts()) == 1: print("First variable is constant") elif len(y.value_counts()) == 1: print("Second variable is constant") else: conf_matrix = pd.crosstab(x, y) if conf_matrix.shape[0] == 2: correct = False else: correct = True chi2, p = ss.chi2_contingency(conf_matrix, correction=correct)[0:2] n = sum(conf_matrix.sum()) phi2 = chi2/n r, k = conf_matrix.shape phi2corr = max(0, phi2 - ((k-1)*(r-1))/(n-1)) rcorr = r - ((r-1)**2)/(n-1) kcorr = k - ((k-1)**2)/(n-1) result = np.sqrt(phi2corr / min((kcorr-1), (rcorr-1))) return round(result, 6), round(p, 6) ``` ## Descriptive Statistical Analysis Although the insights gained above are significant, it's clear we need more work. Since we are exploring the data, performing some common and useful descriptive statistical analysis would be nice. However, there are a lot of them and would require a lot of work to do them by scratch. Fortunately, `pandas` library has a neat method that computes all of them for us. The `describe()` method, when invoked on a dataframe automatically computes basic statistics for all continuous variables. Do note that any NaN values are automatically skipped in these statistics. By default, it will show stats for numerical data. 
Here's what it will show: * Count of that variable * Mean * Standard Deviation (std) * Minimum Value * IQR (Interquartile Range: 25%, 50% and 75%) * Maximum Value If you want, you can change the percentiles too. Check out the docs for that. Here's how to do it in our dataframe: ``` df.describe() ``` To get the information about categorical variables, we need to specifically tell it to pandas to include them. For categorical variables, it shows: * Count * Unique values * The most common value or 'top' * Frequency of the 'top' ``` df.describe(include=['object']) ``` ### Value Counts Sometimes, we need to understand the distribution of the categorical data. This could mean understanding how many units of each characteristic/variable we have. `value_counts()` is a method in pandas that can help with it. If we use it with a series, it will give us the unique values and how many of them exist. _Caution:_ Using it with DataFrame works like count of unique rows by combination of all columns (like in SQL). This may or may not be what you want. For example, using it with drive-wheels and engine-location would give you the number of rows with unique pair of values. Here's an example of doing it with the drive-wheels column. ``` df['drive-wheels'].value_counts().to_frame() ``` `.to_frame()` method is added to make it into a dataframe, hence making it look better. You can play around and rename the column and index name if you want. We can repeat the above process for the variable 'engine-location'. ``` df['engine-location'].value_counts().to_frame() ``` Examining the value counts of the engine location would not be a good predictor variable for the price. This is because we only have three cars with a rear engine and 198 with an engine in the front, this result is skewed. Thus, we are not able to draw any conclusions about the engine location. ## Grouping Grouping is a useful technique to explore the data. With grouping, we can split data and apply various transforms. For example, we can find out the mean of different body styles. This would help us to have more insight into whether there's a relationsip between our target variable and the variable we are using grouping on. Although oftenly used on categorical data, grouping can also be used with numerical data by seperating them into categories. For example we might seperate car by prices into affordable and luxury groups. In pandas, we can use the `groupby()` method. Let's try it with the 'drive-wheels' variable. First we will find out how many unique values there are. We do that by `unique()` method. ``` df['drive-wheels'].unique() ``` If we want to know, on average, which type of drive wheel is most valuable, we can group "drive-wheels" and then average them. ``` df[['drive-wheels','body-style','price']].groupby(['drive-wheels']).mean() ``` From our data, it seems rear-wheel drive vehicles are, on average, the most expensive, while 4-wheel and front-wheel are approximately the same in price. It's also possible to group with multiple variables. For example, let's group by both 'drive-wheels' and 'body-style'. This groups the dataframe by the unique combinations 'drive-wheels' and 'body-style'. Let's store it in the variable `grouped_by_wheels_and_body`. ``` grouped_by_wheels_and_body = df[['drive-wheels','body-style','price']].groupby(['drive-wheels','body-style']).mean() grouped_by_wheels_and_body ``` Although incredibly useful, it's a little hard to read. It's better to convert it to a pivot table. 
A pivot table is like an Excel spreadsheet, with one variable along the column and another along the row. There are various ways to do so. A way to do that is to use the method `pivot()`. However, with groups like the one above (multi-index), one can simply call the `unstack()` method. ``` grouped_by_wheels_and_body = grouped_by_wheels_and_body.unstack() grouped_by_wheels_and_body ``` Often, we won't have data for some of the pivot cells. Often, it's filled with the value 0, but any other value could potentially be used as well. This could be mean or some other flag. ``` grouped_by_wheels_and_body.fillna(0) ``` Let's do the same for body-style only ``` df[['price', 'body-style']].groupby('body-style').mean() ``` ### Visualizing Groups Heatmaps are a great way to visualize groups. They can show relationships clearly in this case. Do note that you need to be careful with the color schemes. Since chosing appropriate colorscheme is not only appropriate for your 'story' of the data, it is also important since it can impact the perception of the data. [This resource](https://matplotlib.org/tutorials/colors/colormaps.html) gives a great idea on what to choose as a color scheme and when it's appropriate. It also has samples of the scheme below too for a quick preview along with when should one use them. Here's an example of using it with the pivot table we created with the `seaborn` package. ``` sns.heatmap(grouped_by_wheels_and_body, cmap="Blues"); ``` This heatmap plots the target variable (price) proportional to colour with respect to the variables 'drive-wheel' and 'body-style' in the vertical and horizontal axis respectively. This allows us to visualize how the price is related to 'drive-wheel' and 'body-style'. ## Correlation and Causation Correlation and causation are terms that are used often and confused with each other--or worst considered to imply the other. Here's a quick overview of them: __Correlation__: The degree of association (or resemblance) of variables with each other. __Causation__: A relationship of cause and effect between variables. It is important to know the difference between these two. Note that correlation does __not__ imply causation. Determining correlation is much simpler. We can almost always use methods such as Pearson Correlation, ANOVA method, and graphs. Determining causation may require independent experimentation. ### Pearson Correlation Described earlier, Pearson Correlation is great way to measure linear dependence between two variables. It's also the default method in the method corr. ``` df.corr() ``` ### Cramer's V Cramer's V is a great method to calculate the relationship between two categorical variables. Read above about Cramer's V to get a better estimate. **General Rule of Thumb:** * V ∈ [0.1,0.3]: weak association * V ∈ [0.4,0.5]: medium association * V > 0.5: strong association ### ANOVA Method As discussed previously, ANOVA method is great to conduct analysis to determine whether there's a significant realtionship between categorical and continous variables. Check out the ANOVA section above for more details. Now, just knowing the correlation statistics is not enough. We also need to know whether the relationship is statistically significant or not. We can use p-value for that. ### P-value In very simple terms, p-value checks the probability whether the result we have could be just a random chance. For example, for a p-value of 0.05, we are certain that our results are insignificant about 5% of time and are significant 95% of the time. 
It's recommended to define a tolerance level of the p-value beforehand. Here's some common interpretations of p-value: * The p-value is $<$ 0.001: A strong evidence that the correlation is significant. * The p-value is $<$ 0.05: A moderate evidence that the correlation is significant. * The p-value is $<$ 0.1: A weak evidence that the correlation is significant. * The p-value is $>$ 0.1: No evidence that the correlation is significant. We can obtain this information using `stats` module in the `scipy` library. Let's calculate it for wheel-base vs price ``` pearson_coef, p_value = stats.pearsonr(df['wheel-base'], df['price']) print("The Pearson Correlation Coefficient is", pearson_coef, " with a P-value of P =", p_value) ``` Since the p-value is $<$ 0.001, the correlation between wheel-base and price is statistically significant, although the linear relationship isn't extremely strong (~0.585) Let's try one more example: horsepower vs price. ``` pearson_coef, p_value = stats.pearsonr(df['horsepower'], df['price']) print("The Pearson Correlation Coefficient is", pearson_coef, " with a P-value of P = ", p_value) ``` Since the p-value is $<$ 0.001, the correlation between horsepower and price is statistically significant, and the linear relationship is quite strong (~0.809, close to 1). ### Conclusion: Important Variables We now have a better idea of what our data looks like and which variables are important to take into account when predicting the car price. Some more analysis later, we can find that the important variables are: Continuous numerical variables: * Length * Width * Curb-weight * Engine-size * Horsepower * City-mpg * Highway-mpg * Wheel-base * Bore Categorical variables: * Drive-wheels If needed, we can now mone onto into building machine learning models as we now know what to feed our model. P.S. [This medium article](https://medium.com/@outside2SDs/an-overview-of-correlation-measures-between-categorical-and-continuous-variables-4c7f85610365#:~:text=A%20simple%20approach%20could%20be,variance%20of%20the%20continuous%20variable.&text=If%20the%20variables%20have%20no,similar%20to%20the%20original%20variance) is a great resource that talks about various ways of correlation between categorical and continous variables. ## Author By Abhinav Garg
# Deep learning for Natural Language Processing * Simple text representations, bag of words * Word embedding and... not just another word2vec this time * 1-dimensional convolutions for text * Aggregating several data sources "the hard way" * Solving ~somewhat~ real ML problem with ~almost~ end-to-end deep learning Special thanks to Irina Golzmann for help with technical part. # NLTK You will require nltk v3.2 to solve this assignment __It is really important that the version is 3.2, otherwize russian tokenizer might not work__ Install/update * `sudo pip install --upgrade nltk==3.2` * If you don't remember when was the last pip upgrade, `sudo pip install --upgrade pip` If for some reason you can't or won't switch to nltk v3.2, just make sure that russian words are tokenized properly with RegeExpTokenizer. ``` import pandas as pd import numpy as np import matplotlib.pyplot as plt %matplotlib inline ``` # Dataset Ex-kaggle-competition on job salary prediction ![img](http://www.kdnuggets.com/images/cartoon-data-scientist-salary-negotiation.gif) Original conest - https://www.kaggle.com/c/job-salary-prediction ### Download Go [here](https://www.kaggle.com/c/job-salary-prediction) and download as usual CSC cloud: data should already be here somewhere, just poke the nearest instructor. # What's inside Different kinds of features: * 2 text fields - title and description * Categorical fields - contract type, location Only 1 binary target whether or not such advertisement contains prohibited materials * criminal, misleading, human reproduction-related, etc * diving into the data may result in prolonged sleep disorders ``` df = pd.read_csv("./Train_rev1.csv",sep=',') print df.shape, df.SalaryNormalized.mean() df[:5] ``` # Tokenizing First, we create a dictionary of all existing words. Assign each word a number - it's Id ``` from nltk.tokenize import RegexpTokenizer from collections import Counter,defaultdict tokenizer = RegexpTokenizer(r"\w+") #Dictionary of tokens token_counts = Counter() #All texts all_texts = np.hstack([df.FullDescription.values,df.Title.values]) #Compute token frequencies for s in all_texts: if type(s) is not str: continue s = s.decode('utf8').lower() tokens = tokenizer.tokenize(s) for token in tokens: token_counts[token] +=1 ``` ### Remove rare tokens We are unlikely to make use of words that are only seen a few times throughout the corpora. Again, if you want to beat Kaggle competition metrics, consider doing something better. ``` #Word frequency distribution, just for kicks _=plt.hist(token_counts.values(),range=[0,50],bins=50) #Select only the tokens that had at least 10 occurences in the corpora. #Use token_counts. min_count = 5 tokens = <tokens from token_counts keys that had at least min_count occurences throughout the dataset> token_to_id = {t:i+1 for i,t in enumerate(tokens)} null_token = "NULL" token_to_id[null_token] = 0 print "# Tokens:",len(token_to_id) if len(token_to_id) < 10000: print "Alarm! It seems like there are too few tokens. Make sure you updated NLTK and applied correct thresholds -- unless you now what you're doing, ofc" if len(token_to_id) > 100000: print "Alarm! Too many tokens. You might have messed up when pruning rare ones -- unless you know what you're doin' ofc" ``` ### Replace words with IDs Set a maximum length for titles and descriptions. * If string is longer that that limit - crop it, if less - pad with zeros. 
* Thus we obtain a matrix of size [n_samples]x[max_length] * Element at i,j - is an identifier of word j within sample i ``` def vectorize(strings, token_to_id, max_len=150): token_matrix = [] for s in strings: if type(s) is not str: token_matrix.append([0]*max_len) continue s = s.decode('utf8').lower() tokens = tokenizer.tokenize(s) token_ids = map(lambda token: token_to_id.get(token,0), tokens)[:max_len] token_ids += [0]*(max_len - len(token_ids)) token_matrix.append(token_ids) return np.array(token_matrix) desc_tokens = vectorize(df.FullDescription.values,token_to_id,max_len = 500) title_tokens = vectorize(df.Title.values,token_to_id,max_len = 15) ``` ### Data format examples ``` print "Matrix size:",title_tokens.shape for title, tokens in zip(df.Title.values[:3],title_tokens[:3]): print title,'->', tokens[:10],'...' ``` __ As you can see, our preprocessing is somewhat crude. Let us see if that is enough for our network __ # Non-sequences Some data features are categorical data. E.g. location, contract type, company They require a separate preprocessing step. ``` #One-hot-encoded category and subcategory from sklearn.feature_extraction import DictVectorizer categories = [] data_cat = df[["Category","LocationNormalized","ContractType","ContractTime"]] categories = [A list of dictionaries {"category":category_name, "subcategory":subcategory_name} for each data sample] vectorizer = DictVectorizer(sparse=False) df_non_text = vectorizer.fit_transform(categories) df_non_text = pd.DataFrame(df_non_text,columns=vectorizer.feature_names_) ``` # Split data into training and test ``` #Target variable - whether or not sample contains prohibited material target = df.is_blocked.values.astype('int32') #Preprocessed titles title_tokens = title_tokens.astype('int32') #Preprocessed tokens desc_tokens = desc_tokens.astype('int32') #Non-sequences df_non_text = df_non_text.astype('float32') #Split into training and test set. #Difficulty selector: #Easy: split randomly #Medium: split by companies, make sure no company is in both train and test set #Hard: do whatever you want, but score yourself using kaggle private leaderboard title_tr,title_ts,desc_tr,desc_ts,nontext_tr,nontext_ts,target_tr,target_ts = <define_these_variables> ``` ## Save preprocessed data [optional] * The next tab can be used to stash all the essential data matrices and get rid of the rest of the data. * Highly recommended if you have less than 1.5GB RAM left * To do that, you need to first run it with save_prepared_data=True, then restart the notebook and only run this tab with read_prepared_data=True. ``` save_prepared_data = True #save read_prepared_data = False #load #but not both at once assert not (save_prepared_data and read_prepared_data) if save_prepared_data: print "Saving preprocessed data (may take up to 3 minutes)" import pickle with open("preprocessed_data.pcl",'w') as fout: pickle.dump(data_tuple,fout) with open("token_to_id.pcl",'w') as fout: pickle.dump(token_to_id,fout) print "done" elif read_prepared_data: print "Reading saved data..." 
import pickle with open("preprocessed_data.pcl",'r') as fin: data_tuple = pickle.load(fin) title_tr,title_ts,desc_tr,desc_ts,nontext_tr,nontext_ts,target_tr,target_ts = data_tuple with open("token_to_id.pcl",'r') as fin: token_to_id = pickle.load(fin) #Re-importing libraries to allow staring noteboook from here import pandas as pd import numpy as np import matplotlib.pyplot as plt %matplotlib inline print "done" ``` # Train the monster Since we have several data sources, our neural network may differ from what you used to work with. * Separate input for titles * cnn+global max or RNN * Separate input for description * cnn+global max or RNN * Separate input for categorical features * Few dense layers + some black magic if you want These three inputs must be blended somehow - concatenated or added. * Output: a simple regression task ``` #libraries import lasagne from theano import tensor as T import theano #3 inputs and a refere output title_token_ids = T.matrix("title_token_ids",dtype='int32') desc_token_ids = T.matrix("desc_token_ids",dtype='int32') categories = T.matrix("categories",dtype='float32') target_y = T.vector("is_blocked",dtype='float32') ``` # NN architecture ``` title_inp = lasagne.layers.InputLayer((None,title_tr.shape[1]),input_var=title_token_ids) descr_inp = lasagne.layers.InputLayer((None,desc_tr.shape[1]),input_var=desc_token_ids) cat_inp = lasagne.layers.InputLayer((None,nontext_tr.shape[1]), input_var=categories) # Descriptions #word-wise embedding. We recommend to start from some 64 and improving after you are certain it works. descr_nn = lasagne.layers.EmbeddingLayer(descr_inp, input_size=len(token_to_id)+1, output_size=?) #reshape from [batch, time, unit] to [batch,unit,time] to allow 1d convolution over time descr_nn = lasagne.layers.DimshuffleLayer(descr_nn, [0,2,1]) descr_nn = 1D convolution over embedding, maybe several ones in a stack #pool over time descr_nn = lasagne.layers.GlobalPoolLayer(descr_nn,T.max) #Possible improvements here are adding several parallel convs with different filter sizes or stacking them the usual way #1dconv -> 1d max pool ->1dconv and finally global pool # Titles title_nn = <Process titles somehow (title_inp)> # Non-sequences cat_nn = <Process non-sequences(cat_inp)> nn = <merge three layers into one (e.g. lasagne.layers.concat) > nn = lasagne.layers.DenseLayer(nn,your_lucky_number) nn = lasagne.layers.DropoutLayer(nn,p=maybe_use_me) nn = lasagne.layers.DenseLayer(nn,1,nonlinearity=lasagne.nonlinearities.linear) ``` # Loss function * The standard way: * prediction * loss * updates * training and evaluation functions ``` #All trainable params weights = lasagne.layers.get_all_params(nn,trainable=True) #Simple NN prediction prediction = lasagne.layers.get_output(nn)[:,0] #loss function loss = lasagne.objectives.squared_error(prediction,target_y).mean() #Weight optimization step updates = <your favorite optimizer> ``` ### Determinitic prediction * In case we use stochastic elements, e.g. dropout or noize * Compile a separate set of functions with deterministic prediction (deterministic = True) * Unless you think there's no neet for dropout there ofc. Btw is there? 
``` #deterministic version det_prediction = lasagne.layers.get_output(nn,deterministic=True)[:,0] #equivalent loss function det_loss = <an excercise in copy-pasting and editing> ``` ### Coffee-lation ``` train_fun = theano.function([desc_token_ids,title_token_ids,categories,target_y],[loss,prediction],updates = updates) eval_fun = theano.function([desc_token_ids,title_token_ids,categories,target_y],[det_loss,det_prediction]) ``` # Training loop * The regular way with loops over minibatches * Since the dataset is huge, we define epoch as some fixed amount of samples isntead of all dataset ``` # Out good old minibatch iterator now supports arbitrary amount of arrays (X,y,z) def iterate_minibatches(*arrays,**kwargs): batchsize=kwargs.get("batchsize",100) shuffle = kwargs.get("shuffle",True) if shuffle: indices = np.arange(len(arrays[0])) np.random.shuffle(indices) for start_idx in range(0, len(arrays[0]) - batchsize + 1, batchsize): if shuffle: excerpt = indices[start_idx:start_idx + batchsize] else: excerpt = slice(start_idx, start_idx + batchsize) yield [arr[excerpt] for arr in arrays] ``` ### Tweaking guide * batch_size - how many samples are processed per function call * optimization gets slower, but more stable, as you increase it. * May consider increasing it halfway through training * minibatches_per_epoch - max amount of minibatches per epoch * Does not affect training. Lesser value means more frequent and less stable printing * Setting it to less than 10 is only meaningfull if you want to make sure your NN does not break down after one epoch * n_epochs - total amount of epochs to train for * `n_epochs = 10**10` and manual interrupting is still an option Tips: * With small minibatches_per_epoch, network quality may jump up and down for several epochs * Plotting metrics over training time may be a good way to analyze which architectures work better. * Once you are sure your network aint gonna crash, it's worth letting it train for a few hours of an average laptop's time to see it's true potential ``` from sklearn.metrics import mean_squared_error,mean_absolute_error n_epochs = 100 batch_size = 100 minibatches_per_epoch = 100 for i in range(n_epochs): #training epoch_y_true = [] epoch_y_pred = [] b_c = b_loss = 0 for j, (b_desc,b_title,b_cat, b_y) in enumerate( iterate_minibatches(desc_tr,title_tr,nontext_tr,target_tr,batchsize=batch_size,shuffle=True)): if j > minibatches_per_epoch:break loss,pred_probas = train_fun(b_desc,b_title,b_cat,b_y) b_loss += loss b_c +=1 epoch_y_true.append(b_y) epoch_y_pred.append(pred_probas) epoch_y_true = np.concatenate(epoch_y_true) epoch_y_pred = np.concatenate(epoch_y_pred) print "Train:" print '\tloss:',b_loss/b_c print '\trmse:',mean_squared_error(epoch_y_true,epoch_y_pred)**.5 print '\tmae:',mean_absolute_error(epoch_y_true,epoch_y_pred) #evaluation epoch_y_true = [] epoch_y_pred = [] b_c = b_loss = 0 for j, (b_desc,b_title,b_cat, b_y) in enumerate( iterate_minibatches(desc_ts,title_ts,nontext_ts,target_ts,batchsize=batch_size,shuffle=True)): if j > minibatches_per_epoch: break loss,pred_probas = eval_fun(b_desc,b_title,b_cat,b_y) b_loss += loss b_c +=1 epoch_y_true.append(b_y) epoch_y_pred.append(pred_probas) epoch_y_true = np.concatenate(epoch_y_true) epoch_y_pred = np.concatenate(epoch_y_pred) print "Val:" print '\tloss:',b_loss/b_c print '\trmse:',mean_squared_error(epoch_y_true,epoch_y_pred)**.5 print '\tmae:',mean_absolute_error(epoch_y_true,epoch_y_pred) print "If you are seeing this, it's time to backup your notebook. 
No, really, 'tis too easy to mess up everything without noticing. " ``` # Final evaluation Evaluate network over the entire test set ``` #evaluation epoch_y_true = [] epoch_y_pred = [] b_c = b_loss = 0 for j, (b_desc,b_title,b_cat, b_y) in enumerate( iterate_minibatches(desc_ts,title_ts,nontext_ts,target_ts,batchsize=batch_size,shuffle=True)): loss,pred_probas = eval_fun(b_desc,b_title,b_cat,b_y) b_loss += loss b_c +=1 epoch_y_true.append(b_y) epoch_y_pred.append(pred_probas) epoch_y_true = np.concatenate(epoch_y_true) epoch_y_pred = np.concatenate(epoch_y_pred) print "Scores:" print '\tloss:',b_loss/b_c print '\trmse:',mean_squared_error(epoch_y_true,epoch_y_pred)**.5 print '\tmae:',mean_absolute_error(epoch_y_true,epoch_y_pred) ``` Now tune the monster for least MSE you can get! # Next time in our show * Recurrent neural networks * How to apply them to practical problems? * What else can they do? * Why so much hype around LSTM? * Stay tuned!
# Regarding this Notebook

This is a replication of the original analysis performed in the paper by [Waade & Enevoldsen 2020](missing). This replication script will not be updated, as it is intended for reproducibility. Any deviations from the paper are marked in bold for transparency. Footnotes and internal documentation references are removed from this example to avoid confusion.

---

# 2.2 Using tomsup

One of the advantages of computational models of cognitive processes is that the implications of the model can be worked out by simulating the model's behavior in a variety of situations. tomsup, in particular, allows the user to test the k-ToM model as it plays a wide set of game-theoretical situations (e.g. Matching Pennies or Prisoner's Dilemma), in interaction with a variety of different agents (e.g. other k-ToM or less sophisticated agents), within different possible settings (e.g. repeated interactions with the same opponent, or round robin tournaments).

In order to better understand the setup of the tomsup package, we start with the case of two simple agents interacting, followed by a simple example using k-ToM agents, which will also illustrate how one might implement tomsup in an experiment. Lastly, we will show how to run a simulation using multiple agents as well as how to plot the evolving internal states of a k-ToM agent.

In this simple scenario two agents are playing the Matching Pennies game. One agent hides a penny in one hand: let's say it chooses 0 for hiding in the left hand, and 1 for the right. The other agent has to guess where the penny is. If the second agent guesses correctly (chooses the same hand as the first), it wins and the first loses. In other words, the first agent wants to choose the hand that the second will not choose, and the second wants to choose the hand that the first chooses. In this example, one of the agents implements the Random Bias strategy (e.g. has a 60 percent probability of choosing right over left), while the other implements a classic Q-learning strategy (a model-free reinforcement learning mechanism updating the expected reward of choosing a specific option on a trial-by-trial basis). The full list of strategies already implemented in tomsup is accessible using the function `valid_agents()`.

The user first has to install the tomsup package, developed using Python 3.6 (Van Rossum & Drake, 2009). The package can be downloaded and installed using pip:

```pip3 install tomsup```

**However, in this notebook we will assume the user simply downloaded the git. Feel free to skip the next code chunk if that is not the case.**

```
# assuming you are in the github folder change the path - not relevant if tomsup is installed via pip
import os

os.chdir("..")  # go out of the tutorials folder
```

Both approaches will also install the required dependencies. Now tomsup can be imported into Python as follows:

```
import tomsup as ts
```

We will also set an arbitrary seed to ensure reproducibility:

```
import random
import numpy as np

np.random.seed(1995)
random.seed(1995)  # The year of birth of the first author
```

First we need to set up the Matching Pennies game. As different games are defined by different payoff matrices, we set up the game by creating the appropriate payoff matrix using the `PayoffMatrix` class.
```
# initiate the competitive matching pennies game
penny = ts.PayoffMatrix(name="penny_competitive")

# print the payoff matrix
print(penny)
```

The Matching Pennies game is a zero-sum game, meaning that for one agent to get a reward, the opponent has to lose. Agents thus have to predict their opponents' behavior, which is ideal for investigating ToM. Note that to explore other payoff matrices included in the package, or to learn how to specify a custom payoff matrix, the user can type the `help(ts.PayoffMatrix)` command.

Then we create the first of the two competing agents:

```
# define the random bias agent, which chooses 1 70 percent of the time, and call the agent "jung"
jung = ts.RB(bias=0.7)

# Examine Agent
print(f"jung is a class of type: {type(jung)}")
if isinstance(jung, ts.Agent):
    print(f"but jung is also an instance of the parent class ts.Agent")

# let us have Jung make a choice
choice = jung.compete()

print(f"jung chose {choice} and its probability for choosing 1 was {jung.get_bias()}.")
```

Note that it is possible to create one or more agents simultaneously using the convenient `create_agents()` and passing any starting parameters to it in the form of a dictionary.

```
# create a reinforcement learning agent
skinner = ts.create_agents(agents="QL", start_params={"save_history": True})
```

Now that both agents are created, we have them play against each other.

```
# have the agents compete for 30 rounds
results = ts.compete(jung, skinner, p_matrix=penny, n_rounds=30)

# examine results
print(results.head())  # inspect the first 5 rows of the dataframe
```

**Note: you can remove the print() to get a nicer printout of the dataframe**

```
results.head()  # inspect the first 5 rows of the dataframe
```

The data frame stores the choice of each agent as well as their resulting payoff. Simply summing the payoff columns would determine the winner.

## k-ToM

Here we will present some simple examples of the k-ToM agent. For a more in-depth description we recommend checking the expanded introduction on the [Github repository](https://github.com/KennethEnevoldsen/tomsup/blob/master/tutorials/introduction_to_tom.ipynb).

We will start off by creating a 1-ToM with default priors and `save_history=True` to examine its workings. Notice that `save_history` is turned off by default to save memory, which is especially a concern for ToM agents with a high sophistication level.

```
# Creating a simple 1-ToM with default parameters
tom_1 = ts.TOM(level=1, dilution=None, save_history=True)

# Extract the parameters
tom_1.print_parameters()
```

Note that k-ToM agents by default use agnostic starting beliefs. These can be shown in detail and specified as desired, as shown in the **appendix in the paper**. To increase the agent's tendency to choose 1, we could simply increase its bias. Similarly, if we want the agent to behave in a more deterministic fashion, we can decrease the behavioural temperature. When the parameter values are set, we can play the agent against an opponent using the `.compete()` method, where `agent` denotes the agent's position in the payoff matrix (0 or 1) and `op_choice` denotes the choice of the opponent during the previous round.
```
tom_2 = ts.TOM(
    level=2,
    volatility=-2,
    b_temp=-2,  # more deterministic
    bias=0,
    dilution=None,
    save_history=True,
)

choice = tom_2.compete(p_matrix=penny, agent=0, op_choice=None)
print("tom_2 chose:", choice)
```

For simplicity, the user is recommended to have the 1-ToM and the 2-ToM agents compete using the previously presented `ts.compete()` function. However, to make the process more transparent, in the following we create a simple for-loop:

```
tom_2.reset()  # reset before start

prev_choice_1tom = None
prev_choice_2tom = None
for trial in range(1, 4):
    # note that op_choice is the opponent's choice on the previous turn
    # and that agent is the agent you respond to in the payoff matrix
    choice_1 = tom_1.compete(p_matrix=penny, agent=0, op_choice=prev_choice_2tom)
    choice_2 = tom_2.compete(p_matrix=penny, agent=1, op_choice=prev_choice_1tom)

    # update previous choices
    prev_choice_1tom = choice_1
    prev_choice_2tom = choice_2

    print(
        f"Round {trial}",
        f" 1-ToM choose {choice_1}",
        f" 2-ToM choose {choice_2}",
        sep="\n",
    )
```

A for-loop like this can be used to implement k-ToM in an experimental setting by replacing one agent with the behavior of a participant. Examples of such implementations (interfacing with PsychoPy) are available in the [documentation](https://github.com/KennethEnevoldsen/tomsup/tree/master/tutorials/psychopy_experiment).

```
tom_2.print_internal(
    keys=["p_k", "p_op"],  # print these two states
    level=[0, 1],  # for the agent's simulated opponents, 0-ToM and 1-ToM
)
```

For instance, we can note that the estimate of the opponent's sophistication level (`p_k`) slightly favors a 1-ToM as opposed to a 0-ToM, and that the average probability of the opponent choosing one (`p_op`) slightly favors 1 (which was indeed the option the opponent chose). These estimates are quite uncertain due to the few rounds played. More information on how to interpret the internal states of the ToM agent is available in the documentation of the package, e.g. by using the help function `help(tom_2.print_internal)`.

## Multiple Agents and Visualizing Results

The above syntax is useful for small setups. However, the user might want to build larger simulations involving several agents, to simulate data for an experimental setup or to test underlying assumptions. The package provides syntax for quickly iterating over multiple agents, rounds and even simulations. We will here show a quick example along with how to visualize the results and internal states of ToM agents.

```
# Create a list of agents
agents = ["RB", "QL", "WSLS", "1-TOM", "2-TOM"]
# And set their starting parameters. An empty dictionary denotes default values
start_params = [{"bias": 0.7}, {"learning_rate": 0.5}, {}, {}, {}]

group = ts.create_agents(agents, start_params)  # create a group of agents

# Specify the environment
# round_robin e.g. each agent will play against all other agents
group.set_env(env="round_robin")

# Finally, we make the group compete 20 simulations of 30 rounds
results = group.compete(p_matrix=penny, n_rounds=30, n_sim=20, save_history=True)
```

Following the simulation, a data frame can be extracted as before, with additional columns reporting the simulation number and the competing agent pair (`agent0` and `agent1`); if `save_history=True`, it will also add two columns denoting the internal states of each agent, e.g. estimates and expectations at each trial.
``` res = group.get_results() print(res.head(1)) # print the first row ``` **Again, removing the print statement gives you a more readable output** ``` res.head(1) ``` ** to allow other authors to examine these results we have also saved the results to a new lines delimited .ndjson** ``` res.to_json("tutorials/paper.ndjson", orient="records", lines=True) ``` The package also provides convenient functions for plotting the agent's choices and performance. > for nicer plots we will increase the figure size using the following code. This is excluded from the paper for simplicity ``` import matplotlib.pyplot as plt # Set figure size plt.rcParams["figure.figsize"] = [10, 10] # plot a heatmap of the rewards for all agent in the tournament group.plot_heatmap(cmap="RdBu_r") plt.rcParams["figure.figsize"] = [5, 5] # plot the choices of the 1-ToM agent when competing against the WSLS agent group.plot_choice(agent0="WSLS", agent1="1-TOM", agent=1) # plot the choices of the 1-ToM agent when competing against the WSLS agent group.plot_choice(agent0="RB", agent1="1-TOM", agent=1) # plot the score of the 1-ToM agent when competing against the WSLS agent group.plot_score(agent0="WSLS", agent1="1-TOM", agent=1) # plot the score of the 2-ToM agent when competing against the WSLS agent group.plot_score(agent0="WSLS", agent1="2-TOM", agent=1) ``` As seen in the heatmap we see that the k-ToM model compares favorably against simpler agents such as the QL. Furthermore notice that the 1-ToM and 2-ToM compares especially favorably against the WSLS agent as this agent act as a deterministic 0-ToM. Similarly, we see that the 2-ToM agent incurs a cost for being more complex by being less able to take advantage of the deterministic nature of WSLS. We can examine this further in the figures, where we see that the 1-ToM is almost perfectly able to predict the behaviour of the WSLS agent after a turn 5 across simulations while the 2-ToM, take longer to estimate the behaviour. The figures also show that 1-ToM differs in behavioural patterns figures when playing against a RB agents showing a bias estimation behaviour, while when playing against the WSLS it shows a oscillating choice pattern. Ultimately these are meant for initial investigation and more elaborate plots can be constructed from the results data frame. > here we just refer to the figures, for more exact references please see the paper Besides these general plots the package also contains a series of shortcuts for plotting $k$-ToM's internal states such as its estimate of its opponent's sophistication level, in which it is seen that the 2-ToM correctly estimates the opponents estimates as having a sophistication level of 1 on average. ``` # plot 2-ToM estimate of its opponent sophistication level group.plot_p_k(agent0="1-TOM", agent1="2-TOM", agent=1, level=0) group.plot_p_k(agent0="1-TOM", agent1="2-TOM", agent=1, level=1) ``` It is also easy to plot k-ToM's estimates of its opponent's model parameters. As an example, the following code plots the 2-ToM's estimate of 1-ToM's volatility and bias. We see that the ToM agent approaches a correct estimate of the default volatility of -2 as well as correctly estimated its opponent as having no inherent bias. ``` # plot 2-ToM estimate of its opponent's volatility while believing the opponent to be level 1. group.plot_tom_op_estimate( agent0="1-TOM", agent1="2-TOM", agent=1, estimate="volatility", level=1, plot="mean" ) # plot 2-ToM estimate of its opponent's bias while believing the opponent to be level 1. 
group.plot_tom_op_estimate( agent0="1-TOM", agent1="2-TOM", agent=1, estimate="bias", level=1, plot="mean" ) ``` Use `help(ts.AgentGroup.plot_tom_op_estimate)` for information on how to plot the other estimated parameters or k-ToM's uncertainty in these parameters. Additional information can be found in the history column in the results data frame, if needed. This includes all k-ToM's internal states (the changing variables in the model) which for example include choice probability, gradient, estimate uncertainties as well as k-ToM's estimates of its opponent's internal states. Documentation, examples and further tutorials can be found on the Github repository, this also includes a more in-depth description of the dynamics of **the k-ToM model implementation**. --- ## Are you left with any questions? Feel free to open a github issue with questions and or bug reports. Best, *Enevoldsen and Waade*
# Building the dataset In this notebook, I'm going to be working with three datasets to create the dataset that the chatbot will be trained on. ``` import pandas as pd files_path = 'D:/Sarcastic Chatbot/Input/' ``` # First dataset **The Wordball Joke Dataset**, [link](https://www.kaggle.com/bfinan/jokes-question-and-answer/). This dataset consists of three files, namely: 1. <i>qajokes1.1.2.csv</i>: with <i>75,114</i> pairs. 2. <i>t_lightbulbs.csv</i>: with <i>2,640</i> pairs. 3. <i>t_nosubject.csv</i>: with <i>32,120</i> pairs. However, I'm not going to incorporate <i>t_lightbulbs.csv</i> in my dataset because I don't want that many examples of one topic. Besides, all the examples are similar in structure (they all start with <i>how many</i>). Read the data files into pandas dataframes: ``` wordball_qajokes = pd.read_csv(files_path + 'qajokes1.1.2.csv', usecols=['Question', 'Answer']) wordball_nosubj = pd.read_csv(files_path + 't_nosubject.csv', usecols=['Question', 'Answer']) print(len(wordball_qajokes)) print(len(wordball_nosubj)) wordball_qajokes.head() wordball_nosubj.head() ``` Concatenate both dataframes into one: ``` wordball = pd.concat([wordball_qajokes, wordball_nosubj], ignore_index=True) wordball.head() print(f"Number of question-answer pairs in the Wordball dataset: {len(wordball)}") ``` ## Text Preprocessing It turns out that not all cells are of type string. So, we can just apply the *str* function to make sure that all of them are of the same desired type. ``` wordball = wordball.applymap(str) ``` Let's look at the characters used in this dataset: ``` def distinct_chars(data, cols): """ This method takes in a pandas dataframe and prints all distinct characters. data: a pandas dataframe. cols: a Python list, representing names of columns for questions and answers. First item of the list should be the name of the questions column and the second item should be the name of the column corresponding to answers. """ if cols is None: cols = list(data.columns) # join all questions into one string questions = ' '.join(data[cols[0]]) # join all answers into one string answers = ' '.join(data[cols[1]]) # get distinct characters used in the data (all questions and answers) dis_chars = set(questions+answers) # print the distinct characters that are used in the data print(f"Number of distinct characters used in the dataset: {len(dis_chars)}") # print(dis_chars) dis_chars = list(dis_chars) # Now let's print those characters in an organized way digits = [char for char in dis_chars if char.isdigit()] alphabets = [char for char in dis_chars if char.isalpha()] special = [char for char in dis_chars if not (char.isdigit() | char.isalpha())] # sort them to make them easier to read digits = sorted(digits) alphabets = sorted(alphabets) special = sorted(special) print(f"Digits: {digits}") print(f"Alphabets: {alphabets}") print(f"Special characters: {special}") distinct_chars(wordball, ['Question', 'Answer']) ``` The following function replaces some characters with others, removes unwanted characters and gets rid of extra whitespaces from the data. ``` def clean_text(text): """ This method takes a string, applies different text preprocessing (characters replacement, removal of unwanted characters, removal of extra whitespaces) operations and returns a string. text: a string. 
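    Returns: a string (the cleaned, lowercased text with unwanted characters removed and extra whitespace stripped).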
""" import re text = str(text) # REPLACEMENT # replace " with ' (because they basically mean the same thing) # text = text.replace('\"','\'') text = re.sub('\"', '\'', text) # replace “ and ” with ' # text = text.replace("“",'\'').replace("”",'\'') text = re.sub("“", '\'', text) text = re.sub("”", '\'', text) # replace ’ with ' # text = text.replace('’','\'') text = re.sub('’', '\'', text) # replace [] and {} with () #text = text.replace('[','(').replace(']',')').replace('{','(').replace('}',')') text = re.sub('\[','(', text) text = re.sub('\]',')', text) text = re.sub('\{','(', text) text = re.sub('\}',')', text) # replace ? with itself and a whitespace preceding it # ex. what's your name? (we want the word name and question mark to be separate tokens) # text = re.sub('\?', ' ?', text) # creating a space between a word and the punctuation following it # punctuation we're using: . , : ; ' ? ! + - * / = % $ @ & ( ) text = re.sub("([?.!,:;'?!+\-*/=%$@&()])", r" \1 ", text) # REMOVAL OF UNWANTED CHARACTERS # accept only alphanumeric and some special characters and remove all others # a-zA-Z0-9 : matches any alphanumeric character and the underscore. # \. : matches . # \, : matches , # \: : matches : # \; : matches ; # \' : matches ' # \? : matches ? # \! : matches ! # \+ : matches + # \- : matches - # \* : matches * # \/ : matches / # \= : matches = # \% : matches % # \$ : matches $ # \@ : matches @ # \& : matches & # ^ is added to the beginning of the set to express that we want the regex to recognize all other characters except # these that are explicitly specified, so that we can omit them. # define the pattern pattern = re.compile('[^a-zA-Z0-9_\.\,\:\;\'\?\!\+\-\*\/\=\%\$\@\&\(\)]') # remove unwanted characters text = re.sub(pattern, ' ', text) # lower case the characters in the string text = text.lower() # REMOVAL OF EXTRA WHITESPACES # remove duplicated spaces text = re.sub(' +', ' ', text) # remove leading and trailing spaces text = text.strip() return text ``` Let's try it out: ``` clean_text("A nice quote I read today: “Everything that you are going through is preparing you for what you asked for”. @hi % & =+-*/") ``` The following method prints a question-answer pair from the dataset, it will be helpful to give us a sense of what the *clean_text* function results in: ``` def print_question_answer(df, index, cols): print(f"Question: ({index})") print(df.loc[index][cols[0]]) print(f"Answer: ({index})") print(df.loc[index][cols[1]]) print("Before applying text preprocessing:") print_question_answer(wordball, 102, ['Question', 'Answer']) print_question_answer(wordball, 200, ['Question', 'Answer']) print_question_answer(wordball, 88376, ['Question', 'Answer']) print_question_answer(wordball, 94351, ['Question', 'Answer']) ``` Apply text preprocessing (characters replacement, removal of unwanted characters, removal of extra whitespaces): ``` wordball = wordball.applymap(clean_text) print("After applying text preprocessing:") print_question_answer(wordball, 102, ['Question', 'Answer']) print_question_answer(wordball, 200, ['Question', 'Answer']) print_question_answer(wordball, 88376, ['Question', 'Answer']) print_question_answer(wordball, 94351, ['Question', 'Answer']) ``` The following function applies some preprocessing operations on the data, concretely: 1. Drops unecessary duplicate pairs (rows) but keep only one instance of all duplicates. *(For example, if the dataset contains three duplicates of the same question-answer pair, then two of them would be removed and one kept.)* 2. 
Drops rows with empty question/answer. *(These may appear because of the previous step or because they happen to be empty in the original dataset) * 3. Drops rows with more than 30 words in either the question or the answer or if the answer has less than two characters. *(Note: this is a hyperparameter and you can try other values.)* ``` def preprocess_data(data, cols): """ This method preprocess data and does the following: 1. drops unecessary duplicate pairs. 2. drops rows with empty strings. 3. drops rows with more than 30 words in either the question or the answer, or if the an answer has less than two characters. Arguments: data: a pandas dataframe. cols: a Python list, representing names of columns for questions and answers. First item of the list should be the name of the questions column and the second item should be the name of the column corresponding to answers. Returns: a pandas dataframe. """ # (1) Remove unecessary duplicate pairs but keep only one instance of all duplicates. print('Removing unecessary duplicate pairs:') data_len_before = len(data) # len of data before removing duplicates print(f"# of examples before removing duplicates: {data_len_before}") # drop duplicates data = data.drop_duplicates(keep='first') data_len_after = len(data) # len of data after removing duplicates print(f"# of examples after removing duplicates: {data_len_after}") print(f"# of removed duplicates: {data_len_before-data_len_after}") # (2) Drop rows with empty strings. print('Removing empty string rows:') if cols is None: cols = list(data.columns) data_len_before = len(data) # len of data before removing empty strings print(f"# of examples before removing rows with empty question/answers: {data_len_before}") # I am going to use boolean masking to filter out rows with an empty question or answer data = data[(data[cols[0]] != '') & (data[cols[1]] != '')] # also, the following row results in the same as the above. # data = data.query('Answer != "" and Question != ""') data_len_after = len(data) # len of data after removing empty strings print(f"# of examples after removing with empty question/answers: {data_len_after}") print(f"# of removed empty string rows: {data_len_before-data_len_after}") # (3) Drop rows with more than 30 words in either the question or the answer # or if the an answer has less than two characters. 
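    # note: the 30-word and 2-character thresholds are hyperparameters (see the markdown above) and can be tuned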
def accepted_length(qa_pair): q_len = len(qa_pair[0].split(' ')) a_len = len(qa_pair[1].split(' ')) if (q_len <= 30) & ((a_len <= 30) & (len(qa_pair[1]) > 1)): return True return False print('Removing rows with more than 30 words in either the question or the answer:') data_len_before = len(data) # len of data before dropping those rows (30+ words) print(f"# of examples before removing rows with more than 30 words: {data_len_before}") # filter out rows with more than 30 words accepted_mask = data.apply(accepted_length, axis=1) data = data[accepted_mask] data_len_after = len(data) # len of data after dropping those rows (50+ words) print(f"# of examples after removing rows with more than 30 words: {data_len_after}") print(f"# of removed empty rows with more than 30 words: {data_len_before-data_len_after}") print("Data preprocessing is done.") return data wordball = preprocess_data(wordball, ['Question', 'Answer']) print(f"# of question-answer pairs we have left in the Wordball dataset: {len(wordball)}") ``` Let's look at the characters after cleaning the data: ``` distinct_chars(wordball, ['Question', 'Answer']) ``` # Second Dataset **reddit /r/Jokes**, [here](https://www.kaggle.com/cuddlefish/reddit-rjokes#jokes_score_name_clean.csv). This dataset consists of two files, namely: 1. <i>jokes_score_name_clean.csv</i>: with <i>133,992</i> pairs. 2. <i>all_jokes.csv</i> However, I'm not going to incorporate <i>all_jokes.csv</i> in the dataset because it's so messy. ``` reddit_jokes = pd.read_csv(files_path + 'jokes_score_name_clean.csv', usecols=['q', 'a']) ``` Let's rename the columns to have them aligned with the previous dataset: ``` reddit_jokes.rename(columns={'q':'Question', 'a':'Answer'}, inplace=True) reddit_jokes.head() print(len(reddit_jokes)) distinct_chars(reddit_jokes, ['Question', 'Answer']) ``` ## Text Preprocessing ``` reddit_jokes = reddit_jokes.applymap(str) ``` Reddit data has some special tags like <i>[removed]</i> or <i>[deleted]</i> (these two mean that the comment has been removed/deleted). Also, they're written in an inconsistent way, i.e. you may find the tag <i>[removed]</i> capitalized or lowercased.<br> The next function will address reddit tags as follows: 1. Drops rows with deleted, removed or censored tags. 2. Replaces other tags found in text with a whitespace. *(i.e. some comments have tags like <i>[censored], [gaming], [long], [request] and [dirty]</i> and we want to omit these tags from the text)* ``` def clean_reddit_tags(data, cols): """ This function removes reddit-related tags from the data and does the following: 1. drops rows with deleted, removed or censored tags. 2. replaces other tags found in text with a whitespace. Arguments: data: a pandas dataframe. cols: a Python list, representing names of columns for questions and answers. First item of the list should be the name of the questions column and the second item should be the name of the column corresponding to answers. Returns: a pandas dataframe. """ import re if cols is None: cols = list(data.columns) # First, I'm going to lowercase all the text to address these tags # however, I'm not going to alter the original dataframe because I don't want text to be lowercased. data_copy = data.copy() data_copy[cols[0]] = data_copy[cols[0]].str.lower() data_copy[cols[1]] = data_copy[cols[1]].str.lower() # drop rows with deleted, removed or censored tags. 
# qa_pair[0] is the question, qa_pair[1] is the answer mask = data_copy.apply(lambda qa_pair: False if (qa_pair[0]=='[removed]') | (qa_pair[0]=='[deleted]') | (qa_pair[0]=='[censored]') | (qa_pair[1]=='[removed]') | (qa_pair[1]=='[deleted]') | (qa_pair[1]=='[censored]') else True, axis=1) # drop the rows, notice we're using the mask to filter out those rows # in the original dataframe 'data', because we don't need it anymore data = data[mask] print(f"# of rows dropped with [deleted], [removed] or [censored] tags: {mask.sum()}") # replaces other tags found in text with a whitespace. def sub_tag(pair): """ This method substitute tags (square brackets with words inside) with whitespace. Arguments: pair: a Pandas Series, where the first item is the question and the second is the answer. Returns: pair: a Pandas Series. """ # \[(.*?)\] is a regex to recognize square brackets [] with anything in between p=re.compile("\[(.*?)\]") pair[0] = re.sub(p, ' ', pair[0]) pair[1] = re.sub(p, ' ', pair[1]) return pair # substitute tags with whitespaces. data = data.apply(sub_tag, axis=1) return data print("Before addressing tags:") print_question_answer(reddit_jokes, 1825, ['Question', 'Answer']) print_question_answer(reddit_jokes, 52906, ['Question', 'Answer']) print_question_answer(reddit_jokes, 59924, ['Question', 'Answer']) print_question_answer(reddit_jokes, 1489, ['Question', 'Answer']) ``` **Note:** the following cell may take multiple seconds to finish. ``` reddit_jokes = clean_reddit_tags(reddit_jokes, ['Question', 'Answer']) reddit_jokes print("After addressing tags:") # because rows with [removed], [deleted] and [censored] tags have been dropped # we're not going to print the rows (index=1825, index=59924) since they contain # those tags, or we're going to have a KeyError print_question_answer(reddit_jokes, 52906, ['Question', 'Answer']) print_question_answer(reddit_jokes, 1489, ['Question', 'Answer']) ``` **Note:** notice the question whose index is 52906, has some leading whitespaces. That's because it had the <i>[Corny]</i> tag and the function replaced it with whitespaces. Also, the question whose index is 1489 has an empty answer and that's because of the fact that the original answer just square brackets with some whitespaces in between. We're going to address all of that next! Now, let's apply the *clean_text* function on the reddit data.<br> **Remember:** the *clean_text* function replaces some characters with others, removes unwanted characters and gets rid of extra whitespaces from the data. ``` reddit_jokes = reddit_jokes.applymap(clean_text) print_question_answer(reddit_jokes, 52906, ['Question', 'Answer']) print_question_answer(reddit_jokes, 1489, ['Question', 'Answer']) ``` Everything looks good!<br> Now, let's apply the *preprocess_data* function on the data.<br> **Remember:** the *preprocess_data* function applies the following preprocessing operations: 1. Drops unecessary duplicate pairs (rows) but keep only one instance of all duplicates. *(For example, if the dataset contains three duplicates of the same question-answer pair, then two of them would be removed and one kept.)* 2. Drops rows with empty question/answer. *(These may appear because of the previous step or because they happen to be empty in the original dataset) * 3. Drops rows with more than 30 words in either the question or the answer or if the an answer has less than two characters. 
*(Note: this is a hyperparameter and you can try other values.)* ``` reddit_jokes = preprocess_data(reddit_jokes, ['Question', 'Answer']) print(f"Number of question answer pairs in the reddit /r/Jokes dataset: {len(reddit_jokes)}") distinct_chars(reddit_jokes, ['Question', 'Answer']) ``` # Third Dataset **Question-Answer Jokes**, [here](https://www.kaggle.com/jiriroz/qa-jokes). This dataset consists of one file, namely: * <i>jokes_score_name_clean.csv</i>: with <i>38,269</i> pairs. ``` qa_jokes = pd.read_csv(files_path + 'jokes.csv', usecols=['Question', 'Answer']) qa_jokes print(len(qa_jokes)) distinct_chars(qa_jokes, ['Question', 'Answer']) ``` ## Text Preprocessing If you look at some examples in the dataset, you notice that some examples has 'Q:' at beginning of the question and 'A:' at the beginning of the answer, so we need to get rid of these prefixes because they don't convey useful information.<br> You also notice some examples where both 'Q:' and 'A:' are found in either the question or the answer, although I'm not going to omit these because they probably convey information and are part of the answer. However, some of them have 'Q:' in the question and 'Q: question A: answer' where the question in the answer is the same question, so we need to fix that. ``` def clean_qa_prefixes(data, cols): """ This function removes special prefixes ('Q:' and 'A:') found in the data. i.e. input="Q: how's your day?" --> output=" how's your day?" Arguments: data: a pandas dataframe. cols: a Python list, representing names of columns for questions and answers. First item of the list should be the name of the questions column and the second item should be the name of the column corresponding to answers. Returns: a pandas dataframe. """ def removes_prefixes(pair): """ This function removes prefixes ('Q:' and 'A:') from the question and answer. Examples: Input: qusetion="Q: what is your favorite Space movie?", answer='A: Interstellar!' Output: qusetion=' what is your favorite Space movie?', answer=' Interstellar!' Input: question="Q: how\'s your day?", answer='Q: how\'s your day? A: good, thanks.' Output: qusetion=" how's your day?", answer='good, thanks.' Input: qusetion='How old are you?', answer='old enough' Output: qusetion='How old are you?', answer='old enough' Arguments: pair: a Pandas Series, where the first item is the question and the second is the answer. Returns: pair: a Pandas Series. """ # pair[0] corresponds to the question # pair[1] corresponds to the answer # if the question contains 'Q:' and the answer contains 'A:' but doesn't contain 'Q:' if ('Q:' in pair[0]) and ('A:' in pair[1]) and ('Q:' not in pair[1]): pair[0] = pair[0].replace('Q:','') pair[1] = pair[1].replace('A:','') # if the answer contains both 'Q:' and 'A:' elif ('A:' in pair[1]) and ('Q:' in pair[1]): pair[0] = pair[0].replace('Q:','') # now we should check if the text between 'Q:' and 'A:' is the same text in the question (pair[0]) # because if they are, this means that the question is repeated in the answer and we should address that. 
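                # ('find' returns the index of the 'Q' character, so the +2 below skips past the two-character 'Q:' prefix)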
q_start = pair[1].find('Q:') + 2 # index of the start of the text that we want to extract q_end = pair[1].find('A:') # index of the end of the text that we want to extract q_txt = pair[1][q_start:q_end].strip() # if the question is repeated in the answer if q_txt == pair[0].strip(): # in case the question is repeated in the answer, removes it from the answer pair[1] = pair[1][q_end+2:].strip() return pair return data.apply(removes_prefixes, axis=1) print("Before removing unnecessary prefixes:") print_question_answer(qa_jokes, 44, ['Question', 'Answer']) print_question_answer(qa_jokes, 22, ['Question', 'Answer']) print_question_answer(qa_jokes, 31867, ['Question', 'Answer']) qa_jokes = clean_qa_prefixes(qa_jokes, ['Question', 'Answer']) print("After removing unnecessary prefixes:") print_question_answer(qa_jokes, 44, ['Question', 'Answer']) print_question_answer(qa_jokes, 22, ['Question', 'Answer']) print_question_answer(qa_jokes, 31867, ['Question', 'Answer']) ``` Notice that the third example both 'Q:' and 'A:' are part of the answer and conveys information. Now, let's apply the *clean_text* function on the Question-Answer Jokes data.<br> **Remember:** the *clean_text* function replaces some characters with others, removes unwanted characters and gets rid of extra whitespaces from the data. ``` qa_jokes = qa_jokes.applymap(clean_text) ``` Now, let's apply the *preprocess_data* function on the data.<br> **Remember:** the *preprocess_data* function applies the following preprocessing operations: 1. Drops unnecessary duplicate pairs (rows) but keep only one instance of all duplicates. *(For example, if the dataset contains three duplicates of the same question-answer pair, then two of them would be removed and one kept.)* 2. Drops rows with an empty question/answer. *(These may appear because of the previous step or because they happen to be empty in the original dataset) * 3. Drops rows with more than 30 words in either the question or the answer or if the an answer has less than two characters. *(Note: this is a hyperparameter and you can try other values.)* ``` qa_jokes = preprocess_data(qa_jokes, ['Question', 'Answer']) print(f"Number of question-answer pairs in the Question-Answer Jokes dataset: {len(qa_jokes)}") distinct_chars(qa_jokes, ['Question', 'Answer']) ``` # Putting it together Let's concatenate all the data we have to create our final dataset. ``` dataset = pd.concat([wordball, reddit_jokes, qa_jokes], ignore_index=True) dataset.head() print(f"Number of question-answer pairs in the dataset: {len(dataset)}") ``` There may be duplicate examples in the data so let's drop them: ``` data_len_before = len(dataset) # len of data before removing duplicates print(f"# of examples before removing duplicates: {data_len_before}") # drop duplicates dataset = dataset.drop_duplicates(keep='first') data_len_after = len(dataset) # len of data after removing duplicates print(f"# of examples after removing duplicates: {data_len_after}") print(f"# of removed duplicates: {data_len_before-data_len_after}") ``` Let's drop rows with NaN values if there's any: ``` dataset.dropna(inplace=True) dataset ``` Let's make sure that all our cells are of the same type: ``` dataset = dataset.applymap(str) print(f"Number of question-answer pairs in the dataset: {len(dataset)}") distinct_chars(dataset, ['Question', 'Answer']) ``` Finally, let's save the dataset: ``` dataset.to_csv(files_path + '/dataset.csv') ```
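As a quick round-trip check (a minimal sketch, not part of the original notebook), the saved file can be read back in; `index_col=0` skips the pandas index column that `to_csv` wrote out:

```
# Reload the saved dataset and confirm the rows and columns survived the round trip.
reloaded = pd.read_csv(files_path + '/dataset.csv', index_col=0)
print(f"Reloaded {len(reloaded)} question-answer pairs")
reloaded.head()
```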
<a href="https://colab.research.google.com/github/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_10_3_text_generation.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # T81-558: Applications of Deep Neural Networks **Module 10: Time Series in Keras** * Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx) * For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). # Module 10 Material * Part 10.1: Time Series Data Encoding for Deep Learning [[Video]](https://www.youtube.com/watch?v=dMUmHsktl04&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_10_1_timeseries.ipynb) * Part 10.2: Programming LSTM with Keras and TensorFlow [[Video]](https://www.youtube.com/watch?v=wY0dyFgNCgY&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_10_2_lstm.ipynb) * **Part 10.3: Text Generation with Keras and TensorFlow** [[Video]](https://www.youtube.com/watch?v=6ORnRAz3gnA&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_10_3_text_generation.ipynb) * Part 10.4: Image Captioning with Keras and TensorFlow [[Video]](https://www.youtube.com/watch?v=NmoW_AYWkb4&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_10_4_captioning.ipynb) * Part 10.5: Temporal CNN in Keras and TensorFlow [[Video]](https://www.youtube.com/watch?v=i390g8acZwk&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_10_5_temporal_cnn.ipynb) # Google CoLab Instructions The following code ensures that Google CoLab is running the correct version of TensorFlow. ``` try: %tensorflow_version 2.x COLAB = True print("Note: using Google CoLab") except: print("Note: not using Google CoLab") COLAB = False ``` # Part 10.3: Text Generation with LSTM Recurrent neural networks are also known for their ability to generate text. As a result, the output of the neural network can be free-form text. In this section, we will see how to train an LSTM can on a textual document, such as classic literature, and learn to output new text that appears to be of the same form as the training material. If you train your LSTM on [Shakespeare](https://en.wikipedia.org/wiki/William_Shakespeare), it will learn to crank out new prose similar to what Shakespeare had written. Don't get your hopes up. You are not going to teach your deep neural network to write the next [Pulitzer Prize for Fiction](https://en.wikipedia.org/wiki/Pulitzer_Prize_for_Fiction). The prose generated by your neural network will be nonsensical. However, it will usually be nearly grammatically and of a similar style as the source training documents. A neural network generating nonsensical text based on literature may not seem useful at first glance. However, this technology gets so much interest because it forms the foundation for many more advanced technologies. The fact that the LSTM will typically learn human grammar from the source document opens a wide range of possibilities. You can use similar technology to complete sentences when a user is entering text. Simply the ability to output free-form text becomes the foundation of many other technologies. In the next part, we will use this technique to create a neural network that can write captions for images to describe what is going on in the picture. 
### Additional Information The following are some of the articles that I found useful in putting this section together. * [The Unreasonable Effectiveness of Recurrent Neural Networks](http://karpathy.github.io/2015/05/21/rnn-effectiveness/) * [Keras LSTM Generation Example](https://keras.io/examples/lstm_text_generation/) ### Character-Level Text Generation There are several different approaches to teaching a neural network to output free-form text. The most basic question is if you wish the neural network to learn at the word or character level. In many ways, learning at the character level is the more interesting of the two. The LSTM is learning to construct its own words without even being shown what a word is. We will begin with character-level text generation. In the next module, we will see how we can use nearly the same technique to operate at the word level. We will implement word-level automatic captioning in the next module. We begin by importing the needed Python packages and defining the sequence length, named **maxlen**. Time-series neural networks always accept their input as a fixed-length array. Because you might not use all of the sequence elements, it is common to fill extra elements with zeros. You will divide the text into sequences of this length, and the neural network will train to predict what comes after this sequence. ``` from tensorflow.keras.callbacks import LambdaCallback from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense from tensorflow.keras.layers import LSTM from tensorflow.keras.optimizers import RMSprop from tensorflow.keras.utils import get_file import numpy as np import random import sys import io import requests import re ``` For this simple example, we will train the neural network on the classic children's book [Treasure Island](https://en.wikipedia.org/wiki/Treasure_Island). We begin by loading this text into a Python string and displaying the first 1,000 characters. ``` r = requests.get("https://data.heatonresearch.com/data/t81-558/text/"\ "treasure_island.txt") raw_text = r.text print(raw_text[0:1000]) ``` We will extract all unique characters from the text and sort them. This technique allows us to assign a unique ID to each character. Because we sorted the characters, these IDs should remain the same. If we add new characters to the original text, then the IDs would change. We build two dictionaries. The first **char2idx** is used to convert a character into its ID. The second **idx2char** converts an ID back into its character. ``` processed_text = raw_text.lower() processed_text = re.sub(r'[^\x00-\x7f]',r'', processed_text) print('corpus length:', len(processed_text)) chars = sorted(list(set(processed_text))) print('total chars:', len(chars)) char_indices = dict((c, i) for i, c in enumerate(chars)) indices_char = dict((i, c) for i, c in enumerate(chars)) ``` We are now ready to build the actual sequences. Just like previous neural networks, there will be an $x$ and $y$. However, for the LSTM, $x$ and $y$ will both be sequences. The $x$ input will specify the sequences where $y$ are the expected output. The following code generates all possible sequences. 
``` # cut the text in semi-redundant sequences of maxlen characters maxlen = 40 step = 3 sentences = [] next_chars = [] for i in range(0, len(processed_text) - maxlen, step): sentences.append(processed_text[i: i + maxlen]) next_chars.append(processed_text[i + maxlen]) print('nb sequences:', len(sentences)) sentences print('Vectorization...') x = np.zeros((len(sentences), maxlen, len(chars)), dtype=np.bool) y = np.zeros((len(sentences), len(chars)), dtype=np.bool) for i, sentence in enumerate(sentences): for t, char in enumerate(sentence): x[i, t, char_indices[char]] = 1 y[i, char_indices[next_chars[i]]] = 1 x.shape y.shape ``` The dummy variables for $y$ are shown below. ``` y[0:10] ``` Next, we create the neural network. This neural network's primary feature is the LSTM layer, which allows the sequences to be processed. ``` # build the model: a single LSTM print('Build model...') model = Sequential() model.add(LSTM(128, input_shape=(maxlen, len(chars)))) model.add(Dense(len(chars), activation='softmax')) optimizer = RMSprop(lr=0.01) model.compile(loss='categorical_crossentropy', optimizer=optimizer) model.summary() ``` The LSTM will produce new text character by character. We will need to sample the correct letter from the LSTM predictions each time. The **sample** function accepts the following two parameters: * **preds** - The output neurons. * **temperature** - 1.0 is the most conservative, 0.0 is the most confident (willing to make spelling and other errors). The sample function below is essentially performing a [softmax]() on the neural network predictions. This causes each output neuron to become a probability of its particular letter. ``` def sample(preds, temperature=1.0): # helper function to sample an index from a probability array preds = np.asarray(preds).astype('float64') preds = np.log(preds) / temperature exp_preds = np.exp(preds) preds = exp_preds / np.sum(exp_preds) probas = np.random.multinomial(1, preds, 1) return np.argmax(probas) ``` Keras calls the following function at the end of each training Epoch. The code generates sample text generations that visually demonstrate the neural network better at text generation. As the neural network trains, the generations should look more realistic. ``` def on_epoch_end(epoch, _): # Function invoked at end of each epoch. Prints generated text. print("******************************************************") print('----- Generating text after Epoch: %d' % epoch) start_index = random.randint(0, len(processed_text) - maxlen - 1) for temperature in [0.2, 0.5, 1.0, 1.2]: print('----- temperature:', temperature) generated = '' sentence = processed_text[start_index: start_index + maxlen] generated += sentence print('----- Generating with seed: "' + sentence + '"') sys.stdout.write(generated) for i in range(400): x_pred = np.zeros((1, maxlen, len(chars))) for t, char in enumerate(sentence): x_pred[0, t, char_indices[char]] = 1. preds = model.predict(x_pred, verbose=0)[0] next_index = sample(preds, temperature) next_char = indices_char[next_index] generated += next_char sentence = sentence[1:] + next_char sys.stdout.write(next_char) sys.stdout.flush() print() ``` We are now ready to train. It can take up to an hour to train this network, depending on how fast your computer is. If you have a GPU available, please make sure to use it. ``` # Ignore useless W0819 warnings generated by TensorFlow 2.0. Hopefully can remove this ignore in the future. 
# See https://github.com/tensorflow/tensorflow/issues/31308 import logging, os logging.disable(logging.WARNING) os.environ["TF_CPP_MIN_LOG_LEVEL"] = "3" # Fit the model print_callback = LambdaCallback(on_epoch_end=on_epoch_end) model.fit(x, y, batch_size=128, epochs=60, callbacks=[print_callback]) ```
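Once training finishes, the same building blocks used in `on_epoch_end` can be reused to generate text on demand. The following is a minimal sketch (not part of the original notebook) that assumes the trained `model`, together with `chars`, `char_indices`, `indices_char`, `maxlen` and `sample` defined above; the seed string is arbitrary.

```
def generate_text(seed, length=400, temperature=0.5):
    # Keep only characters the model knows about, then trim/left-pad the seed to maxlen.
    seed = ''.join(c for c in seed.lower() if c in char_indices)
    sentence = seed[-maxlen:].rjust(maxlen)
    generated = sentence
    for _ in range(length):
        # One-hot encode the current window of maxlen characters.
        x_pred = np.zeros((1, maxlen, len(chars)))
        for t, char in enumerate(sentence):
            x_pred[0, t, char_indices[char]] = 1.
        preds = model.predict(x_pred, verbose=0)[0]
        next_char = indices_char[sample(preds, temperature)]
        generated += next_char
        sentence = sentence[1:] + next_char
    return generated

print(generate_text("the old sea captain stood by the door", temperature=0.5))
```

Lower temperatures stay close to the most likely characters; higher temperatures produce more varied but more error-prone text.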
# Test For The Best Machine Learning Algorithm For Prediction

This notebook takes about 40 minutes to run, but we've already run it and saved the data for you. Please read through it, though, so that you understand how we came to the conclusions we'll use moving forward.

## Six Algorithms

We're going to compare six different algorithms to determine the best one for producing an accurate model for our predictions.

### Logistic Regression

Logistic Regression (LR) is a technique borrowed from the field of statistics. It is the go-to method for binary classification problems (problems with two class values).

![](./docs/logisticfunction.png)

Logistic Regression is named for the function at the core of the method: the logistic function. The logistic function is probabilistic, so rather than a hard yes/no, Logistic Regression predicts the probability that a driver will be the winner.

### Decision Tree

A tree has many analogies in real life, and it turns out that it has influenced a wide area of machine learning, covering both classification and regression. In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making.

![](./docs/decisiontree.png)

This methodology is more commonly known as "learning a decision tree" from data, and the tree above is called a classification tree because the goal is to classify a driver as the winner or not.

### Random Forest

Random forest is a supervised learning algorithm. The "forest" it builds is an **ensemble of decision trees**, usually trained with the "bagging" method, which combines many learning models to increase the accuracy of the result. A random forest mitigates the limitations of a single decision tree: it reduces overfitting, increases precision, and generates predictions without requiring much configuration.

![](./docs/randomforest.png)

Here's the difference between the Decision Tree and Random Forest methods:

![](./docs/treefortheforest.jpg)

### Support Vector Machine Algorithm (SVC)

Support Vector Machines (SVMs) are a set of supervised learning methods used for classification, regression and outlier detection. The advantages of support vector machines are:

- Effective in high-dimensional spaces
- Still effective in cases where the number of dimensions is greater than the number of samples
- Uses a subset of training points in the decision function (called support vectors), so it is also memory efficient
- Versatile: different kernel functions can be specified for the decision function. Common kernels are provided, but it is also possible to specify custom kernels

The objective of an SVC (Support Vector Classifier) is to fit to the data you provide, returning a "best fit" hyperplane that divides, or categorizes, your data.

### Gaussian Naive Bayes Algorithm

Naive Bayes is a classification algorithm for binary (two-class) and multi-class classification problems. The technique is easiest to understand when described using binary or categorical input values.

The representation used for Naive Bayes is probabilities. A list of probabilities is stored to a file for a learned Naive Bayes model. This includes:

- **Class Probabilities:** The probabilities of each class in the training dataset.
- **Conditional Probabilities:** The conditional probabilities of each input value given each class value.

Naive Bayes can be extended to real-valued attributes, most commonly by assuming a Gaussian distribution. This extension of Naive Bayes is called Gaussian Naive Bayes.
Other functions can be used to estimate the distribution of the data, but the Gaussian (or normal distribution) is the easiest to work with because you only need to estimate the mean and the standard deviation from your training data. ### k Nearest Neighbor Algorithm (kNN) The k-Nearest Neighbors (KNN) algorithm is a simple, supervised machine learning algorithm that can be used to solve both classification and regression problems. kNN works by finding the distances between a query and all of the examples in the data, selecting the specified number examples (k) closest to the query, then voting for the most frequent label (in the case of classification) or averages the labels (in the case of regression). The kNN algorithm assumes the similarity between the new case/data and available cases, and puts the new case into the category that is most similar to the available categories. ![](./docs/knn.png) ## Analyzing the Data ### Feature Importance Another great quality of the random forest algorithm is that it's easy to measure the relative importance of each feature to the prediction. The Scikit-learn Python Library provides a great tool for this which measures a feature's importance by looking at how much the tree nodes that use that feature reduce impurity across all trees in the forest. It computes this score automatically for each feature after training, and scales the results so the sum of all importance is equal to one. ### Data Visualization When Building a Model How do you visualize the influence of the data? How do you frame the problem? An important tool in the data scientist's toolkit is the power to visualize data using several excellent libraries such as Seaborn or MatPlotLib. Representing your data visually might allow you to uncover hidden correlations that you can leverage. Your visualizations might also help you to uncover bias or unbalanced data. ![](./docs/visualization.png) ### Splitting the Dataset Prior to training, you need to split your dataset into two or more parts of unequal size that still represent the data well. 1. Training. This part of the dataset is fit to your model to train it. This set constitutes the majority of the original dataset. 2. Testing. A test dataset is an independent group of data, often a subset of the original data, that you use to confirm the performance of the model you built. 3. Validating. A validation set is a smaller independent group of examples that you use to tune the model's hyperparameters, or architecture, to improve the model. Depending on your data's size and the question you are asking, you might not need to build this third set. ## Building the Model Using your training data, your goal is to build a model, or a statistical representation of your data, using various algorithms to train it. Training a model exposes it to data and allows it to make assumptions about perceived patterns it discovers, validates, and accepts or rejects. ### Decide on a Training Method Depending on your question and the nature of your data, you will choose a method to train it. Stepping through Scikit-learn's documentation, you can explore many ways to train a model. Depending on the results you get, you might have to try several different methods to build the best model. You are likely to go through a process whereby data scientists evaluate the performance of a model by feeding it unseen data, checking for accuracy, bias, and other quality-degrading issues, and selecting the most appropriate training method for the task at hand. 
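As a concrete illustration of the train/test split described in the "Splitting the Dataset" section above, here is a minimal sketch using scikit-learn; `X` and `y` are placeholders for whatever feature matrix and labels you are modelling.

```
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    X, y,
    test_size=0.2,    # hold out 20% of the rows for testing
    random_state=1,   # fix the seed so the split is reproducible
    stratify=y        # keep the class proportions similar in both splits
)
```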
### Train a Model Armed with your training data, you are ready to "fit" it to create a model. In many ML libraries you will find the code 'model.fit' - it is at this time that you send in your data as an array of values (usually 'X') and a feature variable (usually 'y'). ### Evaluate the Model Once the training process is complete, you will be able to evaluate the model's quality by using test data to gauge its performance. This data is a subset of the original data that the model has not previously analyzed. You can print out a table of metrics about your model's quality. #### Model Fitting In the Machine Learning context, model fitting refers to the accuracy of the model's underlying function as it attempts to analyze data with which it is not familiar. #### Underfitting and Overfitting Underfitting and overfitting are common problems that degrade the quality of the model, as the model either doesn't fit well enough, or it fits too well. This causes the model to make predictions either too closely aligned or too loosely aligned with its training data. An overfit model predicts training data too well because it has learned the data's details and noise too well. An underfit model is not accurate as it can neither accurately analyze its training data nor data it has not yet 'seen'. ![](./docs/overfit.png) Let's test out some algorithms to choose our path for modelling our predictions. ``` import warnings warnings.filterwarnings("ignore") import time start = time.time() import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns import pickle from sklearn.metrics import confusion_matrix, precision_score from sklearn.metrics import accuracy_score from sklearn.preprocessing import StandardScaler,LabelEncoder,OneHotEncoder from sklearn.model_selection import cross_val_score,StratifiedKFold,RandomizedSearchCV from sklearn.linear_model import LogisticRegression from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor from sklearn.svm import SVC from sklearn.tree import DecisionTreeClassifier from sklearn.neighbors import KNeighborsClassifier from sklearn.naive_bayes import GaussianNB from sklearn.metrics import confusion_matrix,precision_score,f1_score,recall_score from sklearn.neural_network import MLPClassifier, MLPRegressor plt.style.use('seaborn') np.set_printoptions(precision=4) data = pd.read_csv('./data_f1/data_filtered.csv') data.head() len(data) dnf_by_driver = data.groupby('driver').sum()['driver_dnf'] driver_race_entered = data.groupby('driver').count()['driver_dnf'] driver_dnf_ratio = (dnf_by_driver/driver_race_entered) driver_confidence = 1-driver_dnf_ratio driver_confidence_dict = dict(zip(driver_confidence.index,driver_confidence)) driver_confidence_dict dnf_by_constructor = data.groupby('constructor').sum()['constructor_dnf'] constructor_race_entered = data.groupby('constructor').count()['constructor_dnf'] constructor_dnf_ratio = (dnf_by_constructor/constructor_race_entered) constructor_reliability = 1-constructor_dnf_ratio constructor_reliability_dict = dict(zip(constructor_reliability.index,constructor_reliability)) constructor_reliability_dict data['driver_confidence'] = data['driver'].apply(lambda x:driver_confidence_dict[x]) data['constructor_reliability'] = data['constructor'].apply(lambda x:constructor_reliability_dict[x]) #removing retired drivers and constructors active_constructors = ['Alpine F1', 'Williams', 'McLaren', 'Ferrari', 'Mercedes', 'AlphaTauri', 'Aston Martin', 'Alfa Romeo', 'Red Bull', 'Haas F1 Team'] 
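# The lists above/below hold only the current grid; they feed the 'active_constructor'
# and 'active_driver' flag columns created a few lines further down.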
active_drivers = ['Daniel Ricciardo', 'Mick Schumacher', 'Carlos Sainz', 'Valtteri Bottas', 'Lance Stroll', 'George Russell', 'Lando Norris', 'Sebastian Vettel', 'Kimi Räikkönen', 'Charles Leclerc', 'Lewis Hamilton', 'Yuki Tsunoda', 'Max Verstappen', 'Pierre Gasly', 'Fernando Alonso', 'Sergio Pérez', 'Esteban Ocon', 'Antonio Giovinazzi', 'Nikita Mazepin','Nicholas Latifi'] data['active_driver'] = data['driver'].apply(lambda x: int(x in active_drivers)) data['active_constructor'] = data['constructor'].apply(lambda x: int(x in active_constructors)) data.head() data.columns ``` ## Directory to store Models ``` import os if not os.path.exists('./models'): os.mkdir('./models') def position_index(x): if x<4: return 1 if x>10: return 3 else : return 2 ``` ## Model considering only Drivers ``` x_d= data[['GP_name','quali_pos','driver','age_at_gp_in_days','position','driver_confidence','active_driver']] x_d = x_d[x_d['active_driver']==1] sc = StandardScaler() le = LabelEncoder() x_d['GP_name'] = le.fit_transform(x_d['GP_name']) x_d['driver'] = le.fit_transform(x_d['driver']) x_d['GP_name'] = le.fit_transform(x_d['GP_name']) x_d['age_at_gp_in_days'] = sc.fit_transform(x_d[['age_at_gp_in_days']]) X_d = x_d.drop(['position','active_driver'],1) y_d = x_d['position'].apply(lambda x: position_index(x)) #cross validation for diffrent models models = [LogisticRegression(),DecisionTreeClassifier(),RandomForestClassifier(),SVC(),GaussianNB(),KNeighborsClassifier()] names = ['LogisticRegression','DecisionTreeClassifier','RandomForestClassifier','SVC','GaussianNB','KNeighborsClassifier'] model_dict = dict(zip(models,names)) mean_results_dri = [] results_dri = [] name = [] for model in models: cv = StratifiedKFold(n_splits=10,random_state=1,shuffle=True) result = cross_val_score(model,X_d,y_d,cv=cv,scoring='accuracy') mean_results_dri.append(result.mean()) results_dri.append(result) name.append(model_dict[model]) print(f'{model_dict[model]} : {result.mean()}') plt.figure(figsize=(15,10)) plt.boxplot(x=results_dri,labels=name) plt.xlabel('Models') plt.ylabel('Accuracy') plt.title('Model performance comparision (drivers only)') plt.show() ``` ## Model considering only Constructors ``` x_c = data[['GP_name','quali_pos','constructor','position','constructor_reliability','active_constructor']] x_c = x_c[x_c['active_constructor']==1] sc = StandardScaler() le = LabelEncoder() x_c['GP_name'] = le.fit_transform(x_c['GP_name']) x_c['constructor'] = le.fit_transform(x_c['constructor']) X_c = x_c.drop(['position','active_constructor'],1) y_c = x_c['position'].apply(lambda x: position_index(x)) #cross validation for diffrent models models = [LogisticRegression(),DecisionTreeClassifier(),RandomForestClassifier(),SVC(),GaussianNB(),KNeighborsClassifier()] names = ['LogisticRegression','DecisionTreeClassifier','RandomForestClassifier','SVC','GaussianNB','KNeighborsClassifier'] model_dict = dict(zip(models,names)) mean_results_const = [] results_const = [] name = [] for model in models: cv = StratifiedKFold(n_splits=10,random_state=1,shuffle=True) result = cross_val_score(model,X_c,y_c,cv=cv,scoring='accuracy') mean_results_const.append(result.mean()) results_const.append(result) name.append(model_dict[model]) print(f'{model_dict[model]} : {result.mean()}') plt.figure(figsize=(15,10)) plt.boxplot(x=results_const,labels=name) plt.xlabel('Models') plt.ylabel('Accuracy') plt.title('Model performance comparision (Teams only)') plt.show() ``` # Model considering both Drivers and Constructors ``` cleaned_data = 
data[['GP_name','quali_pos','constructor','driver','position','driver_confidence','constructor_reliability','active_driver','active_constructor']] cleaned_data = cleaned_data[(cleaned_data['active_driver']==1)&(cleaned_data['active_constructor']==1)] cleaned_data.to_csv('./data_f1/cleaned_data.csv',index=False) ``` ### Build your X dataset with next columns: - GP_name - quali_pos to predict the classification cluster (1,2,3) - constructor - driver - position - driver confidence - constructor_reliability - active_driver - active_constructor ### Filter the dataset for this Model "Driver + Constructor" all active drivers and constructors ### Create Standard Scaler and Label Encoder for the different features in order to have a similar scale for all features ### Prepare the X (Features dataset) and y for predicted value. In our case, we want to calculate the cluster of final position for ech driver using the "position_index" function ``` # Implement X, y ``` ### Applied the same list of ML Algorithms for cross validation of different models And Store the accuracy Mean Value in order to compare with previous ML Models ``` mean_results = [] results = [] name = [] # cross validation for different models ``` ### Use the same boxplot plotter used in the previous Models ``` # Implement boxplot ``` # Comparing The 3 ML Models Let's see mean score of our three assumptions. ``` lr = [mean_results[0],mean_results_dri[0],mean_results_const[0]] dtc = [mean_results[1],mean_results_dri[1],mean_results_const[1]] rfc = [mean_results[2],mean_results_dri[2],mean_results_const[2]] svc = [mean_results[3],mean_results_dri[3],mean_results_const[3]] gnb = [mean_results[4],mean_results_dri[4],mean_results_const[4]] knn = [mean_results[5],mean_results_dri[5],mean_results_const[5]] font1 = { 'family':'serif', 'color':'black', 'weight':'normal', 'size':18 } font2 = { 'family':'serif', 'color':'black', 'weight':'bold', 'size':12 } x_ax = np.arange(3) plt.figure(figsize=(30,15)) bar1 = plt.bar(x_ax,lr,width=0.1,align='center', label="Logistic Regression") bar2 = plt.bar(x_ax+0.1,dtc,width=0.1,align='center', label="DecisionTree") bar3 = plt.bar(x_ax+0.2,rfc,width=0.1,align='center', label="RandomForest") bar4 = plt.bar(x_ax+0.3,svc,width=0.1,align='center', label="SVC") bar5 = plt.bar(x_ax+0.4,gnb,width=0.1,align='center', label="GaussianNB") bar6 = plt.bar(x_ax+0.5,knn,width=0.1,align='center', label="KNN") plt.text(0.05,1,'CV score for combined data',fontdict=font1) plt.text(1.04,1,'CV score only driver data',fontdict=font1) plt.text(2,1,'CV score only team data',fontdict=font1) for bar in bar1.patches: yval = bar.get_height() plt.text(bar.get_x()+0.01,yval+0.01,f'{round(yval*100,2)}%',fontdict=font2) for bar in bar2.patches: yval = bar.get_height() plt.text(bar.get_x()+0.01,yval+0.01,f'{round(yval*100,2)}%',fontdict=font2) for bar in bar3.patches: yval = bar.get_height() plt.text(bar.get_x()+0.01,yval+0.01,f'{round(yval*100,2)}%',fontdict=font2) for bar in bar4.patches: yval = bar.get_height() plt.text(bar.get_x()+0.01,yval+0.01,f'{round(yval*100,2)}%',fontdict=font2) for bar in bar5.patches: yval = bar.get_height() plt.text(bar.get_x()+0.01,yval+0.01,f'{round(yval*100,2)}%',fontdict=font2) for bar in bar6.patches: yval = bar.get_height() plt.text(bar.get_x()+0.01,yval+0.01,f'{round(yval*100,2)}%',fontdict=font2) plt.legend(loc='center', bbox_to_anchor=(0.5, -0.10), shadow=False, ncol=6) plt.show() end = time.time() import datetime str(datetime.timedelta(seconds=(end - start))) print(str(end - start)+" seconds") ```
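For reference, here is one possible way to fill in the exercise cells above (the `# Implement X, y`, cross-validation and boxplot placeholders). It is a sketch that simply mirrors the driver-only and constructor-only loops and assumes the `cleaned_data` columns defined earlier; your own completion may differ.

```
x_dc = cleaned_data.copy()
sc = StandardScaler()
le = LabelEncoder()
x_dc['GP_name'] = le.fit_transform(x_dc['GP_name'])
x_dc['constructor'] = le.fit_transform(x_dc['constructor'])
x_dc['driver'] = le.fit_transform(x_dc['driver'])
X = x_dc.drop(['position', 'active_driver', 'active_constructor'], 1)
y = x_dc['position'].apply(lambda x: position_index(x))

# cross validation for different models, exactly as in the two sections above
models = [LogisticRegression(), DecisionTreeClassifier(), RandomForestClassifier(),
          SVC(), GaussianNB(), KNeighborsClassifier()]
names = ['LogisticRegression', 'DecisionTreeClassifier', 'RandomForestClassifier',
         'SVC', 'GaussianNB', 'KNeighborsClassifier']
model_dict = dict(zip(models, names))
mean_results = []
results = []
name = []
for model in models:
    cv = StratifiedKFold(n_splits=10, random_state=1, shuffle=True)
    result = cross_val_score(model, X, y, cv=cv, scoring='accuracy')
    mean_results.append(result.mean())
    results.append(result)
    name.append(model_dict[model])
    print(f'{model_dict[model]} : {result.mean()}')

plt.figure(figsize=(15, 10))
plt.boxplot(x=results, labels=name)
plt.xlabel('Models')
plt.ylabel('Accuracy')
plt.title('Model performance comparison (drivers + teams)')
plt.show()
```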
* 比较不同组合组合优化器在不同规模问题上的性能; * 下面的结果主要比较``alphamind``和``python``中其他优化器的性能差别,我们将尽可能使用``cvxopt``中的优化器,其次选择``scipy``; * 由于``scipy``在``ashare_ex``上面性能太差,所以一般忽略``scipy``在这个股票池上的表现; * 时间单位都是毫秒。 * 请在环境变量中设置`DB_URI`指向数据库 ``` import os import timeit import numpy as np import pandas as pd import cvxpy from alphamind.api import * from alphamind.portfolio.linearbuilder import linear_builder from alphamind.portfolio.meanvariancebuilder import mean_variance_builder from alphamind.portfolio.meanvariancebuilder import target_vol_builder pd.options.display.float_format = '{:,.2f}'.format ``` ## 0. 数据准备 ------------------ ``` ref_date = '2018-02-08' u_names = ['sh50', 'hs300', 'zz500', 'zz800', 'zz1000', 'ashare_ex'] b_codes = [16, 300, 905, 906, 852, None] risk_model = 'short' factor = 'EPS' lb = 0.0 ub = 0.1 data_source = os.environ['DB_URI'] engine = SqlEngine(data_source) universes = [Universe(u_name) for u_name in u_names] codes_set = [engine.fetch_codes(ref_date, universe=universe) for universe in universes] data_set = [engine.fetch_data(ref_date, factor, codes, benchmark=b_code, risk_model=risk_model) for codes, b_code in zip(codes_set, b_codes)] ``` ## 1. 线性优化(带线性限制条件) --------------------------------- ``` df = pd.DataFrame(columns=u_names, index=['cvxpy', 'alphamind']) number = 1 for u_name, sample_data in zip(u_names, data_set): factor_data = sample_data['factor'] er = factor_data[factor].values n = len(er) lbound = np.ones(n) * lb ubound = np.ones(n) * ub risk_constraints = np.ones((n, 1)) risk_target = (np.array([1.]), np.array([1.])) status, y, x1 = linear_builder(er, lbound, ubound, risk_constraints, risk_target) elasped_time1 = timeit.timeit("linear_builder(er, lbound, ubound, risk_constraints, risk_target)", number=number, globals=globals()) / number * 1000 A_eq = risk_constraints.T b_eq = np.array([1.]) w = cvxpy.Variable(n) curr_risk_exposure = w * risk_constraints constraints = [w >= lbound, w <= ubound, curr_risk_exposure == risk_target[0]] objective = cvxpy.Minimize(-w.T * er) prob = cvxpy.Problem(objective, constraints) prob.solve(solver='ECOS') elasped_time2 = timeit.timeit("prob.solve(solver='ECOS')", number=number, globals=globals()) / number * 1000 np.testing.assert_almost_equal(x1 @ er, np.array(w.value).flatten() @ er, 4) df.loc['alphamind', u_name] = elasped_time1 df.loc['cvxpy', u_name] = elasped_time2 alpha_logger.info(f"{u_name} is finished") df prob.value ``` ## 2. 
线性优化(带L1限制条件) ----------------------- ``` from cvxpy import pnorm df = pd.DataFrame(columns=u_names, index=['cvxpy', 'alphamind (clp simplex)', 'alphamind (clp interior)', 'alphamind (ecos)']) turn_over_target = 0.5 number = 1 for u_name, sample_data in zip(u_names, data_set): factor_data = sample_data['factor'] er = factor_data[factor].values n = len(er) lbound = np.ones(n) * lb ubound = np.ones(n) * ub if 'weight' in factor_data: current_position = factor_data.weight.values else: current_position = np.ones_like(er) / len(er) risk_constraints = np.ones((len(er), 1)) risk_target = (np.array([1.]), np.array([1.])) status, y, x1 = linear_builder(er, lbound, ubound, risk_constraints, risk_target, turn_over_target=turn_over_target, current_position=current_position, method='interior') elasped_time1 = timeit.timeit("""linear_builder(er, lbound, ubound, risk_constraints, risk_target, turn_over_target=turn_over_target, current_position=current_position, method='interior')""", number=number, globals=globals()) / number * 1000 w = cvxpy.Variable(n) curr_risk_exposure = risk_constraints.T @ w constraints = [w >= lbound, w <= ubound, curr_risk_exposure == risk_target[0], pnorm(w - current_position, 1) <= turn_over_target] objective = cvxpy.Minimize(-w.T * er) prob = cvxpy.Problem(objective, constraints) prob.solve(solver='ECOS') elasped_time2 = timeit.timeit("prob.solve(solver='ECOS')", number=number, globals=globals()) / number * 1000 status, y, x2 = linear_builder(er, lbound, ubound, risk_constraints, risk_target, turn_over_target=turn_over_target, current_position=current_position, method='simplex') elasped_time3 = timeit.timeit("""linear_builder(er, lbound, ubound, risk_constraints, risk_target, turn_over_target=turn_over_target, current_position=current_position, method='simplex')""", number=number, globals=globals()) / number * 1000 status, y, x3 = linear_builder(er, lbound, ubound, risk_constraints, risk_target, turn_over_target=turn_over_target, current_position=current_position, method='ecos') elasped_time4 = timeit.timeit("""linear_builder(er, lbound, ubound, risk_constraints, risk_target, turn_over_target=turn_over_target, current_position=current_position, method='ecos')""", number=number, globals=globals()) / number * 1000 np.testing.assert_almost_equal(x1 @ er, np.array(w.value).flatten() @ er, 4) np.testing.assert_almost_equal(x2 @ er, np.array(w.value).flatten() @ er, 4) np.testing.assert_almost_equal(x3 @ er, np.array(w.value).flatten() @ er, 4) df.loc['alphamind (clp interior)', u_name] = elasped_time1 df.loc['alphamind (clp simplex)', u_name] = elasped_time3 df.loc['alphamind (ecos)', u_name] = elasped_time4 df.loc['cvxpy', u_name] = elasped_time2 alpha_logger.info(f"{u_name} is finished") df ``` ## 3. Mean - Variance 优化 (无约束) ----------------------- ``` from cvxpy import * df = pd.DataFrame(columns=u_names, index=['cvxpy', 'alphamind']) number = 1 for u_name, sample_data in zip(u_names, data_set): all_styles = risk_styles + industry_styles + ['COUNTRY'] factor_data = sample_data['factor'] risk_cov = sample_data['risk_cov'][all_styles].values risk_exposure = factor_data[all_styles].values special_risk = factor_data.srisk.values sec_cov = risk_exposure @ risk_cov @ risk_exposure.T / 10000 + np.diag(special_risk ** 2) / 10000 er = factor_data[factor].values n = len(er) bm = np.zeros(n) lbound = -np.ones(n) * np.inf ubound = np.ones(n) * np.inf risk_model = dict(cov=None, factor_cov=risk_cov/10000., factor_loading=risk_exposure, idsync=(special_risk**2)/10000.) 
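    # risk_model bundles the structured risk model passed to mean_variance_builder below:
    # 'factor_cov' is the factor covariance matrix, 'factor_loading' the exposure matrix and
    # 'idsync' the idiosyncratic variances. The /10000. scaling mirrors the sec_cov
    # construction above and appears to convert percentage-squared numbers into decimal variances.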
status, y, x1 = mean_variance_builder(er, risk_model, bm, lbound, ubound, None, None, lam=1) elasped_time1 = timeit.timeit("""mean_variance_builder(er, risk_model, bm, lbound, ubound, None, None, lam=1)""", number=number, globals=globals()) / number * 1000 w = cvxpy.Variable(n) risk = sum_squares(multiply(special_risk / 100., w)) + quad_form((w.T * risk_exposure).T, risk_cov / 10000.) objective = cvxpy.Minimize(-w.T * er + 0.5 * risk) prob = cvxpy.Problem(objective) prob.solve(solver='ECOS') elasped_time2 = timeit.timeit("prob.solve(solver='ECOS')", number=number, globals=globals()) / number * 1000 u1 = -x1 @ er + 0.5 * x1 @ sec_cov @ x1 x2 = np.array(w.value).flatten() u2 = -x2 @ er + 0.5 * x2 @ sec_cov @ x2 np.testing.assert_array_almost_equal(u1, u2, 4) df.loc['alphamind', u_name] = elasped_time1 df.loc['cvxpy', u_name] = elasped_time2 alpha_logger.info(f"{u_name} is finished") df ``` ## 4. Mean - Variance 优化 (Box约束) --------------- ``` df = pd.DataFrame(columns=u_names, index=['cvxpy', 'alphamind']) number = 1 for u_name, sample_data in zip(u_names, data_set): all_styles = risk_styles + industry_styles + ['COUNTRY'] factor_data = sample_data['factor'] risk_cov = sample_data['risk_cov'][all_styles].values risk_exposure = factor_data[all_styles].values special_risk = factor_data.srisk.values sec_cov = risk_exposure @ risk_cov @ risk_exposure.T / 10000 + np.diag(special_risk ** 2) / 10000 er = factor_data[factor].values n = len(er) bm = np.zeros(n) lbound = np.zeros(n) ubound = np.ones(n) * 0.1 risk_model = dict(cov=None, factor_cov=risk_cov/10000., factor_loading=risk_exposure, idsync=(special_risk**2)/10000.) status, y, x1 = mean_variance_builder(er, risk_model, bm, lbound, ubound, None, None) elasped_time1 = timeit.timeit("""mean_variance_builder(er, risk_model, bm, lbound, ubound, None, None)""", number=number, globals=globals()) / number * 1000 w = cvxpy.Variable(n) risk = sum_squares(multiply(special_risk / 100., w)) + quad_form((w.T * risk_exposure).T, risk_cov / 10000.) objective = cvxpy.Minimize(-w.T * er + 0.5 * risk) constraints = [w >= lbound, w <= ubound] prob = cvxpy.Problem(objective, constraints) prob.solve(solver='ECOS') elasped_time2 = timeit.timeit("prob.solve(solver='ECOS')", number=number, globals=globals()) / number * 1000 u1 = -x1 @ er + 0.5 * x1 @ sec_cov @ x1 x2 = np.array(w.value).flatten() u2 = -x2 @ er + 0.5 * x2 @ sec_cov @ x2 np.testing.assert_array_almost_equal(u1, u2, 4) df.loc['alphamind', u_name] = elasped_time1 df.loc['cvxpy', u_name] = elasped_time2 alpha_logger.info(f"{u_name} is finished") df ``` ## 5. Mean - Variance 优化 (Box约束以及线性约束) ---------------- ``` df = pd.DataFrame(columns=u_names, index=['cvxpy', 'alphamind']) number = 1 for u_name, sample_data in zip(u_names, data_set): all_styles = risk_styles + industry_styles + ['COUNTRY'] factor_data = sample_data['factor'] risk_cov = sample_data['risk_cov'][all_styles].values risk_exposure = factor_data[all_styles].values special_risk = factor_data.srisk.values sec_cov = risk_exposure @ risk_cov @ risk_exposure.T / 10000 + np.diag(special_risk ** 2) / 10000 er = factor_data[factor].values n = len(er) bm = np.zeros(n) lbound = np.zeros(n) ubound = np.ones(n) * 0.1 risk_constraints = np.ones((len(er), 1)) risk_target = (np.array([1.]), np.array([1.])) risk_model = dict(cov=None, factor_cov=risk_cov/10000., factor_loading=risk_exposure, idsync=(special_risk**2)/10000.) 
status, y, x1 = mean_variance_builder(er, risk_model, bm, lbound, ubound, risk_constraints, risk_target) elasped_time1 = timeit.timeit("""mean_variance_builder(er, risk_model, bm, lbound, ubound, risk_constraints, risk_target)""", number=number, globals=globals()) / number * 1000 w = cvxpy.Variable(n) risk = sum_squares(multiply(special_risk / 100., w)) + quad_form((w.T * risk_exposure).T, risk_cov / 10000.) objective = cvxpy.Minimize(-w.T * er + 0.5 * risk) curr_risk_exposure = risk_constraints.T @ w constraints = [w >= lbound, w <= ubound, curr_risk_exposure == risk_target[0]] prob = cvxpy.Problem(objective, constraints) prob.solve(solver='ECOS') elasped_time2 = timeit.timeit("prob.solve(solver='ECOS')", number=number, globals=globals()) / number * 1000 u1 = -x1 @ er + 0.5 * x1 @ sec_cov @ x1 x2 = np.array(w.value).flatten() u2 = -x2 @ er + 0.5 * x2 @ sec_cov @ x2 np.testing.assert_array_almost_equal(u1, u2, 4) df.loc['alphamind', u_name] = elasped_time1 df.loc['cvxpy', u_name] = elasped_time2 alpha_logger.info(f"{u_name} is finished") df ``` ## 6. 线性优化(带二次限制条件) ------------------------- ``` df = pd.DataFrame(columns=u_names, index=['cvxpy', 'alphamind']) number = 1 target_vol = 0.5 for u_name, sample_data in zip(u_names, data_set): all_styles = risk_styles + industry_styles + ['COUNTRY'] factor_data = sample_data['factor'] risk_cov = sample_data['risk_cov'][all_styles].values risk_exposure = factor_data[all_styles].values special_risk = factor_data.srisk.values sec_cov = risk_exposure @ risk_cov @ risk_exposure.T / 10000 + np.diag(special_risk ** 2) / 10000 er = factor_data[factor].values n = len(er) if 'weight' in factor_data: bm = factor_data.weight.values else: bm = np.ones_like(er) / n lbound = np.zeros(n) ubound = np.ones(n) * 0.1 risk_constraints = np.ones((n, 1)) risk_target = (np.array([bm.sum()]), np.array([bm.sum()])) risk_model = dict(cov=None, factor_cov=risk_cov/10000., factor_loading=risk_exposure, idsync=(special_risk**2)/10000.) status, y, x1 = target_vol_builder(er, risk_model, bm, lbound, ubound, risk_constraints, risk_target, vol_target=target_vol) elasped_time1 = timeit.timeit("""target_vol_builder(er, risk_model, bm, lbound, ubound, risk_constraints, risk_target, vol_target=target_vol)""", number=number, globals=globals()) / number * 1000 w = cvxpy.Variable(n) risk = sum_squares(multiply(special_risk / 100., w)) + quad_form((w.T * risk_exposure).T, risk_cov / 10000.) objective = cvxpy.Minimize(-w.T * er) curr_risk_exposure = risk_constraints.T @ w constraints = [w >= lbound, w <= ubound, curr_risk_exposure == risk_target[0], risk <= target_vol * target_vol] prob = cvxpy.Problem(objective, constraints) prob.solve(solver='ECOS') elasped_time2 = timeit.timeit("prob.solve(solver='ECOS')", number=number, globals=globals()) / number * 1000 u1 = -x1 @ er x2 = np.array(w.value).flatten() u2 = -x2 @ er np.testing.assert_array_almost_equal(u1, u2, 4) df.loc['alphamind', u_name] = elasped_time1 df.loc['cvxpy', u_name] = elasped_time2 alpha_logger.info(f"{u_name} is finished") df ```
<a href="https://colab.research.google.com/github/Abhishekauti21/dsmp-pre-work/blob/master/practice_project.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ``` class test: def __init__(self,a): self.a=a def display(self): print(self.a) obj=test() obj.display() def f1(): x=100 print(x) x=+1 f1() area = { 'living' : [400, 450], 'living' : [650, 800], 'kitchen' : [300, 250], 'garage' : [250, 0]} print (area['living']) List_1=[2,6,7,8] List_2=[2,6,7,8] print(List_1[-2] + List_2[2]) d = {0: 'a', 1: 'b', 2: 'c'} for x, y in d.items(): print(x, y) Numbers=[10,5,7,8,9,5] print(max(Numbers)-min(Numbers)) fo = open("foo.txt", "read+") print("Name of the file: ", fo.name) # Assuming file has following 5 lines # This is 1st line # This is 2nd line # This is 3rd line # This is 4th line # This is 5th line for index in range(5): line = fo.readline() print("Line No {} - {}".format(index, line)) #Close opened file fo.close() x = "abcdef" while i in x: print(i, end=" ") def cube(x): return x * x * x x = cube(3) print (x) print(((True) or (False) and (False) or (False))) x1=int('16') x2=8 + 8 x3= (4**2) print(x1 is x2 is x3) Word='warrior knights' ,A=Word[9:14],B=Word[-13:-16:-1] B+A def to_upper(k): return k.upper() x = ['ab', 'cd'] print(list(map(to_upper, x))) my_string = "hello world" k = [(i.upper(), len(i)) for i in my_string] print(k) from csv import reader def explore_data(dataset, start, end, rows_and_columns=False): """Explore the elements of a list. Print the elements of a list starting from the index 'start'(included) upto the index 'end' (excluded). Keyword arguments: dataset -- list of which we want to see the elements start -- index of the first element we want to see, this is included end -- index of the stopping element, this is excluded rows_and_columns -- this parameter is optional while calling the function. It takes binary values, either True or False. If true, print the dimension of the list, else dont. """ dataset_slice = dataset[start:end] for row in dataset_slice: print(row) print('\n') # adds a new (empty) line between rows if rows_and_columns: print('Number of rows:', len(dataset)) print('Number of columns:', len(dataset[0])) def duplicate_and_unique_movies(dataset, index_): """Check the duplicate and unique entries. We have nested list. This function checks if the rows in the list is unique or duplicated based on the element at index 'index_'. It prints the Number of duplicate entries, along with some examples of duplicated entry. Keyword arguments: dataset -- two dimensional list which we want to explore index_ -- column index at which the element in each row would be checked for duplicacy """ duplicate = [] unique = [] for movie in dataset: name = movie[index_] if name in unique: duplicate.append(name) else: unique.append(name) print('Number of duplicate Movies:', len(duplicate)) print('\n') print('Examples of duplicate Movies:', duplicate[:15]) def movies_lang(dataset, index_, lang_): """Extract the movies of a particular language. Of all the movies available in all languages, this function extracts all the movies in a particular laguage. Once you ahve extracted the movies, call the explore_data() to print first few rows. 
Keyword arguments: dataset -- list containing the details of the movie index_ -- index which is to be compared for langauges lang_ -- desired language for which we want to filter out the movies Returns: movies_ -- list with details of the movies in selected language """ movies_ = [] for movie in movies: lang = movie[index_] if lang == lang_: movies_.append(movie) print("Examples of Movies in English Language:") explore_data(movies_, 0, 3, True) return movies_ def rate_bucket(dataset, rate_low, rate_high): """Extract the movies within the specified ratings. This function extracts all the movies that has rating between rate_low and high_rate. Once you ahve extracted the movies, call the explore_data() to print first few rows. Keyword arguments: dataset -- list containing the details of the movie rate_low -- lower range of rating rate_high -- higher range of rating Returns: rated_movies -- list of the details of the movies with required ratings """ rated_movies = [] for movie in dataset: vote_avg = float(movie[-4]) if ((vote_avg >= rate_low) & (vote_avg <= rate_high)): rated_movies.append(movie) print("Examples of Movies in required rating bucket:") explore_data(rated_movies, 0, 3, True) return rated_movies # Read the data file and store it as a list 'movies' opened_file = open(path, encoding="utf8") read_file = reader(opened_file) movies = list(read_file) # The first row is header. Extract and store it in 'movies_header'. movies_header = movies[0] print("Movies Header:\n", movies_header) # Subset the movies dataset such that the header is removed from the list and store it back in movies movies = movies[1:] # Delete wrong data # Explore the row #4553. You will see that as apart from the id, description, status and title, no other information is available. # Hence drop this row. print("Entry at index 4553:") explore_data(movies, 4553, 4554) del movies[4553] # Using explore_data() with appropriate parameters, view the details of the first 5 movies. print("First 5 Entries:") explore_data(movies, 0, 5, True) # Our dataset might have more than one entry for a movie. Call duplicate_and_unique_movies() with index of the name to check the same. duplicate_and_unique_movies(movies, 13) # We saw that there are 3 movies for which the there are multiple entries. # Create a dictionary, 'reviews_max' that will have the name of the movie as key, and the maximum number of reviews as values. reviews_max = {} for movie in movies: name = movie[13] n_reviews = float(movie[12]) if name in reviews_max and reviews_max[name] < n_reviews: reviews_max[name] = n_reviews elif name not in reviews_max: reviews_max[name] = n_reviews len(reviews_max) # Create a list 'movies_clean', which will filter out the duplicate movies and contain the rows with maximum number of reviews for duplicate movies, as stored in 'review_max'. movies_clean = [] already_added = [] for movie in movies: name = movie[13] n_reviews = float(movie[12]) if (reviews_max[name] == n_reviews) and (name not in already_added): movies_clean.append(movie) already_added.append(name) len(movies_clean) # Calling movies_lang(), extract all the english movies and store it in movies_en. movies_en = movies_lang(movies_clean, 3, 'en') # Call the rate_bucket function to see the movies with rating higher than 8. high_rated_movies = rate_bucket(movies_en, 8, 10) ```
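As a quick sanity check (not part of the original notebook), the same `duplicate_and_unique_movies()` helper can be re-run on the cleaned list; after keeping only the highest-review entry per title, the reported number of duplicates should be zero.

```
# Re-check the de-duplicated list: no movie name should appear more than once now.
duplicate_and_unique_movies(movies_clean, 13)
```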
# Detecting COVID-19 with Chest X Ray using PyTorch Image classification of Chest X Rays in one of three classes: Normal, Viral Pneumonia, COVID-19 Dataset from [COVID-19 Radiography Dataset](https://www.kaggle.com/tawsifurrahman/covid19-radiography-database) on Kaggle # Importing Libraries ``` from google.colab import drive drive.mount('/gdrive') %matplotlib inline import os import shutil import copy import random import torch import torch.nn as nn import torchvision import torch.optim as optim from torch.optim import lr_scheduler import numpy as np import seaborn as sns import time from sklearn.metrics import confusion_matrix from PIL import Image import matplotlib.pyplot as plt torch.manual_seed(0) print('Using PyTorch version', torch.__version__) ``` # Preparing Training and Test Sets ``` class_names = ['Non-Covid', 'Covid'] root_dir = '/gdrive/My Drive/Research_Documents_completed/Data/Data/' source_dirs = ['non', 'covid'] ``` # Creating Custom Dataset ``` class ChestXRayDataset(torch.utils.data.Dataset): def __init__(self, image_dirs, transform): def get_images(class_name): images = [x for x in os.listdir(image_dirs[class_name]) if x.lower().endswith('png') or x.lower().endswith('jpg')] print(f'Found {len(images)} {class_name} examples') return images self.images = {} self.class_names = ['Non-Covid', 'Covid'] for class_name in self.class_names: self.images[class_name] = get_images(class_name) self.image_dirs = image_dirs self.transform = transform def __len__(self): return sum([len(self.images[class_name]) for class_name in self.class_names]) def __getitem__(self, index): class_name = random.choice(self.class_names) index = index % len(self.images[class_name]) image_name = self.images[class_name][index] image_path = os.path.join(self.image_dirs[class_name], image_name) image = Image.open(image_path).convert('RGB') return self.transform(image), self.class_names.index(class_name) ``` # Image Transformations ``` train_transform = torchvision.transforms.Compose([ torchvision.transforms.Resize(size=(224, 224)), torchvision.transforms.RandomHorizontalFlip(), torchvision.transforms.ToTensor(), torchvision.transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) ]) test_transform = torchvision.transforms.Compose([ torchvision.transforms.Resize(size=(224, 224)), torchvision.transforms.ToTensor(), torchvision.transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]) ``` # Prepare DataLoader ``` train_dirs = { 'Non-Covid': '/gdrive/My Drive/Research_Documents_completed/Data/Data/non/', 'Covid': '/gdrive/My Drive/Research_Documents_completed/Data/Data/covid/' } #train_dirs = { # 'Non-Covid': '/gdrive/My Drive/Data/Data/non/', # 'Covid': '/gdrive/My Drive/Data/Data/covid/' #} train_dataset = ChestXRayDataset(train_dirs, train_transform) test_dirs = { 'Non-Covid': '/gdrive/My Drive/Research_Documents_completed/Data/Data/test/non/', 'Covid': '/gdrive/My Drive/Research_Documents_completed/Data/Data/test/covid/' } test_dataset = ChestXRayDataset(test_dirs, test_transform) batch_size = 25 dl_train = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True) dl_test = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size, shuffle=True) print(dl_train) print('Number of training batches', len(dl_train)) print('Number of test batches', len(dl_test)) ``` # Data Visualization ``` class_names = train_dataset.class_names def show_images(images, labels, preds): plt.figure(figsize=(30, 20)) for i, image in enumerate(images): plt.subplot(1, 25, i + 1, 
xticks=[], yticks=[]) image = image.numpy().transpose((1, 2, 0)) mean = np.array([0.485, 0.456, 0.406]) std = np.array([0.229, 0.224, 0.225]) image = image * std + mean image = np.clip(image, 0., 1.) plt.imshow(image) col = 'green' if preds[i] != labels[i]: col = 'red' plt.xlabel(f'{class_names[int(labels[i].numpy())]}') plt.ylabel(f'{class_names[int(preds[i].numpy())]}', color=col) plt.tight_layout() plt.show() images, labels = next(iter(dl_train)) show_images(images, labels, labels) images, labels = next(iter(dl_test)) show_images(images, labels, labels) ``` # Creating the Model ``` resnet18 = torchvision.models.resnet18(pretrained=True) print(resnet18) resnet18.fc = torch.nn.Linear(in_features=512, out_features=2) loss_fn = torch.nn.CrossEntropyLoss() optimizer = torch.optim.Adam(resnet18.parameters(), lr=3e-5) print(resnet18) def show_preds(): resnet18.eval() images, labels = next(iter(dl_test)) outputs = resnet18(images) _, preds = torch.max(outputs, 1) show_images(images, labels, preds) show_preds() ``` # Training the Model ``` def train(epochs): best_model_wts = copy.deepcopy(resnet18.state_dict()) b_acc = 0.0 t_loss = [] t_acc = [] avg_t_loss=[] avg_t_acc=[] v_loss = [] v_acc=[] avg_v_loss = [] avg_v_acc = [] ep = [] print('Starting training..') for e in range(0, epochs): ep.append(e+1) print('='*20) print(f'Starting epoch {e + 1}/{epochs}') print('='*20) train_loss = 0. val_loss = 0. train_accuracy = 0 total_train = 0 correct_train = 0 resnet18.train() # set model to training phase for train_step, (images, labels) in enumerate(dl_train): optimizer.zero_grad() outputs = resnet18(images) _, pred = torch.max(outputs, 1) loss = loss_fn(outputs, labels) loss.backward() optimizer.step() train_loss += loss.item() train_loss /= (train_step + 1) _, predicted = torch.max(outputs, 1) total_train += labels.nelement() correct_train += sum((predicted == labels).numpy()) train_accuracy = correct_train / total_train t_loss.append(train_loss) t_acc.append(train_accuracy) if train_step % 20 == 0: print('Evaluating at step', train_step) print(f'Training Loss: {train_loss:.4f}, Training Accuracy: {train_accuracy:.4f}') accuracy = 0. 
resnet18.eval() # set model to eval phase for val_step, (images, labels) in enumerate(dl_test): outputs = resnet18(images) loss = loss_fn(outputs, labels) val_loss += loss.item() _, preds = torch.max(outputs, 1) accuracy += sum((preds == labels).numpy()) val_loss /= (val_step + 1) accuracy = accuracy/len(test_dataset) print(f'Validation Loss: {val_loss:.4f}, Validation Accuracy: {accuracy:.4f}') v_loss.append(val_loss) v_acc.append(accuracy) show_preds() resnet18.train() if accuracy > b_acc: b_acc = accuracy avg_t_loss.append(sum(t_loss)/len(t_loss)) avg_v_loss.append(sum(v_loss)/len(v_loss)) avg_t_acc.append(sum(t_acc)/len(t_acc)) avg_v_acc.append(sum(v_acc)/len(v_acc)) best_model_wts = copy.deepcopy(resnet18.state_dict()) print('Best validation Accuracy: {:4f}'.format(b_acc)) print('Training complete..') plt.plot(ep, avg_t_loss, 'g', label='Training loss') plt.plot(ep, avg_v_loss, 'b', label='validation loss') plt.title('Training and Validation loss for each epoch') plt.xlabel('Epochs') plt.ylabel('Loss') plt.legend() plt.savefig('/gdrive/My Drive/Research_Documents_completed/Resnet18_completed/resnet18_loss.png') plt.show() plt.plot(ep, avg_t_acc, 'g', label='Training accuracy') plt.plot(ep, avg_v_acc, 'b', label='validation accuracy') plt.title('Training and Validation Accuracy for each epoch') plt.xlabel('Epochs') plt.ylabel('Accuracy') plt.legend() plt.savefig('/gdrive/My Drive/Research_Documents_completed/Resnet18_completed/resnet18_accuarcy.png') plt.show() torch.save(resnet18.state_dict(),'/gdrive/My Drive/Research_Documents_completed/Resnet18_completed/resnet18.pt') %%time train(epochs=5) ``` # Final Results VALIDATION LOSS AND TRAINING LOSS VS EPOCH VALIDATION ACCURACY AND TRAINING ACCURACY VS EPOCH BEST ACCURACY ERROR.. ``` show_preds() ```
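The imports cell at the top pulls in `confusion_matrix` and `seaborn` but the notebook never uses them. As a possible follow-up, here is a minimal sketch that plots a confusion matrix over the whole test set; it assumes the `resnet18`, `dl_test` and `class_names` objects defined above and that everything stays on the CPU, as in the training loop.

```
# Collect predictions and labels for every test batch, then plot a confusion matrix.
all_preds, all_labels = [], []
resnet18.eval()
with torch.no_grad():
    for images, labels in dl_test:
        outputs = resnet18(images)
        _, preds = torch.max(outputs, 1)
        all_preds.extend(preds.numpy())
        all_labels.extend(labels.numpy())

cm = confusion_matrix(all_labels, all_preds)
sns.heatmap(cm, annot=True, fmt='d', xticklabels=class_names, yticklabels=class_names)
plt.xlabel('Predicted')
plt.ylabel('Actual')
plt.show()
```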
<table class="ee-notebook-buttons" align="left"> <td><a target="_blank" href="https://github.com/giswqs/geemap/tree/master/examples/notebooks/geemap_and_ipyleaflet.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td> <td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/geemap/blob/master/examples/notebooks/geemap_and_ipyleaflet.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td> <td><a target="_blank" href="https://colab.research.google.com/github/giswqs/geemap/blob/master/examples/notebooks/geemap_and_ipyleaflet.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td> </table> ## Install Earth Engine API and geemap Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`. The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet. **Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving). ``` # Installs geemap package import subprocess try: import geemap except ImportError: print('geemap package not installed. 
Installing ...') subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap']) # Checks whether this notebook is running on Google Colab try: import google.colab import geemap.eefolium as emap except: import geemap as emap # Authenticates and initializes Earth Engine import ee try: ee.Initialize() except Exception as e: ee.Authenticate() ee.Initialize() ``` ## Create an interactive map ``` import geemap Map = geemap.Map(center=(40, -100), zoom=4) Map.add_minimap(position='bottomright') Map ``` ## Add tile layers For example, you can Google Map tile layer: ``` url = 'https://mt1.google.com/vt/lyrs=m&x={x}&y={y}&z={z}' Map.add_tile_layer(url, name='Google Map', attribution='Google') ``` Add Google Terrain tile layer: ``` url = 'https://mt1.google.com/vt/lyrs=p&x={x}&y={y}&z={z}' Map.add_tile_layer(url, name='Google Terrain', attribution='Google') ``` ## Add WMS layers More WMS layers can be found at <https://viewer.nationalmap.gov/services/>. For example, you can add NAIP imagery. ``` url = 'https://services.nationalmap.gov/arcgis/services/USGSNAIPImagery/ImageServer/WMSServer?' Map.add_wms_layer(url=url, layers='0', name='NAIP Imagery', format='image/png') ``` Add USGS 3DEP Elevation Dataset ``` url = 'https://elevation.nationalmap.gov/arcgis/services/3DEPElevation/ImageServer/WMSServer?' Map.add_wms_layer(url=url, layers='3DEPElevation:None', name='3DEP Elevation', format='image/png') ``` ## Capture user inputs ``` import geemap from ipywidgets import Label from ipyleaflet import Marker Map = geemap.Map(center=(40, -100), zoom=4) label = Label() display(label) coordinates = [] def handle_interaction(**kwargs): latlon = kwargs.get('coordinates') if kwargs.get('type') == 'mousemove': label.value = str(latlon) elif kwargs.get('type') == 'click': coordinates.append(latlon) Map.add_layer(Marker(location=latlon)) Map.on_interaction(handle_interaction) Map print(coordinates) ``` ## A simpler way for capturing user inputs ``` import geemap Map = geemap.Map(center=(40, -100), zoom=4) cluster = Map.listening(event='click', add_marker=True) Map # Get the last mouse clicked coordinates Map.last_click # Get all the mouse clicked coordinates Map.all_clicks ``` ## SplitMap control ``` import geemap from ipyleaflet import * Map = geemap.Map(center=(47.50, -101), zoom=7) right_layer = WMSLayer( url = 'https://ndgishub.nd.gov/arcgis/services/Imagery/AerialImage_ND_2017_CIR/ImageServer/WMSServer?', layers = 'AerialImage_ND_2017_CIR', name = 'AerialImage_ND_2017_CIR', format = 'image/png' ) left_layer = WMSLayer( url = 'https://ndgishub.nd.gov/arcgis/services/Imagery/AerialImage_ND_2018_CIR/ImageServer/WMSServer?', layers = 'AerialImage_ND_2018_CIR', name = 'AerialImage_ND_2018_CIR', format = 'image/png' ) control = SplitMapControl(left_layer=left_layer, right_layer=right_layer) Map.add_control(control) Map.add_control(LayersControl(position='topright')) Map.add_control(FullScreenControl()) Map import geemap Map = geemap.Map() Map.split_map(left_layer='HYBRID', right_layer='ESRI') Map ```
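The introduction mentions Earth Engine layer methods such as `Map.addLayer()` and `Map.setCenter()`, which the cells above do not demonstrate. A minimal sketch, assuming Earth Engine was authenticated as in the first cell; the SRTM asset and the visualization parameters are illustrative choices:

```
import ee
import geemap

Map = geemap.Map(center=(40, -100), zoom=4)

# A public Earth Engine image: the SRTM digital elevation model
dem = ee.Image('USGS/SRTMGL1_003')
vis_params = {
    'min': 0,
    'max': 4000,
    'palette': ['006633', 'E5FFCC', '662A00', 'D8D8D8', 'F5F5F5'],
}

# Add the Earth Engine layer and recenter the map on it
Map.addLayer(dem, vis_params, 'SRTM DEM')
Map.setCenter(-100, 40, 4)
Map
```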
## **Nigerian Music scraped from Spotify - an analysis** Clustering is a type of [Unsupervised Learning](https://wikipedia.org/wiki/Unsupervised_learning) that presumes that a dataset is unlabelled or that its inputs are not matched with predefined outputs. It uses various algorithms to sort through unlabeled data and provide groupings according to patterns it discerns in the data. [**Pre-lecture quiz**](https://white-water-09ec41f0f.azurestaticapps.net/quiz/27/) ### **Introduction** [Clustering](https://link.springer.com/referenceworkentry/10.1007%2F978-0-387-30164-8_124) is very useful for data exploration. Let's see if it can help discover trends and patterns in the way Nigerian audiences consume music. > ✅ Take a minute to think about the uses of clustering. In real life, clustering happens whenever you have a pile of laundry and need to sort out your family members' clothes 🧦👕👖🩲. In data science, clustering happens when trying to analyze a user's preferences, or determine the characteristics of any unlabeled dataset. Clustering, in a way, helps make sense of chaos, like a sock drawer. In a professional setting, clustering can be used to determine things like market segmentation, determining what age groups buy what items, for example. Another use would be anomaly detection, perhaps to detect fraud from a dataset of credit card transactions. Or you might use clustering to determine tumors in a batch of medical scans. ✅ Think a minute about how you might have encountered clustering 'in the wild', in a banking, e-commerce, or business setting. > 🎓 Interestingly, cluster analysis originated in the fields of Anthropology and Psychology in the 1930s. Can you imagine how it might have been used? Alternately, you could use it for grouping search results - by shopping links, images, or reviews, for example. Clustering is useful when you have a large dataset that you want to reduce and on which you want to perform more granular analysis, so the technique can be used to learn about data before other models are constructed. ✅ Once your data is organized in clusters, you assign it a cluster Id, and this technique can be useful when preserving a dataset's privacy; you can instead refer to a data point by its cluster id, rather than by more revealing identifiable data. Can you think of other reasons why you'd refer to a cluster Id rather than other elements of the cluster to identify it? ### Getting started with clustering > 🎓 How we create clusters has a lot to do with how we gather up the data points into groups. Let's unpack some vocabulary: > > 🎓 ['Transductive' vs. 'inductive'](https://wikipedia.org/wiki/Transduction_(machine_learning)) > > Transductive inference is derived from observed training cases that map to specific test cases. Inductive inference is derived from training cases that map to general rules which are only then applied to test cases. > > An example: Imagine you have a dataset that is only partially labelled. Some things are 'records', some 'cds', and some are blank. Your job is to provide labels for the blanks. If you choose an inductive approach, you'd train a model looking for 'records' and 'cds', and apply those labels to your unlabeled data. This approach will have trouble classifying things that are actually 'cassettes'. A transductive approach, on the other hand, handles this unknown data more effectively as it works to group similar items together and then applies a label to a group. 
In this case, clusters might reflect 'round musical things' and 'square musical things'. > > 🎓 ['Non-flat' vs. 'flat' geometry](https://datascience.stackexchange.com/questions/52260/terminology-flat-geometry-in-the-context-of-clustering) > > Derived from mathematical terminology, non-flat vs. flat geometry refers to the measure of distances between points by either 'flat' ([Euclidean](https://wikipedia.org/wiki/Euclidean_geometry)) or 'non-flat' (non-Euclidean) geometrical methods. > > 'Flat' in this context refers to Euclidean geometry (parts of which are taught as 'plane' geometry), and non-flat refers to non-Euclidean geometry. What does geometry have to do with machine learning? Well, as two fields that are rooted in mathematics, there must be a common way to measure distances between points in clusters, and that can be done in a 'flat' or 'non-flat' way, depending on the nature of the data. [Euclidean distances](https://wikipedia.org/wiki/Euclidean_distance) are measured as the length of a line segment between two points. [Non-Euclidean distances](https://wikipedia.org/wiki/Non-Euclidean_geometry) are measured along a curve. If your data, visualized, seems to not exist on a plane, you might need to use a specialized algorithm to handle it. <p > <img src="../../images/flat-nonflat.png" width="600"/> <figcaption>Infographic by Dasani Madipalli</figcaption> > 🎓 ['Distances'](https://web.stanford.edu/class/cs345a/slides/12-clustering.pdf) > > Clusters are defined by their distance matrix, e.g. the distances between points. This distance can be measured a few ways. Euclidean clusters are defined by the average of the point values, and contain a 'centroid' or center point. Distances are thus measured by the distance to that centroid. Non-Euclidean distances refer to 'clustroids', the point closest to other points. Clustroids in turn can be defined in various ways. > > 🎓 ['Constrained'](https://wikipedia.org/wiki/Constrained_clustering) > > [Constrained Clustering](https://web.cs.ucdavis.edu/~davidson/Publications/ICDMTutorial.pdf) introduces 'semi-supervised' learning into this unsupervised method. The relationships between points are flagged as 'cannot link' or 'must-link' so some rules are forced on the dataset. > > An example: If an algorithm is set free on a batch of unlabelled or semi-labelled data, the clusters it produces may be of poor quality. In the example above, the clusters might group 'round music things' and 'square music things' and 'triangular things' and 'cookies'. If given some constraints, or rules to follow ("the item must be made of plastic", "the item needs to be able to produce music") this can help 'constrain' the algorithm to make better choices. > > 🎓 'Density' > > Data that is 'noisy' is considered to be 'dense'. The distances between points in each of its clusters may prove, on examination, to be more or less dense, or 'crowded' and thus this data needs to be analyzed with the appropriate clustering method. [This article](https://www.kdnuggets.com/2020/02/understanding-density-based-clustering.html) demonstrates the difference between using K-Means clustering vs. HDBSCAN algorithms to explore a noisy dataset with uneven cluster density. Deepen your understanding of clustering techniques in this [Learn module](https://docs.microsoft.com/learn/modules/train-evaluate-cluster-models?WT.mc_id=academic-15963-cxa) ### **Clustering algorithms** There are over 100 clustering algorithms, and their use depends on the nature of the data at hand. 
Let's discuss some of the major ones: - **Hierarchical clustering**. If an object is classified by its proximity to a nearby object, rather than to one farther away, clusters are formed based on their members' distance to and from other objects. Hierarchical clustering is characterized by repeatedly combining two clusters. <p > <img src="../../images/hierarchical.png" width="600"/> <figcaption>Infographic by Dasani Madipalli</figcaption> - **Centroid clustering**. This popular algorithm requires the choice of 'k', or the number of clusters to form, after which the algorithm determines the center point of a cluster and gathers data around that point. [K-means clustering](https://wikipedia.org/wiki/K-means_clustering) is a popular version of centroid clustering which separates a data set into pre-defined K groups. The center is determined by the nearest mean, thus the name. The squared distance from the cluster is minimized. <p > <img src="../../images/centroid.png" width="600"/> <figcaption>Infographic by Dasani Madipalli</figcaption> - **Distribution-based clustering**. Based in statistical modeling, distribution-based clustering centers on determining the probability that a data point belongs to a cluster, and assigning it accordingly. Gaussian mixture methods belong to this type. - **Density-based clustering**. Data points are assigned to clusters based on their density, or their grouping around each other. Data points far from the group are considered outliers or noise. DBSCAN, Mean-shift and OPTICS belong to this type of clustering. - **Grid-based clustering**. For multi-dimensional datasets, a grid is created and the data is divided amongst the grid's cells, thereby creating clusters. The best way to learn about clustering is to try it for yourself, so that's what you'll do in this exercise. We'll require some packages to knock-off this module. You can have them installed as: `install.packages(c('tidyverse', 'tidymodels', 'DataExplorer', 'summarytools', 'plotly', 'paletteer', 'corrplot', 'patchwork'))` Alternatively, the script below checks whether you have the packages required to complete this module and installs them for you in case some are missing. ``` suppressWarnings(if(!require("pacman")) install.packages("pacman")) pacman::p_load('tidyverse', 'tidymodels', 'DataExplorer', 'summarytools', 'plotly', 'paletteer', 'corrplot', 'patchwork') ``` ## Exercise - cluster your data Clustering as a technique is greatly aided by proper visualization, so let's get started by visualizing our music data. This exercise will help us decide which of the methods of clustering we should most effectively use for the nature of this data. Let's hit the ground running by importing the data. ``` # Load the core tidyverse and make it available in your current R session library(tidyverse) # Import the data into a tibble df <- read_csv(file = "https://raw.githubusercontent.com/microsoft/ML-For-Beginners/main/5-Clustering/data/nigerian-songs.csv") # View the first 5 rows of the data set df %>% slice_head(n = 5) ``` Sometimes, we may want some little more information on our data. We can have a look at the `data` and `its structure` by using the [*glimpse()*](https://pillar.r-lib.org/reference/glimpse.html) function: ``` # Glimpse into the data set df %>% glimpse() ``` Good job!💪 We can observe that `glimpse()` will give you the total number of rows (observations) and columns (variables), then, the first few entries of each variable in a row after the variable name. 
In addition, the *data type* of the variable is given immediately after each variable's name inside `< >`. `DataExplorer::introduce()` can summarize this information neatly: ``` # Describe basic information for our data df %>% introduce() # A visual display of the same df %>% plot_intro() ``` Awesome! We have just learnt that our data has no missing values. While we are at it, we can explore common central tendency statistics (e.g [mean](https://en.wikipedia.org/wiki/Arithmetic_mean) and [median](https://en.wikipedia.org/wiki/Median)) and measures of dispersion (e.g [standard deviation](https://en.wikipedia.org/wiki/Standard_deviation)) using `summarytools::descr()` ``` # Describe common statistics df %>% descr(stats = "common") ``` Let's look at the general values of the data. Note that popularity can be `0`, which show songs that have no ranking. We'll remove those shortly. > 🤔 If we are working with clustering, an unsupervised method that does not require labeled data, why are we showing this data with labels? In the data exploration phase, they come in handy, but they are not necessary for the clustering algorithms to work. ### 1. Explore popular genres Let's go ahead and find out the most popular genres 🎶 by making a count of the instances it appears. ``` # Popular genres top_genres <- df %>% count(artist_top_genre, sort = TRUE) %>% # Encode to categorical and reorder the according to count mutate(artist_top_genre = factor(artist_top_genre) %>% fct_inorder()) # Print the top genres top_genres ``` That went well! They say a picture is worth a thousand rows of a data frame (actually nobody ever says that 😅). But you get the gist of it, right? One way to visualize categorical data (character or factor variables) is using barplots. Let's make a barplot of the top 10 genres: ``` # Change the default gray theme theme_set(theme_light()) # Visualize popular genres top_genres %>% slice(1:10) %>% ggplot(mapping = aes(x = artist_top_genre, y = n, fill = artist_top_genre)) + geom_col(alpha = 0.8) + paletteer::scale_fill_paletteer_d("rcartocolor::Vivid") + ggtitle("Top genres") + theme(plot.title = element_text(hjust = 0.5), # Rotates the X markers (so we can read them) axis.text.x = element_text(angle = 90)) ``` Now it's way easier to identify that we have `missing` genres 🧐! > A good visualisation will show you things that you did not expect, or raise new questions about the data - Hadley Wickham and Garrett Grolemund, [R For Data Science](https://r4ds.had.co.nz/introduction.html) Note, when the top genre is described as `Missing`, that means that Spotify did not classify it, so let's get rid of it. ``` # Visualize popular genres top_genres %>% filter(artist_top_genre != "Missing") %>% slice(1:10) %>% ggplot(mapping = aes(x = artist_top_genre, y = n, fill = artist_top_genre)) + geom_col(alpha = 0.8) + paletteer::scale_fill_paletteer_d("rcartocolor::Vivid") + ggtitle("Top genres") + theme(plot.title = element_text(hjust = 0.5), # Rotates the X markers (so we can read them) axis.text.x = element_text(angle = 90)) ``` From the little data exploration, we learn that the top three genres dominate this dataset. 
Let's concentrate on `afro dancehall`, `afropop`, and `nigerian pop`, additionally filter the dataset to remove anything with a 0 popularity value (meaning it was not classified with a popularity in the dataset and can be considered noise for our purposes): ``` nigerian_songs <- df %>% # Concentrate on top 3 genres filter(artist_top_genre %in% c("afro dancehall", "afropop","nigerian pop")) %>% # Remove unclassified observations filter(popularity != 0) # Visualize popular genres nigerian_songs %>% count(artist_top_genre) %>% ggplot(mapping = aes(x = artist_top_genre, y = n, fill = artist_top_genre)) + geom_col(alpha = 0.8) + paletteer::scale_fill_paletteer_d("ggsci::category10_d3") + ggtitle("Top genres") + theme(plot.title = element_text(hjust = 0.5)) ``` Let's see whether there is any apparent linear relationship among the numerical variables in our data set. This relationship is quantified mathematically by the [correlation statistic](https://en.wikipedia.org/wiki/Correlation). The correlation statistic is a value between -1 and 1 that indicates the strength of a relationship. Values above 0 indicate a *positive* correlation (high values of one variable tend to coincide with high values of the other), while values below 0 indicate a *negative* correlation (high values of one variable tend to coincide with low values of the other). ``` # Narrow down to numeric variables and fid correlation corr_mat <- nigerian_songs %>% select(where(is.numeric)) %>% cor() # Visualize correlation matrix corrplot(corr_mat, order = 'AOE', col = c('white', 'black'), bg = 'gold2') ``` The data is not strongly correlated except between `energy` and `loudness`, which makes sense, given that loud music is usually pretty energetic. `Popularity` has a correspondence to `release date`, which also makes sense, as more recent songs are probably more popular. Length and energy seem to have a correlation too. It will be interesting to see what a clustering algorithm can make of this data! > 🎓 Note that correlation does not imply causation! We have proof of correlation but no proof of causation. An [amusing web site](https://tylervigen.com/spurious-correlations) has some visuals that emphasize this point. ### 2. Explore data distribution Let's ask some more subtle questions. Are the genres significantly different in the perception of their danceability, based on their popularity? Let's examine our top three genres data distribution for popularity and danceability along a given x and y axis using [density plots](https://www.khanacademy.org/math/ap-statistics/density-curves-normal-distribution-ap/density-curves/v/density-curves). 
``` # Perform 2D kernel density estimation density_estimate_2d <- nigerian_songs %>% ggplot(mapping = aes(x = popularity, y = danceability, color = artist_top_genre)) + geom_density_2d(bins = 5, size = 1) + paletteer::scale_color_paletteer_d("RSkittleBrewer::wildberry") + xlim(-20, 80) + ylim(0, 1.2) # Density plot based on the popularity density_estimate_pop <- nigerian_songs %>% ggplot(mapping = aes(x = popularity, fill = artist_top_genre, color = artist_top_genre)) + geom_density(size = 1, alpha = 0.5) + paletteer::scale_fill_paletteer_d("RSkittleBrewer::wildberry") + paletteer::scale_color_paletteer_d("RSkittleBrewer::wildberry") + theme(legend.position = "none") # Density plot based on the danceability density_estimate_dance <- nigerian_songs %>% ggplot(mapping = aes(x = danceability, fill = artist_top_genre, color = artist_top_genre)) + geom_density(size = 1, alpha = 0.5) + paletteer::scale_fill_paletteer_d("RSkittleBrewer::wildberry") + paletteer::scale_color_paletteer_d("RSkittleBrewer::wildberry") # Patch everything together library(patchwork) density_estimate_2d / (density_estimate_pop + density_estimate_dance) ``` We see that there are concentric circles that line up, regardless of genre. Could it be that Nigerian tastes converge at a certain level of danceability for this genre? In general, the three genres align in terms of their popularity and danceability. Determining clusters in this loosely-aligned data will be a challenge. Let's see whether a scatter plot can support this. ``` # A scatter plot of popularity and danceability scatter_plot <- nigerian_songs %>% ggplot(mapping = aes(x = popularity, y = danceability, color = artist_top_genre, shape = artist_top_genre)) + geom_point(size = 2, alpha = 0.8) + paletteer::scale_color_paletteer_d("futurevisions::mars") # Add a touch of interactivity ggplotly(scatter_plot) ``` A scatterplot of the same axes shows a similar pattern of convergence. In general, for clustering, you can use scatterplots to show clusters of data, so mastering this type of visualization is very useful. In the next lesson, we will take this filtered data and use k-means clustering to discover groups in this data that see to overlap in interesting ways. ## **🚀 Challenge** In preparation for the next lesson, make a chart about the various clustering algorithms you might discover and use in a production environment. What kinds of problems is the clustering trying to address? ## [**Post-lecture quiz**](https://white-water-09ec41f0f.azurestaticapps.net/quiz/28/) ## **Review & Self Study** Before you apply clustering algorithms, as we have learned, it's a good idea to understand the nature of your dataset. Read more on this topic [here](https://www.kdnuggets.com/2019/10/right-clustering-algorithm.html) Deepen your understanding of clustering techniques: - [Train and Evaluate Clustering Models using Tidymodels and friends](https://rpubs.com/eR_ic/clustering) - Bradley Boehmke & Brandon Greenwell, [*Hands-On Machine Learning with R*](https://bradleyboehmke.github.io/HOML/)*.* ## **Assignment** [Research other visualizations for clustering](https://github.com/microsoft/ML-For-Beginners/blob/main/5-Clustering/1-Visualize/assignment.md) ## THANK YOU TO: [Jen Looper](https://www.twitter.com/jenlooper) for creating the original Python version of this module ♥️ [`Dasani Madipalli`](https://twitter.com/dasani_decoded) for creating the amazing illustrations that make machine learning concepts more interpretable and easier to understand. 
Happy Learning, [Eric](https://twitter.com/ericntay), Gold Microsoft Learn Student Ambassador.
# B - A Closer Look at Word Embeddings We have very briefly covered how word embeddings (also known as word vectors) are used in the tutorials. In this appendix we'll have a closer look at these embeddings and find some (hopefully) interesting results. Embeddings transform a one-hot encoded vector (a vector that is 0 in elements except one, which is 1) into a much smaller dimension vector of real numbers. The one-hot encoded vector is also known as a *sparse vector*, whilst the real valued vector is known as a *dense vector*. The key concept in these word embeddings is that words that appear in similar _contexts_ appear nearby in the vector space, i.e. the Euclidean distance between these two word vectors is small. By context here, we mean the surrounding words. For example in the sentences "I purchased some items at the shop" and "I purchased some items at the store" the words 'shop' and 'store' appear in the same context and thus should be close together in vector space. You may have also heard about *word2vec*. *word2vec* is an algorithm (actually a bunch of algorithms) that calculates word vectors from a corpus. In this appendix we use *GloVe* vectors, *GloVe* being another algorithm to calculate word vectors. If you want to know how *word2vec* works, check out a two part series [here](http://mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model/) and [here](http://mccormickml.com/2017/01/11/word2vec-tutorial-part-2-negative-sampling/), and if you want to find out more about *GloVe*, check the website [here](https://nlp.stanford.edu/projects/glove/). In PyTorch, we use word vectors with the `nn.Embedding` layer, which takes a _**[sentence length, batch size]**_ tensor and transforms it into a _**[sentence length, batch size, embedding dimensions]**_ tensor. In tutorial 2 onwards, we also used pre-trained word embeddings (specifically the GloVe vectors) provided by TorchText. These embeddings have been trained on a gigantic corpus. We can use these pre-trained vectors within any of our models, with the idea that as they have already learned the context of each word they will give us a better starting point for our word vectors. This usually leads to faster training time and/or improved accuracy. In this appendix we won't be training any models, instead we'll be looking at the word embeddings and finding a few interesting things about them. A lot of the code from the first half of this appendix is taken from [here](https://github.com/spro/practical-pytorch/blob/master/glove-word-vectors/glove-word-vectors.ipynb). For more information about word embeddings, go [here](https://monkeylearn.com/blog/word-embeddings-transform-text-numbers/). ## Loading the GloVe vectors First, we'll load the GloVe vectors. The `name` field specifies what the vectors have been trained on, here the `6B` means a corpus of 6 billion words. The `dim` argument specifies the dimensionality of the word vectors. GloVe vectors are available in 50, 100, 200 and 300 dimensions. There is also a `42B` and `840B` glove vectors, however they are only available at 300 dimensions. ``` import torchtext.vocab glove = torchtext.vocab.GloVe(name = '6B', dim = 100) print(f'There are {len(glove.itos)} words in the vocabulary') ``` As shown above, there are 400,000 unique words in the GloVe vocabulary. These are the most common words found in the corpus the vectors were trained on. 
**In these set of GloVe vectors, every single word is lower-case only.** `glove.vectors` is the actual tensor containing the values of the embeddings. ``` glove.vectors.shape ``` We can see what word is associated with each row by checking the `itos` (int to string) list. Below implies that row 0 is the vector associated with the word 'the', row 1 for ',' (comma), row 2 for '.' (period), etc. ``` glove.itos[:10] ``` We can also use the `stoi` (string to int) dictionary, in which we input a word and receive the associated integer/index. If you try get the index of a word that is not in the vocabulary, you receive an error. ``` glove.stoi['the'] ``` We can get the vector of a word by first getting the integer associated with it and then indexing into the word embedding tensor with that index. ``` glove.vectors[glove.stoi['the']].shape ``` We'll be doing this a lot, so we'll create a function that takes in word embeddings and a word then returns the associated vector. It'll also throw an error if the word doesn't exist in the vocabulary. ``` def get_vector(embeddings, word): assert word in embeddings.stoi, f'*{word}* is not in the vocab!' return embeddings.vectors[embeddings.stoi[word]] ``` As before, we use a word to get the associated vector. ``` get_vector(glove, 'the').shape ``` ## Similar Contexts Now to start looking at the context of different words. If we want to find the words similar to a certain input word, we first find the vector of this input word, then we scan through our vocabulary calculating the distance between the vector of each word and our input word vector. We then sort these from closest to furthest away. The function below returns the closest 10 words to an input word vector: ``` import torch def closest_words(embeddings, vector, n = 10): distances = [(word, torch.dist(vector, get_vector(embeddings, word)).item()) for word in embeddings.itos] return sorted(distances, key = lambda w: w[1])[:n] ``` Let's try it out with 'korea'. The closest word is the word 'korea' itself (not very interesting), however all of the words are related in some way. Pyongyang is the capital of North Korea, DPRK is the official name of North Korea, etc. Interestingly, we also get 'Japan' and 'China', implies that Korea, Japan and China are frequently talked about together in similar contexts. This makes sense as they are geographically situated near each other. ``` word_vector = get_vector(glove, 'korea') closest_words(glove, word_vector) ``` Looking at another country, India, we also get nearby countries: Thailand, Malaysia and Sri Lanka (as two separate words). Australia is relatively close to India (geographically), but Thailand and Malaysia are closer. So why is Australia closer to India in vector space? This is most probably due to India and Australia appearing in the context of [cricket](https://en.wikipedia.org/wiki/Cricket) matches together. ``` word_vector = get_vector(glove, 'india') closest_words(glove, word_vector) ``` We'll also create another function that will nicely print out the tuples returned by our `closest_words` function. ``` def print_tuples(tuples): for w, d in tuples: print(f'({d:02.04f}) {w}') ``` A final word to look at, 'sports'. As we can see, the closest words are most of the sports themselves. ``` word_vector = get_vector(glove, 'sports') print_tuples(closest_words(glove, word_vector)) ``` ## Analogies Another property of word embeddings is that they can be operated on just as any standard vector and give interesting results. 
We'll show an example of this first, and then explain it: ``` def analogy(embeddings, word1, word2, word3, n=5): #get vectors for each word word1_vector = get_vector(embeddings, word1) word2_vector = get_vector(embeddings, word2) word3_vector = get_vector(embeddings, word3) #calculate analogy vector analogy_vector = word2_vector - word1_vector + word3_vector #find closest words to analogy vector candidate_words = closest_words(embeddings, analogy_vector, n+3) #filter out words already in analogy candidate_words = [(word, dist) for (word, dist) in candidate_words if word not in [word1, word2, word3]][:n] print(f'{word1} is to {word2} as {word3} is to...') return candidate_words print_tuples(analogy(glove, 'man', 'king', 'woman')) ``` This is the canonical example which shows off this property of word embeddings. So why does it work? Why does the vector of 'woman' added to the vector of 'king' minus the vector of 'man' give us 'queen'? If we think about it, the vector calculated from 'king' minus 'man' gives us a "royalty vector". This is the vector associated with traveling from a man to his royal counterpart, a king. If we add this "royality vector" to 'woman', this should travel to her royal equivalent, which is a queen! We can do this with other analogies too. For example, this gets an "acting career vector": ``` print_tuples(analogy(glove, 'man', 'actor', 'woman')) ``` For a "baby animal vector": ``` print_tuples(analogy(glove, 'cat', 'kitten', 'dog')) ``` A "capital city vector": ``` print_tuples(analogy(glove, 'france', 'paris', 'england')) ``` A "musician's genre vector": ``` print_tuples(analogy(glove, 'elvis', 'rock', 'eminem')) ``` And an "ingredient vector": ``` print_tuples(analogy(glove, 'beer', 'barley', 'wine')) ``` ## Correcting Spelling Mistakes Another interesting property of word embeddings is that they can actually be used to correct spelling mistakes! We'll put their findings into code and briefly explain them, but to read more about this, check out the [original thread](http://forums.fast.ai/t/nlp-any-libraries-dictionaries-out-there-for-fixing-common-spelling-errors/16411) and the associated [write-up](https://blog.usejournal.com/a-simple-spell-checker-built-from-word-vectors-9f28452b6f26). First, we need to load up the much larger vocabulary GloVe vectors, this is due to the spelling mistakes not appearing in the smaller vocabulary. **Note**: these vectors are very large (~2GB), so watch out if you have a limited internet connection. ``` glove = torchtext.vocab.GloVe(name = '840B', dim = 300) ``` Checking the vocabulary size of these embeddings, we can see we now have over 2 million unique words in our vocabulary! ``` glove.vectors.shape ``` As the vectors were trained with a much larger vocabulary on a larger corpus of text, the words that appear are a little different. Notice how the words 'north', 'south', 'pyongyang' and 'dprk' no longer appear in the most closest words to 'korea'. ``` word_vector = get_vector(glove, 'korea') print_tuples(closest_words(glove, word_vector)) ``` Our first step to correcting spelling mistakes is looking at the vector for a misspelling of the word 'reliable'. ``` word_vector = get_vector(glove, 'relieable') print_tuples(closest_words(glove, word_vector)) ``` Notice how the correct spelling, "reliable", does not appear in the top 10 closest words. Surely the misspellings of a word should appear next to the correct spelling of the word as they appear in the same context, right? 
The hypothesis is that misspellings of words are all equally shifted away from their correct spelling. This is because articles of text that contain spelling mistakes are usually written in an informal manner where correct spelling doesn't matter as much (such as tweets/blog posts), thus spelling errors will appear together as they appear in context of informal articles. Similar to how we created analogies before, we can create a "correct spelling" vector. This time, instead of using a single example to create our vector, we'll use the average of multiple examples. This will hopefully give better accuracy! We first create a vector for the correct spelling, 'reliable', then calculate the difference between the "reliable vector" and each of the 8 misspellings of 'reliable'. As we are going to concatenate these 8 misspelling tensors together we need to unsqueeze a "batch" dimension to them. ``` reliable_vector = get_vector(glove, 'reliable') reliable_misspellings = ['relieable', 'relyable', 'realible', 'realiable', 'relable', 'relaible', 'reliabe', 'relaiable'] diff_reliable = [(reliable_vector - get_vector(glove, s)).unsqueeze(0) for s in reliable_misspellings] ``` We take the average of these 8 'difference from reliable' vectors to get our "misspelling vector". ``` misspelling_vector = torch.cat(diff_reliable, dim = 0).mean(dim = 0) ``` We can now correct other spelling mistakes using this "misspelling vector" by finding the closest words to the sum of the vector of a misspelled word and the "misspelling vector". For a misspelling of "because": ``` word_vector = get_vector(glove, 'becuase') print_tuples(closest_words(glove, word_vector + misspelling_vector)) ``` For a misspelling of "definitely": ``` word_vector = get_vector(glove, 'defintiely') print_tuples(closest_words(glove, word_vector + misspelling_vector)) ``` For a misspelling of "consistent": ``` word_vector = get_vector(glove, 'consistant') print_tuples(closest_words(glove, word_vector + misspelling_vector)) ``` For a misspelling of "package": ``` word_vector = get_vector(glove, 'pakage') print_tuples(closest_words(glove, word_vector + misspelling_vector)) ``` For a more in-depth look at this, check out the [write-up](https://blog.usejournal.com/a-simple-spell-checker-built-from-word-vectors-9f28452b6f26).
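As a closing note, the appendix opened by saying these vectors are normally consumed through `nn.Embedding`. A minimal sketch of wiring the TorchText GloVe weights into an embedding layer (not part of the original appendix; freezing the weights is just one design choice):

```
import torch
import torch.nn as nn

# Copy the GloVe weight matrix loaded above into an embedding layer
embedding = nn.Embedding.from_pretrained(glove.vectors, freeze=True)

# Look a word up the same way TorchText does: string -> index -> vector
index = torch.LongTensor([glove.stoi['korea']])
vector = embedding(index)
print(vector.shape)  # [1, embedding dimension]
```

Passing `freeze=False` instead would let the pre-trained vectors be fine-tuned along with the rest of a model.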
``` import lifelines import pymc as pm from pyBMA.CoxPHFitter import CoxPHFitter import matplotlib.pyplot as plt import numpy as np from numpy import log from datetime import datetime import pandas as pd %matplotlib inline ``` The first step in any data analysis is acquiring and munging the data Our starting data set can be found here: http://jakecoltman.com in the pyData post It is designed to be roughly similar to the output from DCM's path to conversion Download the file and transform it into something with the columns: id,lifetime,age,male,event,search,brand where lifetime is the total time that we observed someone not convert for and event should be 1 if we see a conversion and 0 if we don't. Note that all values should be converted into ints It is useful to note that end_date = datetime.datetime(2016, 5, 3, 20, 36, 8, 92165) ``` running_id = 0 output = [[0]] with open("E:/output.txt") as file_open: for row in file_open.read().split("\n"): cols = row.split(",") if cols[0] == output[-1][0]: output[-1].append(cols[1]) output[-1].append(True) else: output.append(cols) output = output[1:] for row in output: if len(row) == 6: row += [datetime(2016, 5, 3, 20, 36, 8, 92165), False] output = output[1:-1] def convert_to_days(dt): day_diff = dt / np.timedelta64(1, 'D') if day_diff == 0: return 23.0 else: return day_diff df = pd.DataFrame(output, columns=["id", "advert_time", "male","age","search","brand","conversion_time","event"]) df["lifetime"] = pd.to_datetime(df["conversion_time"]) - pd.to_datetime(df["advert_time"]) df["lifetime"] = df["lifetime"].apply(convert_to_days) df["male"] = df["male"].astype(int) df["search"] = df["search"].astype(int) df["brand"] = df["brand"].astype(int) df["age"] = df["age"].astype(int) df["event"] = df["event"].astype(int) df = df.drop('advert_time', 1) df = df.drop('conversion_time', 1) df = df.set_index("id") df = df.dropna(thresh=2) df.median() ###Parametric Bayes #Shout out to Cam Davidson-Pilon ## Example fully worked model using toy data ## Adapted from http://blog.yhat.com/posts/estimating-user-lifetimes-with-pymc.html ## Note that we've made some corrections N = 2500 ##Generate some random data lifetime = pm.rweibull( 2, 5, size = N ) birth = pm.runiform(0, 10, N) censor = ((birth + lifetime) >= 10) lifetime_ = lifetime.copy() lifetime_[censor] = 10 - birth[censor] alpha = pm.Uniform('alpha', 0, 20) beta = pm.Uniform('beta', 0, 20) @pm.observed def survival(value=lifetime_, alpha = alpha, beta = beta ): return sum( (1-censor)*(log( alpha/beta) + (alpha-1)*log(value/beta)) - (value/beta)**(alpha)) mcmc = pm.MCMC([alpha, beta, survival ] ) mcmc.sample(50000, 30000) pm.Matplot.plot(mcmc) mcmc.trace("alpha")[:] ``` Problems: 1 - Try to fit your data from section 1 2 - Use the results to plot the distribution of the median Note that the media of a Weibull distribution is: $$β(log 2)^{1/α}$$ ``` censor = np.array(df["event"].apply(lambda x: 0 if x else 1).tolist()) alpha = pm.Uniform("alpha", 0,50) beta = pm.Uniform("beta", 0,50) @pm.observed def survival(value=df["lifetime"], alpha = alpha, beta = beta ): return sum( (1-censor)*(np.log( alpha/beta) + (alpha-1)*np.log(value/beta)) - (value/beta)**(alpha)) mcmc = pm.MCMC([alpha, beta, survival ] ) mcmc.sample(10000) def weibull_median(alpha, beta): return beta * ((log(2)) ** ( 1 / alpha)) plt.hist([weibull_median(x[0], x[1]) for x in zip(mcmc.trace("alpha"), mcmc.trace("beta"))]) ``` Problems: 4 - Try adjusting the number of samples for burning and thinnning 5 - Try adjusting the prior and see how it affects 
the estimate ``` #### Adjust burn and thin, both parameters of the mcmc sample function #### Narrow and broaden prior ``` Problems: 7 - Try testing whether the median is greater than a different value ``` #### Hypothesis testing ``` If we want to look at covariates, we need a new approach. We'll use Cox proportional hazards, a very popular regression model. To fit it in Python we use the lifelines module: http://lifelines.readthedocs.io/en/latest/ ``` ### Fit a Cox proportional hazards model ``` Once we've fit the data, we need to do something useful with it. Try to do the following things: 1 - Plot the baseline survival function 2 - Predict the functions for a particular set of features 3 - Plot the survival function for two different sets of features 4 - For your results in part 3, calculate how much more likely a death event is for one than for the other over a given period of time ``` #### Plot baseline hazard function #### Predict #### Plot survival functions for different covariates #### Plot some odds ``` Model selection is difficult to do with classic tools (here) Problem: 1 - Calculate the BMA coefficient values 2 - Try running with different priors ``` #### BMA Coefficient values #### Different priors ```
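For the Cox proportional hazards exercise above, a minimal sketch of the lifelines workflow (the covariate profiles at the end are made-up values for illustration, and attribute names may differ slightly across lifelines versions):

```
import pandas as pd
from lifelines import CoxPHFitter

# Fit the model: 'lifetime' is the duration, 'event' the conversion indicator
cph = CoxPHFitter()
cph.fit(df[['lifetime', 'event', 'age', 'male', 'search', 'brand']],
        duration_col='lifetime', event_col='event')
cph.print_summary()

# 1 - baseline survival function
cph.baseline_survival_.plot()

# 2/3 - predicted survival curves for two hypothetical covariate profiles
profiles = pd.DataFrame({'age': [25, 40], 'male': [1, 0],
                         'search': [1, 0], 'brand': [0, 1]})
cph.predict_survival_function(profiles).plot()
```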
# Probability Distributions # Some typical stuff we'll likely use ``` import numpy as np import matplotlib.pyplot as plt import seaborn as sns %config InlineBackend.figure_format = 'retina' ``` # [SciPy](https://scipy.org) ### [scipy.stats](https://docs.scipy.org/doc/scipy-0.14.0/reference/stats.html) ``` import scipy as sp import scipy.stats as st ``` # Binomial Distribution ### <font color=darkred> **Example**: A couple, who are both carriers for a recessive disease, wish to have 5 children. They want to know the probability that they will have four healthy kids.</font> In this case the random variable is the number of healthy kids. ``` # number of trials (kids) n = 5 # probability of success on each trial # i.e. probability that each child will be healthy = 1 - 0.5 * 0.5 = 0.75 p = 0.75 # a binomial distribution object dist = st.binom(n, p) # probability of four healthy kids dist.pmf(4) print(f"The probability of having four healthy kids is {dist.pmf(4):.3f}") ``` ### <font color=darkred>Probability to have each of 0-5 healthy kids.</font> ``` # all possible # of successes out of n trials # i.e. all possible outcomes of the random variable # i.e. all possible number of healthy kids = 0-5 numHealthyKids = np.arange(n+1) numHealthyKids # probability of obtaining each possible number of successes # i.e. probability of having each possible number of healthy children pmf = dist.pmf(numHealthyKids) pmf ``` ### <font color=darkred>Visualize the probability to have each of 0-5 healthy kids.</font> ``` plt.bar(numHealthyKids, pmf) plt.xlabel('# healthy children', fontsize=18) plt.ylabel('probability', fontsize=18); ``` ### <font color=darkred>Probability to have at least 4 healthy kids.</font> ``` # sum of probabilities of 4 and 5 healthy kids pmf[-2:].sum() # remaining probability after subtracting CDF for 3 kids 1 - dist.cdf(3) # survival function for 3 kids dist.sf(3) ``` ### <font color=darkred>What is the expected number of healthy kids?</font> ``` print(f"The expected number of healthy kids is {dist.mean()}") ``` ### <font color=darkred>How sure are we about the above estimate?</font> ``` print(f"The expected number of healthy kids is {dist.mean()} ± {dist.std():.2f}") ``` # <font color=red> Exercise</font> Should the couple consider having six children? 1. Plot the *pmf* for the probability of each possible number of healthy children. 2. What's the probability that they will all be healthy? # Poisson Distribution ### <font color=darkred> **Example**: Assume that the rate of deleterious mutations is ~1.2 per diploid genome. What is the probability that an individual has 8 or more spontaneous deleterious mutations?</font> In this case the random variable is the number of deleterious mutations within an individuals genome. ``` # the rate of deleterious mutations is 1.2 per diploid genome rate = 1.2 # poisson distribution describing the predicted number of spontaneous mutations dist = st.poisson(rate) # let's look at the probability for 0-10 mutations numMutations = np.arange(11) plt.bar(numMutations, dist.pmf(numMutations)) plt.xlabel('# mutations', fontsize=18) plt.ylabel('probability', fontsize=18); print(f"Probability of less than 8 mutations = {dist.cdf(7)}") print(f"Probability of 8 or more mutations = {dist.sf(7)}") dist.cdf(7) + dist.sf(7) ``` # <font color=red> Exercise</font> For the above example, what is the probability that an individual has three or fewer mutations? 
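Before moving on, a quick way to sanity-check this kind of Poisson calculation is to compare the analytic value against a simulation. A small sketch using the same 1.2 mutations-per-genome rate from the worked example (the sample size and seed are arbitrary):

```
import numpy as np
import scipy.stats as st

rate = 1.2
dist = st.poisson(rate)

# Analytic probability of 8 or more mutations (survival function at 7)
analytic = dist.sf(7)

# Empirical estimate from a large number of simulated genomes
draws = dist.rvs(size=1_000_000, random_state=0)
empirical = np.mean(draws >= 8)

print(f"analytic = {analytic:.2e}, simulated = {empirical:.2e}")
```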
# Exponential Distribution ### <font color=darkred> **Example**: Assume that a neuron spikes 1.5 times per second on average. Plot the probability density function of interspike intervals from zero to five seconds with a resolution of 0.01 seconds.</font> In this case the random variable is the interspike interval time. ``` # spike rate per second rate = 1.5 # exponential distribution describing the neuron's predicted interspike intervals dist = st.expon(loc=0, scale=1/rate) # plot interspike intervals from 0-5 seconds at 0.01 sec resolution intervalsSec = np.linspace(0, 5, 501) # probability density for each interval pdf = dist.pdf(intervalsSec) plt.plot(intervalsSec, pdf) plt.xlabel('interspike interval (sec)', fontsize=18) plt.ylabel('pdf', fontsize=18); ``` ### <font color=darkred>What is the average interval?</font> ``` print(f"Average interspike interval = {dist.mean():.2f} seconds.") ``` ### <font color=darkred>time constant = 1 / rate = mean</font> ``` tau = 1 / rate tau ``` ### <font color=darkred> What is the probability that an interval will be between 1 and 2 seconds?</font> ``` prob1to2 = dist.cdf(2) - dist.cdf(1); print(f"Probability of an interspike interval being between 1 and 2 seconds is {prob1to2:.2f}") ``` ### <font color=darkred> For what time *T* is the probability that an interval is shorter than *T* equal to 25%?</font> ``` timeAtFirst25PercentOfDist = dist.ppf(0.25) # percent point function print(f"There is a 25% chance that an interval is shorter than {timeAtFirst25PercentOfDist:.2f} seconds.") ``` # <font color=red> Exercise</font> For the above example, what is the probability that 3 seconds will pass without any spikes? # Normal Distribution ### <font color=darkred> **Example**: Under basal conditions the resting membrane voltage of a neuron fluctuates around -70 mV with a variance of 10 mV.</font> In this case the random variable is the neuron's resting membrane voltage. ``` # mean resting membrane voltage (mV) mu = -70 # standard deviation about the mean sd = np.sqrt(10) # normal distribution describing the neuron's predicted resting membrane voltage dist = st.norm(mu, sd) # membrane voltages from -85 to -55 mV mV = np.linspace(-85, -55, 301) # probability density for each membrane voltage in mV pdf = dist.pdf(mV) plt.plot(mV, pdf) plt.xlabel('membrane voltage (mV)', fontsize=18) plt.ylabel('pdf', fontsize=18); ``` ### <font color=darkred> What range of membrane voltages (centered on the mean) account for 95% of the probability.</font> ``` low = dist.ppf(0.025) # first 2.5% of distribution high = dist.ppf(0.975) # first 97.5% of distribution print(f"95% of membrane voltages are expected to fall within {low :.1f} and {high :.1f} mV.") ``` # <font color=red> Exercise</font> In a resting neuron, what's the probability that you would measure a membrane voltage greater than -65 mV? If you meaassure -65 mV, is the neuron at rest? # <font color=red> Exercise</font> What probability distribution might best describe the number of synapses per millimeter of dendrite? A) Binomial B) Poisson C) Exponential D) Normal # <font color=red> Exercise</font> What probability distribution might best describe the time a protein spends in its active conformation? A) Binomial B) Poisson C) Exponential D) Normal # <font color=red> Exercise</font> What probability distribution might best describe the weights of adult mice in a colony? 
A) Binomial B) Poisson C) Exponential D) Normal # <font color=red> Exercise</font> What probability distribution might best describe the number of times a subject is able to identify the correct target in a series of trials? A) Binomial B) Poisson C) Exponential D) Normal
# [Module 2.1] Training on a SageMaker Cluster (run without a VPC) This notebook performs the following tasks. - Run training on a SageMaker Hosting Cluster - Save the name of the training job - The job name is used in the next notebook for model deployment and inference --- Get the SageMaker session and the role information. - These two pieces of information are used to connect to the SageMaker Hosting Cluster. ``` import os import sagemaker from sagemaker import get_execution_role sagemaker_session = sagemaker.Session() role = get_execution_role() ``` ## Uploading local data to S3 Upload the local data to S3 so that it can be used as the training input. ``` # dataset_location = sagemaker_session.upload_data(path='data', key_prefix='data/DEMO-cifar10') # display(dataset_location) dataset_location = 's3://sagemaker-ap-northeast-2-057716757052/data/DEMO-cifar10' dataset_location # efs_dir = '/home/ec2-user/efs/data' # ! ls {efs_dir} -al # ! aws s3 cp {dataset_location} {efs_dir} --recursive from sagemaker.inputs import FileSystemInput # Specify EFS file system id. file_system_id = 'fs-38dc1558' # 'fs-xxxxxxxx' print(f"EFS file-system-id: {file_system_id}") # Specify directory path for input data on the file system. # You need to provide normalized and absolute path below. train_file_system_directory_path = '/data/train' eval_file_system_directory_path = '/data/eval' validation_file_system_directory_path = '/data/validation' print(f'EFS file-system data input path: {train_file_system_directory_path}') print(f'EFS file-system data input path: {eval_file_system_directory_path}') print(f'EFS file-system data input path: {validation_file_system_directory_path}') # Specify the access mode of the mount of the directory associated with the file system. # Directory must be mounted 'ro'(read-only). file_system_access_mode = 'ro' # Specify your file system type file_system_type = 'EFS' train = FileSystemInput(file_system_id=file_system_id, file_system_type=file_system_type, directory_path=train_file_system_directory_path, file_system_access_mode=file_system_access_mode) eval = FileSystemInput(file_system_id=file_system_id, file_system_type=file_system_type, directory_path=eval_file_system_directory_path, file_system_access_mode=file_system_access_mode) validation = FileSystemInput(file_system_id=file_system_id, file_system_type=file_system_type, directory_path=validation_file_system_directory_path, file_system_access_mode=file_system_access_mode) aws_region = 'ap-northeast-2'# aws-region-code e.g.
us-east-1 s3_bucket = 'sagemaker-ap-northeast-2-057716757052'# your-s3-bucket-name prefix = "cifar10/efs" #prefix in your bucket s3_output_location = f's3://{s3_bucket}/{prefix}/output' print(f'S3 model output location: {s3_output_location}') security_group_ids = ['sg-0192524ef63ec6138'] # ['sg-xxxxxxxx'] # subnets = ['subnet-0a84bcfa36d3981e6','subnet-0304abaaefc2b1c34','subnet-0a2204b79f378b178'] # [ 'subnet-xxxxxxx', 'subnet-xxxxxxx', 'subnet-xxxxxxx'] subnets = ['subnet-0a84bcfa36d3981e6'] # [ 'subnet-xxxxxxx', 'subnet-xxxxxxx', 'subnet-xxxxxxx'] from sagemaker.tensorflow import TensorFlow estimator = TensorFlow(base_job_name='cifar10', entry_point='cifar10_keras_sm_tf2.py', source_dir='training_script', role=role, framework_version='2.0.0', py_version='py3', script_mode=True, hyperparameters={'epochs' : 1}, train_instance_count=1, train_instance_type='ml.p3.2xlarge', output_path=s3_output_location, subnets=subnets, security_group_ids=security_group_ids, session = sagemaker.Session() ) estimator.fit({'train': train, 'validation': validation, 'eval': eval, }) # estimator.fit({'train': 'file://data/train', # 'validation': 'file://data/validation', # 'eval': 'file://data/eval'}) ``` # Choosing VPC_Mode: True or False #### **[Important] Change this to True when running in VPC mode** ``` VPC_Mode = False from sagemaker.tensorflow import TensorFlow def retrieve_estimator(VPC_Mode): if VPC_Mode: # In VPC mode, specify the subnets and security groups. estimator = TensorFlow(base_job_name='cifar10', entry_point='cifar10_keras_sm_tf2.py', source_dir='training_script', role=role, framework_version='2.0.0', py_version='py3', script_mode=True, hyperparameters={'epochs': 2}, train_instance_count=1, train_instance_type='ml.p3.8xlarge', subnets = ['subnet-090c1fad32165b0fa','subnet-0bd7cff3909c55018'], security_group_ids = ['sg-0f45d634d80aef27e'] ) else: estimator = TensorFlow(base_job_name='cifar10', entry_point='cifar10_keras_sm_tf2.py', source_dir='training_script', role=role, framework_version='2.0.0', py_version='py3', script_mode=True, hyperparameters={'epochs': 2}, train_instance_count=1, train_instance_type='ml.p3.8xlarge') return estimator estimator = retrieve_estimator(VPC_Mode) ``` Run the training. This time, each channel (`train, validation, eval`) is given the S3 location where its data is stored.<br> After training completes, also check the Billable seconds. Billable seconds is the time you are actually billed for while the training runs. ``` Billable seconds: <time> ``` For reference, training for 5 epochs on an `ml.p2.xlarge` instance takes about 6-7 minutes in total, of which the actual training takes about 3-4 minutes. ``` %%time estimator.fit({'train':'{}/train'.format(dataset_location), 'validation':'{}/validation'.format(dataset_location), 'eval':'{}/eval'.format(dataset_location)}) ``` ## Saving training_job_name Save the current training_job_name. - training_job_name gives access to the training details and the S3 path of the **Model Artifact** file produced by training. ``` train_job_name = estimator._current_job_name %store train_job_name ```
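Since the stored `train_job_name` is meant to be reused in the next notebook for deployment and inference, a minimal sketch of how it could be picked up again (this cell is illustrative and not part of the original notebook; the endpoint instance type is an arbitrary choice):

```
from sagemaker.tensorflow import TensorFlow

# Restore the job name that was persisted with %store above
%store -r train_job_name

# Re-attach to the finished training job and deploy its model artifact
estimator = TensorFlow.attach(train_job_name)
predictor = estimator.deploy(initial_instance_count=1,
                             instance_type='ml.m5.xlarge')
```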
<a href="https://colab.research.google.com/github/iotanalytics/IoTTutorial/blob/main/code/preprocessing_and_decomposition/Matrix_Profile.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ## Matrix Profile ## Introduction The matrix profile (MP) is a data structure and associated algorithms that helps solve the dual problem of anomaly detection and motif discovery. It is robust, scalable and largely parameter-free. MP can be combined with other algorithms to accomplish: * Motif discovery * Time series chains * Anomaly discovery * Joins * Semantic segmentation matrixprofile-ts offers 3 different algorithms to compute Matrix Profile: * STAMP (Scalable Time Series Anytime Matrix Profile) - Each distance profile is independent of other distance profiles, the order in which they are computed can be random. It is an anytime algorithm. * STOMP (Scalable Time Series Ordered Matrix Profile) - This algorithm is an exact ordered algorithm. It is significantly faster than STAMP. * SCRIMP++ (Scalable Column Independent Matrix Profile) - This algorithm combines the anytime component of STAMP with the speed of STOMP. See: https://towardsdatascience.com/introduction-to-matrix-profiles-5568f3375d90 ## Code Example ``` !pip install matrixprofile-ts import pandas as pd ## example data importing data = pd.read_csv('https://raw.githubusercontent.com/iotanalytics/IoTTutorial/main/data/SCG_data.csv').drop('Unnamed: 0',1).to_numpy()[0:20,:1000] import operator import numpy as np import matplotlib.pyplot as plt from matrixprofile import * import numpy as np from datetime import datetime import matplotlib.pyplot as plt from matplotlib.colors import ListedColormap from sklearn import neighbors, datasets # Pull a portion of the data pattern = data[10,:] + max(abs(data[10,:])) # Compute Matrix Profile m = 10 mp = matrixProfile.stomp(pattern,m) #Append np.nan to Matrix profile to enable plotting against raw data mp_adj = np.append(mp[0],np.zeros(m-1)+np.nan) #Plot the signal data fig, (ax1, ax2) = plt.subplots(2,1,sharex=True,figsize=(20,10)) ax1.plot(np.arange(len(pattern)),pattern) ax1.set_ylabel('Signal', size=22) #Plot the Matrix Profile ax2.plot(np.arange(len(mp_adj)),mp_adj, label="Matrix Profile", color='red') ax2.set_ylabel('Matrix Profile', size=22) ax2.set_xlabel('Time', size=22); ``` ## Discussion Pros: * It is exact: For motif discovery, discord discovery, time series joins etc., the Matrix Profile based methods provide no false positives or false dismissals. * It is simple and parameter-free: In contrast, the more general algorithms in this space that typically require building and tuning spatial access methods and/or hash functions. * It is space efficient: Matrix Profile construction algorithms requires an inconsequential space overhead, just linear in the time series length with a small constant factor, allowing massive datasets to be processed in main memory (for most data mining, disk is death). * It allows anytime algorithms: While exact MP algorithms are extremely scalable, for extremely large datasets we can compute the Matrix Profile in an anytime fashion, allowing ultra-fast approximate solutions and real-time data interaction. * It is incrementally maintainable: Having computed the Matrix Profile for a dataset, we can incrementally update it very efficiently. In many domains this means we can effectively maintain exact joins, motifs, discords on streaming data forever. 
* It can leverage hardware: Matrix Profile construction is embarrassingly parallelizable, on multicore processors, GPUs, distributed systems, etc.
* It is free of the curse of dimensionality: That is to say, it has time complexity that is constant in the subsequence length. This is a very unusual and desirable property; virtually all existing time series algorithms scale poorly as the subsequence length grows.
* It can be constructed in deterministic time: Almost all algorithms for time series data mining can take radically different times to finish on two (even slightly) different datasets. In contrast, given only the length of the time series, we can precisely predict in advance how long it will take to compute the Matrix Profile (this allows resource planning).
* It can handle missing data: Even in the presence of missing data, we can provide answers which are guaranteed to have no false negatives.
* Finally, and subjectively: simplicity and intuitiveness. Seeing the world through the MP lens often invites/suggests simple and elegant solutions.

Cons:

* Larger datasets can take a long time to compute. Scalability needs to be addressed.
* Cannot be used with Dynamic Time Warping (DTW) as of now.
* DTW is used for one-to-all matching, whereas the MP is used for all-to-all matching.
* DTW is typically applied to smaller datasets rather than large ones.
* The window size needs to be adjusted manually for different datasets.

*How to read the MP* (a short sketch for extracting these indices programmatically follows the references below):

* Where you see relatively low values, you know that the subsequence in the original time series must have (at least one) relatively similar subsequence elsewhere in the data (such regions are "motifs", or recurring patterns).
* Where you see relatively high values, you know that the subsequence in the original time series must be unique in its shape (such areas are "discords", or anomalies). In fact, the highest point is exactly the definition of a Time Series Discord, perhaps the best anomaly detector for time series.

## References:

https://www.cs.ucr.edu/~eamonn/MatrixProfile.html (powerpoints on this site - a lot of examples)

https://towardsdatascience.com/introduction-to-matrix-profiles-5568f3375d90

Python implementation: https://github.com/TDAmeritrade/stumpy
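Following the reading guide above, the motif and discord start indices can be read straight off the computed profile. A minimal sketch, assuming `mp_adj` and `m` from the code example above (the NaN padding added for plotting is dropped before taking the argmin/argmax):

```
import numpy as np

profile = mp_adj[:-(m - 1)]            # drop the NaN padding appended for plotting

motif_idx = int(np.argmin(profile))    # lowest value -> most repeated subsequence
discord_idx = int(np.argmax(profile))  # highest value -> most anomalous subsequence

print(f'Motif starts at index {motif_idx}, discord starts at index {discord_idx}')
```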
``` %matplotlib inline ``` What is `torch.nn` *really*? ============================ by Jeremy Howard, `fast.ai <https://www.fast.ai>`_. Thanks to Rachel Thomas and Francisco Ingham. We recommend running this tutorial as a notebook, not a script. To download the notebook (.ipynb) file, click `here <https://pytorch.org/tutorials/beginner/nn_tutorial.html#sphx-glr-download-beginner-nn-tutorial-py>`_ . PyTorch provides the elegantly designed modules and classes `torch.nn <https://pytorch.org/docs/stable/nn.html>`_ , `torch.optim <https://pytorch.org/docs/stable/optim.html>`_ , `Dataset <https://pytorch.org/docs/stable/data.html?highlight=dataset#torch.utils.data.Dataset>`_ , and `DataLoader <https://pytorch.org/docs/stable/data.html?highlight=dataloader#torch.utils.data.DataLoader>`_ to help you create and train neural networks. In order to fully utilize their power and customize them for your problem, you need to really understand exactly what they're doing. To develop this understanding, we will first train basic neural net on the MNIST data set without using any features from these models; we will initially only use the most basic PyTorch tensor functionality. Then, we will incrementally add one feature from ``torch.nn``, ``torch.optim``, ``Dataset``, or ``DataLoader`` at a time, showing exactly what each piece does, and how it works to make the code either more concise, or more flexible. **This tutorial assumes you already have PyTorch installed, and are familiar with the basics of tensor operations.** (If you're familiar with Numpy array operations, you'll find the PyTorch tensor operations used here nearly identical). MNIST data setup ---------------- We will use the classic `MNIST <http://deeplearning.net/data/mnist/>`_ dataset, which consists of black-and-white images of hand-drawn digits (between 0 and 9). We will use `pathlib <https://docs.python.org/3/library/pathlib.html>`_ for dealing with paths (part of the Python 3 standard library), and will download the dataset using `requests <http://docs.python-requests.org/en/master/>`_. We will only import modules when we use them, so you can see exactly what's being used at each point. ``` from pathlib import Path import requests DATA_PATH = Path("data") PATH = DATA_PATH / "mnist" PATH.mkdir(parents=True, exist_ok=True) URL = "http://deeplearning.net/data/mnist/" FILENAME = "mnist.pkl.gz" if not (PATH / FILENAME).exists(): content = requests.get(URL + FILENAME).content (PATH / FILENAME).open("wb").write(content) ``` This dataset is in numpy array format, and has been stored using pickle, a python-specific format for serializing data. ``` import pickle import gzip with gzip.open((PATH / FILENAME).as_posix(), "rb") as f: ((x_train, y_train), (x_valid, y_valid), _) = pickle.load(f, encoding="latin-1") ``` Each image is 28 x 28, and is being stored as a flattened row of length 784 (=28x28). Let's take a look at one; we need to reshape it to 2d first. ``` from matplotlib import pyplot import numpy as np pyplot.imshow(x_train[0].reshape((28, 28)), cmap="gray") print(x_train.shape) ``` PyTorch uses ``torch.tensor``, rather than numpy arrays, so we need to convert our data. 
``` import torch x_train, y_train, x_valid, y_valid = map( torch.tensor, (x_train, y_train, x_valid, y_valid) ) n, c = x_train.shape x_train, x_train.shape, y_train.min(), y_train.max() print(x_train, y_train) print(x_train.shape) print(y_train.min(), y_train.max()) ``` Neural net from scratch (no torch.nn) --------------------------------------------- Let's first create a model using nothing but PyTorch tensor operations. We're assuming you're already familiar with the basics of neural networks. (If you're not, you can learn them at `course.fast.ai <https://course.fast.ai>`_). PyTorch provides methods to create random or zero-filled tensors, which we will use to create our weights and bias for a simple linear model. These are just regular tensors, with one very special addition: we tell PyTorch that they require a gradient. This causes PyTorch to record all of the operations done on the tensor, so that it can calculate the gradient during back-propagation *automatically*! For the weights, we set ``requires_grad`` **after** the initialization, since we don't want that step included in the gradient. (Note that a trailling ``_`` in PyTorch signifies that the operation is performed in-place.) <div class="alert alert-info"><h4>Note</h4><p>We are initializing the weights here with `Xavier initialisation <http://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf>`_ (by multiplying with 1/sqrt(n)).</p></div> ``` import math weights = torch.randn(784, 10) / math.sqrt(784) weights.requires_grad_() bias = torch.zeros(10, requires_grad=True) ``` Thanks to PyTorch's ability to calculate gradients automatically, we can use any standard Python function (or callable object) as a model! So let's just write a plain matrix multiplication and broadcasted addition to create a simple linear model. We also need an activation function, so we'll write `log_softmax` and use it. Remember: although PyTorch provides lots of pre-written loss functions, activation functions, and so forth, you can easily write your own using plain python. PyTorch will even create fast GPU or vectorized CPU code for your function automatically. ``` def log_softmax(x): return x - x.exp().sum(-1).log().unsqueeze(-1) def model(xb): return log_softmax(xb @ weights + bias) ``` In the above, the ``@`` stands for the dot product operation. We will call our function on one batch of data (in this case, 64 images). This is one *forward pass*. Note that our predictions won't be any better than random at this stage, since we start with random weights. ``` bs = 64 # batch size xb = x_train[0:bs] # a mini-batch from x preds = model(xb) # predictions preds[0], preds.shape print(preds[0], preds.shape) ``` As you see, the ``preds`` tensor contains not only the tensor values, but also a gradient function. We'll use this later to do backprop. Let's implement negative log-likelihood to use as the loss function (again, we can just use standard Python): ``` def nll(input, target): return -input[range(target.shape[0]), target].mean() loss_func = nll ``` Let's check our loss with our random model, so we can see if we improve after a backprop pass later. ``` yb = y_train[0:bs] print(loss_func(preds, yb)) ``` Let's also implement a function to calculate the accuracy of our model. For each prediction, if the index with the largest value matches the target value, then the prediction was correct. 
``` def accuracy(out, yb): preds = torch.argmax(out, dim=1) return (preds == yb).float().mean() ``` Let's check the accuracy of our random model, so we can see if our accuracy improves as our loss improves. ``` print(accuracy(preds, yb)) ``` We can now run a training loop. For each iteration, we will: - select a mini-batch of data (of size ``bs``) - use the model to make predictions - calculate the loss - ``loss.backward()`` updates the gradients of the model, in this case, ``weights`` and ``bias``. We now use these gradients to update the weights and bias. We do this within the ``torch.no_grad()`` context manager, because we do not want these actions to be recorded for our next calculation of the gradient. You can read more about how PyTorch's Autograd records operations `here <https://pytorch.org/docs/stable/notes/autograd.html>`_. We then set the gradients to zero, so that we are ready for the next loop. Otherwise, our gradients would record a running tally of all the operations that had happened (i.e. ``loss.backward()`` *adds* the gradients to whatever is already stored, rather than replacing them). .. tip:: You can use the standard python debugger to step through PyTorch code, allowing you to check the various variable values at each step. Uncomment ``set_trace()`` below to try it out. ``` from IPython.core.debugger import set_trace lr = 0.5 # learning rate epochs = 2 # how many epochs to train for for epoch in range(epochs): for i in range((n - 1) // bs + 1): # set_trace() start_i = i * bs end_i = start_i + bs xb = x_train[start_i:end_i] yb = y_train[start_i:end_i] pred = model(xb) loss = loss_func(pred, yb) loss.backward() with torch.no_grad(): weights -= weights.grad * lr bias -= bias.grad * lr weights.grad.zero_() bias.grad.zero_() ``` That's it: we've created and trained a minimal neural network (in this case, a logistic regression, since we have no hidden layers) entirely from scratch! Let's check the loss and accuracy and compare those to what we got earlier. We expect that the loss will have decreased and accuracy to have increased, and they have. ``` print(loss_func(model(xb), yb), accuracy(model(xb), yb)) ``` Using torch.nn.functional ------------------------------ We will now refactor our code, so that it does the same thing as before, only we'll start taking advantage of PyTorch's ``nn`` classes to make it more concise and flexible. At each step from here, we should be making our code one or more of: shorter, more understandable, and/or more flexible. The first and easiest step is to make our code shorter by replacing our hand-written activation and loss functions with those from ``torch.nn.functional`` (which is generally imported into the namespace ``F`` by convention). This module contains all the functions in the ``torch.nn`` library (whereas other parts of the library contain classes). As well as a wide range of loss and activation functions, you'll also find here some convenient functions for creating neural nets, such as pooling functions. (There are also functions for doing convolutions, linear layers, etc, but as we'll see, these are usually better handled using other parts of the library.) If you're using negative log likelihood loss and log softmax activation, then Pytorch provides a single function ``F.cross_entropy`` that combines the two. So we can even remove the activation function from our model. 
``` import torch.nn.functional as F loss_func = F.cross_entropy def model(xb): return xb @ weights + bias ``` Note that we no longer call ``log_softmax`` in the ``model`` function. Let's confirm that our loss and accuracy are the same as before: ``` print(loss_func(model(xb), yb), accuracy(model(xb), yb)) ``` Refactor using nn.Module ----------------------------- Next up, we'll use ``nn.Module`` and ``nn.Parameter``, for a clearer and more concise training loop. We subclass ``nn.Module`` (which itself is a class and able to keep track of state). In this case, we want to create a class that holds our weights, bias, and method for the forward step. ``nn.Module`` has a number of attributes and methods (such as ``.parameters()`` and ``.zero_grad()``) which we will be using. <div class="alert alert-info"><h4>Note</h4><p>``nn.Module`` (uppercase M) is a PyTorch specific concept, and is a class we'll be using a lot. ``nn.Module`` is not to be confused with the Python concept of a (lowercase ``m``) `module <https://docs.python.org/3/tutorial/modules.html>`_, which is a file of Python code that can be imported.</p></div> ``` from torch import nn class Mnist_Logistic(nn.Module): def __init__(self): super().__init__() self.weights = nn.Parameter(torch.randn(784, 10) / math.sqrt(784)) self.bias = nn.Parameter(torch.zeros(10)) def forward(self, xb): return xb @ self.weights + self.bias ``` Since we're now using an object instead of just using a function, we first have to instantiate our model: ``` model = Mnist_Logistic() ``` Now we can calculate the loss in the same way as before. Note that ``nn.Module`` objects are used as if they are functions (i.e they are *callable*), but behind the scenes Pytorch will call our ``forward`` method automatically. ``` print(loss_func(model(xb), yb)) ``` Previously for our training loop we had to update the values for each parameter by name, and manually zero out the grads for each parameter separately, like this: :: with torch.no_grad(): weights -= weights.grad * lr bias -= bias.grad * lr weights.grad.zero_() bias.grad.zero_() Now we can take advantage of model.parameters() and model.zero_grad() (which are both defined by PyTorch for ``nn.Module``) to make those steps more concise and less prone to the error of forgetting some of our parameters, particularly if we had a more complicated model: :: with torch.no_grad(): for p in model.parameters(): p -= p.grad * lr model.zero_grad() We'll wrap our little training loop in a ``fit`` function so we can run it again later. ``` def fit(): for epoch in range(epochs): for i in range((n - 1) // bs + 1): start_i = i * bs end_i = start_i + bs xb = x_train[start_i:end_i] yb = y_train[start_i:end_i] pred = model(xb) loss = loss_func(pred, yb) loss.backward() with torch.no_grad(): for p in model.parameters(): p -= p.grad * lr model.zero_grad() fit() ``` Let's double-check that our loss has gone down: ``` print(loss_func(model(xb), yb)) ``` Refactor using nn.Linear ------------------------- We continue to refactor our code. Instead of manually defining and initializing ``self.weights`` and ``self.bias``, and calculating ``xb @ self.weights + self.bias``, we will instead use the Pytorch class `nn.Linear <https://pytorch.org/docs/stable/nn.html#linear-layers>`_ for a linear layer, which does all that for us. Pytorch has many types of predefined layers that can greatly simplify our code, and often makes it faster too. 
``` class Mnist_Logistic(nn.Module): def __init__(self): super().__init__() self.lin = nn.Linear(784, 10) def forward(self, xb): return self.lin(xb) ``` We instantiate our model and calculate the loss in the same way as before: ``` model = Mnist_Logistic() print(loss_func(model(xb), yb)) ``` We are still able to use our same ``fit`` method as before. ``` fit() print(loss_func(model(xb), yb)) ``` Refactor using optim ------------------------------ Pytorch also has a package with various optimization algorithms, ``torch.optim``. We can use the ``step`` method from our optimizer to take a forward step, instead of manually updating each parameter. This will let us replace our previous manually coded optimization step: :: with torch.no_grad(): for p in model.parameters(): p -= p.grad * lr model.zero_grad() and instead use just: :: opt.step() opt.zero_grad() (``optim.zero_grad()`` resets the gradient to 0 and we need to call it before computing the gradient for the next minibatch.) ``` from torch import optim ``` We'll define a little function to create our model and optimizer so we can reuse it in the future. ``` def get_model(): model = Mnist_Logistic() return model, optim.SGD(model.parameters(), lr=lr) model, opt = get_model() print(loss_func(model(xb), yb)) for epoch in range(epochs): for i in range((n - 1) // bs + 1): start_i = i * bs end_i = start_i + bs xb = x_train[start_i:end_i] yb = y_train[start_i:end_i] pred = model(xb) loss = loss_func(pred, yb) loss.backward() opt.step() opt.zero_grad() print(loss_func(model(xb), yb)) ``` Refactor using Dataset ------------------------------ PyTorch has an abstract Dataset class. A Dataset can be anything that has a ``__len__`` function (called by Python's standard ``len`` function) and a ``__getitem__`` function as a way of indexing into it. `This tutorial <https://pytorch.org/tutorials/beginner/data_loading_tutorial.html>`_ walks through a nice example of creating a custom ``FacialLandmarkDataset`` class as a subclass of ``Dataset``. PyTorch's `TensorDataset <https://pytorch.org/docs/stable/_modules/torch/utils/data/dataset.html#TensorDataset>`_ is a Dataset wrapping tensors. By defining a length and way of indexing, this also gives us a way to iterate, index, and slice along the first dimension of a tensor. This will make it easier to access both the independent and dependent variables in the same line as we train. ``` from torch.utils.data import TensorDataset ``` Both ``x_train`` and ``y_train`` can be combined in a single ``TensorDataset``, which will be easier to iterate over and slice. ``` train_ds = TensorDataset(x_train, y_train) ``` Previously, we had to iterate through minibatches of x and y values separately: :: xb = x_train[start_i:end_i] yb = y_train[start_i:end_i] Now, we can do these two steps together: :: xb,yb = train_ds[i*bs : i*bs+bs] ``` model, opt = get_model() for epoch in range(epochs): for i in range((n - 1) // bs + 1): xb, yb = train_ds[i * bs: i * bs + bs] pred = model(xb) loss = loss_func(pred, yb) loss.backward() opt.step() opt.zero_grad() print(loss_func(model(xb), yb)) ``` Refactor using DataLoader ------------------------------ Pytorch's ``DataLoader`` is responsible for managing batches. You can create a ``DataLoader`` from any ``Dataset``. ``DataLoader`` makes it easier to iterate over batches. Rather than having to use ``train_ds[i*bs : i*bs+bs]``, the DataLoader gives us each minibatch automatically. 
``` from torch.utils.data import DataLoader train_ds = TensorDataset(x_train, y_train) train_dl = DataLoader(train_ds, batch_size=bs) ``` Previously, our loop iterated over batches (xb, yb) like this: :: for i in range((n-1)//bs + 1): xb,yb = train_ds[i*bs : i*bs+bs] pred = model(xb) Now, our loop is much cleaner, as (xb, yb) are loaded automatically from the data loader: :: for xb,yb in train_dl: pred = model(xb) ``` model, opt = get_model() for epoch in range(epochs): for xb, yb in train_dl: pred = model(xb) loss = loss_func(pred, yb) loss.backward() opt.step() opt.zero_grad() print(loss_func(model(xb), yb)) ``` Thanks to Pytorch's ``nn.Module``, ``nn.Parameter``, ``Dataset``, and ``DataLoader``, our training loop is now dramatically smaller and easier to understand. Let's now try to add the basic features necessary to create effecive models in practice. Add validation ----------------------- In section 1, we were just trying to get a reasonable training loop set up for use on our training data. In reality, you **always** should also have a `validation set <https://www.fast.ai/2017/11/13/validation-sets/>`_, in order to identify if you are overfitting. Shuffling the training data is `important <https://www.quora.com/Does-the-order-of-training-data-matter-when-training-neural-networks>`_ to prevent correlation between batches and overfitting. On the other hand, the validation loss will be identical whether we shuffle the validation set or not. Since shuffling takes extra time, it makes no sense to shuffle the validation data. We'll use a batch size for the validation set that is twice as large as that for the training set. This is because the validation set does not need backpropagation and thus takes less memory (it doesn't need to store the gradients). We take advantage of this to use a larger batch size and compute the loss more quickly. ``` train_ds = TensorDataset(x_train, y_train) train_dl = DataLoader(train_ds, batch_size=bs, shuffle=True) valid_ds = TensorDataset(x_valid, y_valid) valid_dl = DataLoader(valid_ds, batch_size=bs * 2) ``` We will calculate and print the validation loss at the end of each epoch. (Note that we always call ``model.train()`` before training, and ``model.eval()`` before inference, because these are used by layers such as ``nn.BatchNorm2d`` and ``nn.Dropout`` to ensure appropriate behaviour for these different phases.) ``` model, opt = get_model() for epoch in range(epochs): model.train() for xb, yb in train_dl: pred = model(xb) loss = loss_func(pred, yb) loss.backward() opt.step() opt.zero_grad() model.eval() with torch.no_grad(): valid_loss = sum(loss_func(model(xb), yb) for xb, yb in valid_dl) print(epoch, valid_loss / len(valid_dl)) ``` Create fit() and get_data() ---------------------------------- We'll now do a little refactoring of our own. Since we go through a similar process twice of calculating the loss for both the training set and the validation set, let's make that into its own function, ``loss_batch``, which computes the loss for one batch. We pass an optimizer in for the training set, and use it to perform backprop. For the validation set, we don't pass an optimizer, so the method doesn't perform backprop. ``` def loss_batch(model, loss_func, xb, yb, opt=None): loss = loss_func(model(xb), yb) if opt is not None: loss.backward() opt.step() opt.zero_grad() return loss.item(), len(xb) ``` ``fit`` runs the necessary operations to train our model and compute the training and validation losses for each epoch. 
``` import numpy as np def fit(epochs, model, loss_func, opt, train_dl, valid_dl): for epoch in range(epochs): model.train() for xb, yb in train_dl: loss_batch(model, loss_func, xb, yb, opt) model.eval() with torch.no_grad(): losses, nums = zip( *[loss_batch(model, loss_func, xb, yb) for xb, yb in valid_dl] ) val_loss = np.sum(np.multiply(losses, nums)) / np.sum(nums) print(epoch, val_loss) ``` ``get_data`` returns dataloaders for the training and validation sets. ``` def get_data(train_ds, valid_ds, bs): return ( DataLoader(train_ds, batch_size=bs, shuffle=True), DataLoader(valid_ds, batch_size=bs * 2), ) ``` Now, our whole process of obtaining the data loaders and fitting the model can be run in 3 lines of code: ``` train_dl, valid_dl = get_data(train_ds, valid_ds, bs) model, opt = get_model() fit(epochs, model, loss_func, opt, train_dl, valid_dl) ``` You can use these basic 3 lines of code to train a wide variety of models. Let's see if we can use them to train a convolutional neural network (CNN)! Switch to CNN ------------- We are now going to build our neural network with three convolutional layers. Because none of the functions in the previous section assume anything about the model form, we'll be able to use them to train a CNN without any modification. We will use Pytorch's predefined `Conv2d <https://pytorch.org/docs/stable/nn.html#torch.nn.Conv2d>`_ class as our convolutional layer. We define a CNN with 3 convolutional layers. Each convolution is followed by a ReLU. At the end, we perform an average pooling. (Note that ``view`` is PyTorch's version of numpy's ``reshape``) ``` class Mnist_CNN(nn.Module): def __init__(self): super().__init__() self.conv1 = nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1) self.conv2 = nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1) self.conv3 = nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1) def forward(self, xb): xb = xb.view(-1, 1, 28, 28) xb = F.relu(self.conv1(xb)) xb = F.relu(self.conv2(xb)) xb = F.relu(self.conv3(xb)) xb = F.avg_pool2d(xb, 4) return xb.view(-1, xb.size(1)) lr = 0.1 ``` `Momentum <https://cs231n.github.io/neural-networks-3/#sgd>`_ is a variation on stochastic gradient descent that takes previous updates into account as well and generally leads to faster training. ``` model = Mnist_CNN() opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9) fit(epochs, model, loss_func, opt, train_dl, valid_dl) ``` nn.Sequential ------------------------ ``torch.nn`` has another handy class we can use to simply our code: `Sequential <https://pytorch.org/docs/stable/nn.html#torch.nn.Sequential>`_ . A ``Sequential`` object runs each of the modules contained within it, in a sequential manner. This is a simpler way of writing our neural network. To take advantage of this, we need to be able to easily define a **custom layer** from a given function. For instance, PyTorch doesn't have a `view` layer, and we need to create one for our network. ``Lambda`` will create a layer that we can then use when defining a network with ``Sequential``. 
``` class Lambda(nn.Module): def __init__(self, func): super().__init__() self.func = func def forward(self, x): return self.func(x) def preprocess(x): return x.view(-1, 1, 28, 28) ``` The model created with ``Sequential`` is simply: ``` model = nn.Sequential( Lambda(preprocess), nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(), nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(), nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1), nn.ReLU(), nn.AvgPool2d(4), Lambda(lambda x: x.view(x.size(0), -1)), ) opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9) fit(epochs, model, loss_func, opt, train_dl, valid_dl) ``` Wrapping DataLoader ----------------------------- Our CNN is fairly concise, but it only works with MNIST, because: - It assumes the input is a 28\*28 long vector - It assumes that the final CNN grid size is 4\*4 (since that's the average pooling kernel size we used) Let's get rid of these two assumptions, so our model works with any 2d single channel image. First, we can remove the initial Lambda layer but moving the data preprocessing into a generator: ``` def preprocess(x, y): return x.view(-1, 1, 28, 28), y class WrappedDataLoader: def __init__(self, dl, func): self.dl = dl self.func = func def __len__(self): return len(self.dl) def __iter__(self): batches = iter(self.dl) for b in batches: yield (self.func(*b)) train_dl, valid_dl = get_data(train_ds, valid_ds, bs) train_dl = WrappedDataLoader(train_dl, preprocess) valid_dl = WrappedDataLoader(valid_dl, preprocess) ``` Next, we can replace ``nn.AvgPool2d`` with ``nn.AdaptiveAvgPool2d``, which allows us to define the size of the *output* tensor we want, rather than the *input* tensor we have. As a result, our model will work with any size input. ``` model = nn.Sequential( nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(), nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(), nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1), Lambda(lambda x: x.view(x.size(0), -1)), ) opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9) ``` Let's try it out: ``` fit(epochs, model, loss_func, opt, train_dl, valid_dl) ``` Using your GPU --------------- If you're lucky enough to have access to a CUDA-capable GPU (you can rent one for about $0.50/hour from most cloud providers) you can use it to speed up your code. First check that your GPU is working in Pytorch: ``` print(torch.cuda.is_available()) ``` And then create a device object for it: ``` dev = torch.device( "cuda") if torch.cuda.is_available() else torch.device("cpu") ``` Let's update ``preprocess`` to move batches to the GPU: ``` def preprocess(x, y): return x.view(-1, 1, 28, 28).to(dev), y.to(dev) train_dl, valid_dl = get_data(train_ds, valid_ds, bs) train_dl = WrappedDataLoader(train_dl, preprocess) valid_dl = WrappedDataLoader(valid_dl, preprocess) ``` Finally, we can move our model to the GPU. ``` model.to(dev) opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9) ``` You should find it runs faster now: ``` fit(epochs, model, loss_func, opt, train_dl, valid_dl) ``` Closing thoughts ----------------- We now have a general data pipeline and training loop which you can use for training many types of models using Pytorch. To see how simple training a model can now be, take a look at the `mnist_sample` sample notebook. Of course, there are many things you'll want to add, such as data augmentation, hyperparameter tuning, monitoring training, transfer learning, and so forth. 
These features are available in the fastai library, which has been developed using the same design approach shown in this tutorial, providing a natural next step for practitioners looking to take their models further.

We promised at the start of this tutorial we'd explain through example each of
``torch.nn``, ``torch.optim``, ``Dataset``, and ``DataLoader``. So let's summarize
what we've seen:

 - **torch.nn**

   + ``Module``: creates a callable which behaves like a function, but can also
     contain state (such as neural net layer weights). It knows what ``Parameter`` (s) it
     contains and can zero all their gradients, loop through them for weight updates, etc.
   + ``Parameter``: a wrapper for a tensor that tells a ``Module`` that it has weights
     that need updating during backprop. Only tensors with the ``requires_grad`` attribute set are updated.
   + ``functional``: a module (usually imported into the ``F`` namespace by convention)
     which contains activation functions, loss functions, etc, as well as non-stateful
     versions of layers such as convolutional and linear layers.
 - ``torch.optim``: Contains optimizers such as ``SGD``, which update the weights
   of ``Parameter`` during the backward step.
 - ``Dataset``: An abstract interface of objects with a ``__len__`` and a ``__getitem__``,
   including classes provided with Pytorch such as ``TensorDataset``.
 - ``DataLoader``: Takes any ``Dataset`` and creates an iterator which returns batches of data.
# ART for TensorFlow v2 - Keras API

This notebook demonstrates applying ART with the new TensorFlow v2 using the Keras API. The code follows and extends the examples on www.tensorflow.org.

```
import warnings
warnings.filterwarnings('ignore')
import tensorflow as tf
tf.compat.v1.disable_eager_execution()
import numpy as np
from matplotlib import pyplot as plt

from art.estimators.classification import KerasClassifier
from art.attacks.evasion import FastGradientMethod, CarliniLInfMethod

if tf.__version__[0] != '2':
    raise ImportError('This notebook requires TensorFlow v2.')
```

# Load MNIST dataset

```
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

x_test = x_test[0:100]
y_test = y_test[0:100]
```

# TensorFlow with Keras API

Create a model using the Keras API. Here we use the Keras Sequential model and add a sequence of layers. Afterwards the model is compiled with an optimizer, a loss function, and metrics.

```
model = tf.keras.models.Sequential([
    tf.keras.layers.InputLayer(input_shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax')
])

model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']);
```

Fit the model on training data.

```
model.fit(x_train, y_train, epochs=3);
```

Evaluate model accuracy on test data.

```
loss_test, accuracy_test = model.evaluate(x_test, y_test)
print('Accuracy on test data: {:4.2f}%'.format(accuracy_test * 100))
```

Create an ART Keras classifier for the TensorFlow Keras model.

```
classifier = KerasClassifier(model=model, clip_values=(0, 1))
```

## Fast Gradient Sign Method attack

Create an ART Fast Gradient Sign Method attack.

```
attack_fgsm = FastGradientMethod(estimator=classifier, eps=0.3)
```

Generate adversarial test data.

```
x_test_adv = attack_fgsm.generate(x_test)
```

Evaluate accuracy on adversarial test data and calculate the average perturbation.

```
loss_test, accuracy_test = model.evaluate(x_test_adv, y_test)
perturbation = np.mean(np.abs((x_test_adv - x_test)))
print('Accuracy on adversarial test data: {:4.2f}%'.format(accuracy_test * 100))
print('Average perturbation: {:4.2f}'.format(perturbation))
```

Visualise the first adversarial test sample.

```
plt.matshow(x_test_adv[0])
plt.show()
```

## Carlini&Wagner Infinity-norm attack

Create an ART Carlini&Wagner Infinity-norm attack.

```
attack_cw = CarliniLInfMethod(classifier=classifier, eps=0.3, max_iter=100, learning_rate=0.01)
```

Generate adversarial test data.

```
x_test_adv = attack_cw.generate(x_test)
```

Evaluate accuracy on adversarial test data and calculate the average perturbation.

```
loss_test, accuracy_test = model.evaluate(x_test_adv, y_test)
perturbation = np.mean(np.abs((x_test_adv - x_test)))
print('Accuracy on adversarial test data: {:4.2f}%'.format(accuracy_test * 100))
print('Average perturbation: {:4.2f}'.format(perturbation))
```

Visualise the first adversarial test sample.

```
plt.matshow(x_test_adv[0, :, :])
plt.show()
```
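Beyond aggregate accuracy, the ART classifier itself can be queried for per-sample predictions to see how many labels the attack actually flipped. A minimal sketch, assuming the `classifier`, `x_test`, and the most recently generated `x_test_adv` from above (`classifier.predict` returns class probabilities):

```
# Compare predicted labels on clean vs. adversarial test samples.
preds_clean = np.argmax(classifier.predict(x_test), axis=1)
preds_adv = np.argmax(classifier.predict(x_test_adv), axis=1)

flipped = np.mean(preds_clean != preds_adv)
print('Fraction of test samples whose predicted label changed: {:4.2f}'.format(flipped))
```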
# Prophet

Time series forecasting using Prophet

Official documentation: https://facebook.github.io/prophet/docs/quick_start.html

Prophet is a procedure for forecasting time series data based on an additive model where non-linear trends are fit with yearly, weekly, and daily seasonality, plus holiday effects. It is released by Facebook’s Core Data Science team.

An additive model has the form:

$Data = seasonal\space effect + trend + residual$

while a multiplicative model has the form:

$Data = seasonal\space effect * trend * residual$

The algorithm provides useful statistics that help visualize the fitted model, e.g. the overall trend, the weekly and yearly seasonal components, and their upper and lower uncertainty bounds.

### Data

The data on which the algorithm will be trained and tested comes from the Kaggle Hourly Energy Consumption database. It is collected by PJM Interconnection, a company coordinating the continuous buying, selling, and delivery of wholesale electricity through the Energy Market from suppliers to customers in the region of South Carolina, USA.

All .csv files contain rows with a timestamp and a value. The name of the value column corresponds to the name of the contractor. The timestamp represents a single hour and the value represents the total energy consumed during that hour.

The data we will be using is hourly power consumption data from PJM. Energy consumption has some unique characteristics. It will be interesting to see how Prophet picks them up.

https://www.kaggle.com/robikscube/hourly-energy-consumption

We pull the PJM East data, which covers 2002-2018 for the entire east region.

```
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

from fbprophet import Prophet
from sklearn.metrics import mean_squared_error, mean_absolute_error

plt.style.use('fivethirtyeight') # For plots

dataset_path = './data/hourly-energy-consumption/PJME_hourly.csv'

df = pd.read_csv(dataset_path, index_col=[0], parse_dates=[0])

print("Dataset shape:", df.shape)
df.head(10)

# VISUALIZE DATA

# Color palette for plotting
color_pal = ["#F8766D", "#D39200", "#93AA00",
             "#00BA38", "#00C19F", "#00B9E3",
             "#619CFF", "#DB72FB"]
df.plot(style='.', figsize=(20,10), color=color_pal[0], title='PJM East Dataset TS')
plt.show()

# Decompose the seasonal data
def create_features(df, label=None):
    """
    Creates time series features from datetime index.
    """
    df = df.copy()
    df['date'] = df.index
    df['hour'] = df['date'].dt.hour
    df['dayofweek'] = df['date'].dt.dayofweek
    df['quarter'] = df['date'].dt.quarter
    df['month'] = df['date'].dt.month
    df['year'] = df['date'].dt.year
    df['dayofyear'] = df['date'].dt.dayofyear
    df['dayofmonth'] = df['date'].dt.day
    df['weekofyear'] = df['date'].dt.weekofyear

    X = df[['hour','dayofweek','quarter','month','year',
           'dayofyear','dayofmonth','weekofyear']]
    if label:
        y = df[label]
        return X, y
    return X

df.columns

X, y = create_features(df, label='PJME_MW')

features_and_target = pd.concat([X, y], axis=1)
print("Shape:", features_and_target.shape)
features_and_target.head(10)

sns.pairplot(features_and_target.dropna(),
             hue='hour',
             x_vars=['hour','dayofweek',
                     'year','weekofyear'],
             y_vars='PJME_MW',
             height=5,
             plot_kws={'alpha':0.15, 'linewidth':0}
            )
plt.suptitle('Power Use MW by Hour, Day of Week, Year and Week of Year')
plt.show()
```

## Train and Test Split

We use a temporal split: the older data is kept for training and only the newer period is used for prediction.

```
split_date = '01-Jan-2015'
pjme_train = df.loc[df.index <= split_date].copy()
pjme_test = df.loc[df.index > split_date].copy()

# Plot train and test so you can see where we have split
pjme_test \
    .rename(columns={'PJME_MW': 'TEST SET'}) \
    .join(pjme_train.rename(columns={'PJME_MW': 'TRAINING SET'}),
          how='outer') \
    .plot(figsize=(15,5), title='PJM East', style='.')
plt.show()
```

To use Prophet, the timestamp and target columns need to be renamed to `ds` and `y` before being passed to the engine.

```
# Format data for the Prophet model using ds and y
# (the rename is not assigned here; it is applied inline when fitting below)
pjme_train.reset_index() \
    .rename(columns={'Datetime':'ds',
                     'PJME_MW':'y'})
print(pjme_train.columns)

pjme_train.head(5)
```

### Create and train the model

```
# Set up the model and fit it on the training set
model = Prophet()
model.fit(pjme_train.reset_index() \
              .rename(columns={'Datetime':'ds',
                               'PJME_MW':'y'}))

# Predict on the test set
pjme_test_fcst = model.predict(df=pjme_test.reset_index() \
                                   .rename(columns={'Datetime':'ds'}))

pjme_test_fcst.head()
```

### Plot the results and forecast

```
# Plot the forecast
f, ax = plt.subplots(1)
f.set_figheight(5)
f.set_figwidth(15)
fig = model.plot(pjme_test_fcst, ax=ax)
plt.show()

# Plot the components of the model
fig = model.plot_components(pjme_test_fcst)
```
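The error metrics imported at the top of this notebook (`mean_squared_error`, `mean_absolute_error`) can quantify how well the forecast tracks the held-out period. A minimal sketch, assuming the rows of `pjme_test_fcst` line up one-to-one with the rows of `pjme_test` (which they do, since the forecast was produced from the test frame):

```
# Compare the forecast against the actual test-set values.
mse = mean_squared_error(y_true=pjme_test['PJME_MW'], y_pred=pjme_test_fcst['yhat'])
mae = mean_absolute_error(y_true=pjme_test['PJME_MW'], y_pred=pjme_test_fcst['yhat'])

print('Test MSE: {:.2f}'.format(mse))
print('Test MAE: {:.2f}'.format(mae))
```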
# Scalable GP Classification in 1D (w/ KISS-GP) This example shows how to use grid interpolation based variational classification with an `ApproximateGP` using a `GridInterpolationVariationalStrategy` module. This classification module is designed for when the inputs of the function you're modeling are one-dimensional. The use of inducing points allows for scaling up the training data by making computational complexity linear instead of cubic. In this example, we’re modeling a function that is periodically labeled cycling every 1/8 (think of a square wave with period 1/4) This notebook doesn't use cuda, in general we recommend GPU use if possible and most of our notebooks utilize cuda as well. Kernel interpolation for scalable structured Gaussian processes (KISS-GP) was introduced in this paper: http://proceedings.mlr.press/v37/wilson15.pdf KISS-GP with SVI for classification was introduced in this paper: https://papers.nips.cc/paper/6426-stochastic-variational-deep-kernel-learning.pdf ``` import math import torch import gpytorch from matplotlib import pyplot as plt from math import exp %matplotlib inline %load_ext autoreload %autoreload 2 train_x = torch.linspace(0, 1, 26) train_y = torch.sign(torch.cos(train_x * (2 * math.pi))).add(1).div(2) from gpytorch.models import ApproximateGP from gpytorch.variational import CholeskyVariationalDistribution from gpytorch.variational import GridInterpolationVariationalStrategy class GPClassificationModel(ApproximateGP): def __init__(self, grid_size=128, grid_bounds=[(0, 1)]): variational_distribution = CholeskyVariationalDistribution(grid_size) variational_strategy = GridInterpolationVariationalStrategy(self, grid_size, grid_bounds, variational_distribution) super(GPClassificationModel, self).__init__(variational_strategy) self.mean_module = gpytorch.means.ConstantMean() self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel()) def forward(self,x): mean_x = self.mean_module(x) covar_x = self.covar_module(x) latent_pred = gpytorch.distributions.MultivariateNormal(mean_x, covar_x) return latent_pred model = GPClassificationModel() likelihood = gpytorch.likelihoods.BernoulliLikelihood() from gpytorch.mlls.variational_elbo import VariationalELBO # Find optimal model hyperparameters model.train() likelihood.train() # Use the adam optimizer optimizer = torch.optim.Adam(model.parameters(), lr=0.01) # "Loss" for GPs - the marginal log likelihood # n_data refers to the number of training datapoints mll = VariationalELBO(likelihood, model, num_data=train_y.numel()) def train(): num_iter = 100 for i in range(num_iter): optimizer.zero_grad() output = model(train_x) # Calc loss and backprop gradients loss = -mll(output, train_y) loss.backward() print('Iter %d/%d - Loss: %.3f' % (i + 1, num_iter, loss.item())) optimizer.step() # Get clock time %time train() # Set model and likelihood into eval mode model.eval() likelihood.eval() # Initialize axes f, ax = plt.subplots(1, 1, figsize=(4, 3)) with torch.no_grad(): test_x = torch.linspace(0, 1, 101) predictions = likelihood(model(test_x)) ax.plot(train_x.numpy(), train_y.numpy(), 'k*') pred_labels = predictions.mean.ge(0.5).float() ax.plot(test_x.data.numpy(), pred_labels.numpy(), 'b') ax.set_ylim([-1, 2]) ax.legend(['Observed Data', 'Mean', 'Confidence']) ```
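Since the `BernoulliLikelihood` turns the latent GP into a probability of the positive class, it can also be informative to plot the predicted probabilities themselves rather than only the thresholded labels. A minimal sketch reusing the objects defined above (`predictions.mean` is the class-1 probability under the Bernoulli likelihood; the model and likelihood are already in eval mode):

```
# Plot the predicted probability of the positive class instead of hard 0/1 labels.
with torch.no_grad():
    test_x = torch.linspace(0, 1, 101)
    probs = likelihood(model(test_x)).mean

f, ax = plt.subplots(1, 1, figsize=(4, 3))
ax.plot(train_x.numpy(), train_y.numpy(), 'k*')
ax.plot(test_x.numpy(), probs.numpy(), 'b')
ax.set_ylabel('P(y = 1)')
ax.legend(['Observed Data', 'Predicted Probability'])
```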
# Showing uncertainty > Uncertainty occurs everywhere in data science, but it's frequently left out of visualizations where it should be included. Here, we review what a confidence interval is and how to visualize them for both single estimates and continuous functions. Additionally, we discuss the bootstrap resampling technique for assessing uncertainty and how to visualize it properly. This is the Summary of lecture "Improving Your Data Visualizations in Python", via datacamp. - toc: true - badges: true - comments: true - author: Chanseok Kang - categories: [Python, Datacamp, Visualization] - image: images/so2_compare.png ``` import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns plt.rcParams['figure.figsize'] = (10, 5) ``` ### Point estimate intervals - When is uncertainty important? - Estimates from sample - Average of a subset - Linear model coefficients - Why is uncertainty important? - Helps inform confidence in estimate - Neccessary for decision making - Acknowledges limitations of data ### Basic confidence intervals You are a data scientist for a fireworks manufacturer in Des Moines, Iowa. You need to make a case to the city that your company's large fireworks show has not caused any harm to the city's air. To do this, you look at the average levels for pollutants in the week after the fourth of July and how they compare to readings taken after your last show. By showing confidence intervals around the averages, you can make a case that the recent readings were well within the normal range. ``` average_ests = pd.read_csv('./dataset/average_ests.csv', index_col=0) average_ests # Construct CI bounds for averages average_ests['lower'] = average_ests['mean'] - 1.96 * average_ests['std_err'] average_ests['upper'] = average_ests['mean'] + 1.96 * average_ests['std_err'] # Setup a grid of plots, with non-shared x axes limits g = sns.FacetGrid(average_ests, row='pollutant', sharex=False, aspect=2); # Plot CI for average estimate g.map(plt.hlines, 'y', 'lower', 'upper'); # Plot observed values for comparison and remove axes labels g.map(plt.scatter, 'seen', 'y', color='orangered').set_ylabels('').set_xlabels(''); ``` This simple visualization shows that all the observed values fall well within the confidence intervals for all the pollutants except for $O_3$. ### Annotating confidence intervals Your data science work with pollution data is legendary, and you are now weighing job offers in both Cincinnati, Ohio and Indianapolis, Indiana. You want to see if the SO2 levels are significantly different in the two cities, and more specifically, which city has lower levels. To test this, you decide to look at the differences in the cities' SO2 values (Indianapolis' - Cincinnati's) over multiple years. Instead of just displaying a p-value for a significant difference between the cities, you decide to look at the 95% confidence intervals (columns `lower` and `upper`) of the differences. This allows you to see the magnitude of the differences along with any trends over the years. 
```
diffs_by_year = pd.read_csv('./dataset/diffs_by_year.csv', index_col=0)
diffs_by_year

# Set the interval start and end according to the CI bounds
# Make intervals thicker
plt.hlines(y='year', xmin='lower', xmax='upper',
           linewidth=5, color='steelblue', alpha=0.7,
           data=diffs_by_year);
# Point estimates
plt.plot('mean', 'year', 'k|', data=diffs_by_year);

# Add a 'null' reference line at 0 and color orangered
plt.axvline(x=0, color='orangered', linestyle='--');

# Set descriptive axis labels and title
plt.xlabel('95% CI');
plt.title('Avg SO2 differences between Cincinnati and Indianapolis');
```

By looking at the confidence intervals you can see that the difference flipped from generally positive (more pollution in Cincinnati) in 2013 to negative (more pollution in Indianapolis) in 2014 and 2015. Given that every year's confidence interval contains the null value of zero, no p-value would be significant, and a plot that only showed significance would have entirely hidden this trend.

## Confidence bands

### Making a confidence band

Vandenberg Air Force Base is often used as a location to launch rockets into space. You have a theory that a recent increase in the pace of rocket launches could be harming the air quality in the surrounding region. To explore this, you plotted a 25-day rolling average line of the measurements of atmospheric $NO_2$. To help decide if any pattern observed is random noise or not, you decide to add a 99% confidence band around your rolling mean. Adding a confidence band to a trend line can help shed light on the stability of the trend seen. This can either increase or decrease the confidence in the discovered trend.

```
vandenberg_NO2 = pd.read_csv('./dataset/vandenberg_NO2.csv', index_col=0)
vandenberg_NO2.head()

# Draw 99% interval bands for average NO2
vandenberg_NO2['lower'] = vandenberg_NO2['mean'] - 2.58 * vandenberg_NO2['std_err']
vandenberg_NO2['upper'] = vandenberg_NO2['mean'] + 2.58 * vandenberg_NO2['std_err']

# Plot mean estimate as a white semi-transparent line
plt.plot('day', 'mean', data=vandenberg_NO2, color='white', alpha=0.4);

# Fill between the upper and lower confidence band values
plt.fill_between(x='day', y1='lower', y2='upper', data=vandenberg_NO2);
```

This plot shows that the middle of the year's $NO_2$ values are not only lower than the beginning and end of the year but also less noisy. If just the moving average line were plotted, then this potentially interesting observation would be completely missed. (Can you think of what may cause reduced variance at the lower values of the pollutant?)

### Separating a lot of bands

It is relatively simple to plot a bunch of trend lines on top of each other for rapid and precise comparisons. Unfortunately, if you need to add uncertainty bands around those lines, the plot becomes very difficult to read. Figuring out whether a line corresponds to the top of one class' band or the bottom of another's can be hard due to band overlap. Luckily in Seaborn, it's not difficult to break up the overlapping bands into separate faceted plots.

To see this, explore trends in SO2 levels for a few cities in the eastern half of the US. If you plot the trends and their confidence bands on a single plot - it's a mess. To fix this, use Seaborn's `FacetGrid()` function to spread out the confidence intervals to multiple panes to ease your inspection.
``` eastern_SO2 = pd.read_csv('./dataset/eastern_SO2.csv', index_col=0) eastern_SO2.head() # setup a grid of plots with columns divided by location g = sns.FacetGrid(eastern_SO2, col='city', col_wrap=2); # Map interval plots to each cities data with coral colored ribbons g.map(plt.fill_between, 'day', 'lower', 'upper', color='coral'); # Map overlaid mean plots with white line g.map(plt.plot, 'day', 'mean', color='white'); ``` By separating each band into its own plot you can investigate each city with ease. Here, you see that Des Moines and Houston on average have lower SO2 values for the entire year than the two cities in the Midwest. Cincinnati has a high and variable peak near the beginning of the year but is generally more stable and lower than Indianapolis. ### Cleaning up bands for overlaps You are working for the city of Denver, Colorado and want to run an ad campaign about how much cleaner Denver's air is than Long Beach, California's air. To investigate this claim, you will compare the SO2 levels of both cities for the year 2014. Since you are solely interested in how the cities compare, you want to keep the bands on the same plot. To make the bands easier to compare, decrease the opacity of the confidence bands and set a clear legend. ``` SO2_compare = pd.read_csv('./dataset/SO2_compare.csv', index_col=0) SO2_compare.head() for city, color in [('Denver', '#66c2a5'), ('Long Beach', '#fc8d62')]: # Filter data to desired city city_data = SO2_compare[SO2_compare.city == city] # Set city interval color to desired and lower opacity plt.fill_between(x='day', y1='lower', y2='upper', data=city_data, color=color, alpha=0.4); # Draw a faint mean line for reference and give a label for legend plt.plot('day', 'mean', data=city_data, label=city, color=color, alpha=0.25); plt.legend(); ``` From these two curves you can see that during the first half of the year Long Beach generally has a higher average SO2 value than Denver, in the middle of the year they are very close, and at the end of the year Denver seems to have higher averages. However, by showing the confidence intervals, you can see however that almost none of the year shows a statistically meaningful difference in average values between the two cities. ## Beyond 95% ### 90, 95, and 99% intervals You are a data scientist for an outdoor adventure company in Fairbanks, Alaska. Recently, customers have been having issues with SO2 pollution, leading to costly cancellations. The company has sensors for CO, NO2, and O3 but not SO2 levels. You've built a model that predicts SO2 values based on the values of pollutants with sensors (loaded as `pollution_model`, a `statsmodels` object). You want to investigate which pollutant's value has the largest effect on your model's SO2 prediction. This will help you know which pollutant's values to pay most attention to when planning outdoor tours. To maximize the amount of information in your report, show multiple levels of uncertainty for the model estimates. 
``` from statsmodels.formula.api import ols pollution = pd.read_csv('./dataset/pollution_wide.csv') pollution = pollution.query("city == 'Fairbanks' & year == 2014 & month == 11") pollution_model = ols(formula='SO2 ~ CO + NO2 + O3 + day', data=pollution) res = pollution_model.fit() # Add interval percent widths alphas = [ 0.01, 0.05, 0.1] widths = [ '99% CI', '95%', '90%'] colors = ['#fee08b','#fc8d59','#d53e4f'] for alpha, color, width in zip(alphas, colors, widths): # Grab confidence interval conf_ints = res.conf_int(alpha) # Pass current interval color and legend label to plot plt.hlines(y = conf_ints.index, xmin = conf_ints[0], xmax = conf_ints[1], colors = color, label = width, linewidth = 10) # Draw point estimates plt.plot(res.params, res.params.index, 'wo', label = 'Point Estimate') plt.legend(loc = 'upper right') ``` ### 90 and 95% bands You are looking at a 40-day rolling average of the $NO_2$ pollution levels for the city of Cincinnati in 2013. To provide as detailed a picture of the uncertainty in the trend you want to look at both the 90 and 99% intervals around this rolling estimate. To do this, set up your two interval sizes and an orange ordinal color palette. Additionally, to enable precise readings of the bands, make them semi-transparent, so the Seaborn background grids show through. ``` cinci_13_no2 = pd.read_csv('./dataset/cinci_13_no2.csv', index_col=0); cinci_13_no2.head() int_widths = ['90%', '99%'] z_scores = [1.67, 2.58] colors = ['#fc8d59', '#fee08b'] for percent, Z, color in zip(int_widths, z_scores, colors): # Pass lower and upper confidence bounds and lower opacity plt.fill_between( x = cinci_13_no2.day, alpha = 0.4, color = color, y1 = cinci_13_no2['mean'] - Z * cinci_13_no2['std_err'], y2 = cinci_13_no2['mean'] + Z * cinci_13_no2['std_err'], label = percent); plt.legend(); ``` This plot shows us that throughout 2013, the average NO2 values in Cincinnati followed a cyclical pattern with the seasons. However, the uncertainty bands show that for most of the year you can't be sure this pattern is not noise at both a 90 and 99% confidence level. ### Using band thickness instead of coloring You are a researcher investigating the elevation a rocket reaches before visual is lost and pollutant levels at Vandenberg Air Force Base. You've built a model to predict this relationship, and since you are working independently, you don't have the money to pay for color figures in your journal article. You need to make your model results plot work in black and white. To do this, you will plot the 90, 95, and 99% intervals of the effect of each pollutant as successively smaller bars. ``` rocket_model = pd.read_csv('./dataset/rocket_model.csv', index_col=0) rocket_model # Decrase interval thickness as interval widens sizes = [ 15, 10, 5] int_widths = ['90% CI', '95%', '99%'] z_scores = [ 1.67, 1.96, 2.58] for percent, Z, size in zip(int_widths, z_scores, sizes): plt.hlines(y = rocket_model.pollutant, xmin = rocket_model['est'] - Z * rocket_model['std_err'], xmax = rocket_model['est'] + Z * rocket_model['std_err'], label = percent, # Resize lines and color them gray linewidth = size, color = 'gray'); # Add point estimate plt.plot('est', 'pollutant', 'wo', data = rocket_model, label = 'Point Estimate'); plt.legend(loc = 'center left', bbox_to_anchor = (1, 0.5)); ``` While less elegant than using color to differentiate interval sizes, this plot still clearly allows the reader to access the effect each pollutant has on rocket visibility. 
You can see that of all the pollutants, O3 has the largest effect and also the tightest confidence bounds.

## Visualizing the bootstrap

### The bootstrap histogram

You are considering a vacation to Cincinnati in May, but you have a severe sensitivity to NO2. You pull a few years of pollution data from Cincinnati in May and look at a bootstrap estimate of the average $NO_2$ levels. Since you only have one estimate to look at, the best way to visualize the results of your bootstrap estimates is with a histogram.

While you like the intuition of the bootstrap histogram by itself, your partner, who will be going on the vacation with you, likes seeing percent intervals. To accommodate them, you decide to highlight the 95% interval by shading the region.

```
# Perform bootstrapped mean on a vector
def bootstrap(data, n_boots):
    return [np.mean(np.random.choice(data,len(data))) for _ in range(n_boots) ]

pollution = pd.read_csv('./dataset/pollution_wide.csv')
cinci_may_NO2 = pollution.query("city == 'Cincinnati' & month == 5").NO2

# Generate bootstrap samples
boot_means = bootstrap(cinci_may_NO2, 1000)

# Get lower and upper 95% interval bounds
lower, upper = np.percentile(boot_means, [2.5, 97.5])

# Plot shaded area for interval
plt.axvspan(lower, upper, color = 'gray', alpha = 0.2);

# Draw histogram of bootstrap samples
sns.distplot(boot_means, bins = 100, kde = False);
```

Your bootstrap histogram looks stable and uniform. You're now confident that the average NO2 levels in Cincinnati during your vacation should be in the range of 16 to 23.

### Bootstrapped regressions

While working for the Long Beach parks and recreation department investigating the relationship between $NO_2$ and $SO_2$, you noticed a cluster of potential outliers that you suspect might be throwing off the correlations. Investigate the uncertainty of your correlations through bootstrap resampling to see how stable your fits are.

For convenience, the bootstrap sampling is complete and is provided as `no2_so2_boot` along with `no2_so2` for the non-resampled data.

```
no2_so2 = pd.read_csv('./dataset/no2_so2.csv', index_col=0)
no2_so2_boot = pd.read_csv('./dataset/no2_so2_boot.csv', index_col=0)

sns.lmplot('NO2', 'SO2', data = no2_so2_boot,
           # Tell seaborn to draw a regression line for each sample
           hue = 'sample',
           # Make lines blue and transparent
           line_kws = {'color': 'steelblue', 'alpha': 0.2},
           # Disable built-in confidence intervals
           ci = None, legend = False, scatter = False);

# Draw scatter of all points
plt.scatter('NO2', 'SO2', data = no2_so2);
```

The outliers appear to drag down the regression lines, as evidenced by the cluster of lines with more severe slopes than average. In a single plot, you have not only gotten a good idea of the variability of your correlation estimate but also the potential effects of outliers.

### Lots of bootstraps with beeswarms

As a current resident of Cincinnati, you're curious to see how the average NO2 values compare to Des Moines, Indianapolis, and Houston: a few other cities you've lived in. To look at this, you decide to use bootstrap estimation to look at the mean NO2 values for each city. Because the comparisons are of primary interest, you will use a swarm plot to compare the estimates.
``` pollution_may = pollution.query("month == 5") pollution_may # Initialize a holder DataFrame for bootstrap results city_boots = pd.DataFrame() for city in ['Cincinnati', 'Des Moines', 'Indianapolis', 'Houston']: # Filter to city city_NO2 = pollution_may[pollution_may.city == city].NO2 # Bootstrap city data & put in DataFrame cur_boot = pd.DataFrame({'NO2_avg': bootstrap(city_NO2, 100), 'city': city}) # Append to other city's bootstraps city_boots = pd.concat([city_boots,cur_boot]) # Beeswarm plot of averages with citys on y axis sns.swarmplot(y = "city", x = "NO2_avg", data = city_boots, color = 'coral'); ``` The beeswarm plots show that Indianapolis and Houston both have the highest average NO2 values, with Cincinnati falling roughly in the middle. Interestingly, you can rather confidently say that Des Moines has the lowest as nearly all its sample estimates fall below those of the other cities.
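To back up the visual comparison with numbers, the same bootstrap samples can be summarized as a percentile interval per city. A minimal sketch using the `city_boots` DataFrame built above:

```
# 95% percentile interval of the bootstrapped NO2 averages for each city
city_intervals = city_boots.groupby('city')['NO2_avg'].quantile([0.025, 0.975]).unstack()
print(city_intervals)
```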