```
import matplotlib.pyplot as plt
import numpy as np
from mpl_toolkits.mplot3d import Axes3D
import scipy as sp
import sympy as sy
sy.init_printing()
np.set_printoptions(precision=3)
np.set_printoptions(suppress=True)
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all" # display multiple results
def round_expr(expr, num_digits):
return expr.xreplace({n : round(n, num_digits) for n in expr.atoms(sy.Number)})
```
# <font face="gotham" color="purple"> Matrix Operations
Matrix operations are straightforward; the addition properties are as follows:
1. $\pmb{A}+\pmb B=\pmb B+\pmb A$
2. $(\pmb{A}+\pmb{B})+\pmb C=\pmb{A}+(\pmb{B}+\pmb{C})$
3. $c(\pmb{A}+\pmb{B})=c\pmb{A}+c\pmb{B}$
4. $(c+d)\pmb{A}=c\pmb{A}+d\pmb{A}$
5. $c(d\pmb{A})=(cd)\pmb{A}$
6. $\pmb{A}+\pmb{0}=\pmb{A}$, where $\pmb{0}$ is the zero matrix
7. For any $\pmb{A}$, there exists an $-\pmb A$, such that $\pmb A+(-\pmb A)=\pmb0$.
These properties are straightforward, so no proofs are provided here. The matrix multiplication properties are:
1. $\pmb A(\pmb{BC})=(\pmb{AB})\pmb C$
2. $c(\pmb{AB})=(c\pmb{A})\pmb{B}=\pmb{A}(c\pmb{B})$
3. $\pmb{A}(\pmb{B}+\pmb C)=\pmb{AB}+\pmb{AC}$
4. $(\pmb{B}+\pmb{C})\pmb{A}=\pmb{BA}+\pmb{CA}$
Note that we need to differentiate two kinds of multiplication, <font face="gotham" color="red">Hadamard multiplication</font> (element-wise multiplication) and <font face="gotham" color="red">matrix multiplication</font>:
```
A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
A*B # this is Hadamard elementwise product
A@B # this is matrix product
```
The matrix multiplication rule, computed entry by entry, is
```
np.sum(A[0,:]*B[:,0]) # (1, 1)
np.sum(A[1,:]*B[:,0]) # (2, 1)
np.sum(A[0,:]*B[:,1]) # (1, 2)
np.sum(A[1,:]*B[:,1]) # (2, 2)
```
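The same rule can be written as an explicit double loop over the output entries; this is just a sketch to make the index pattern clear (NumPy's `@` uses optimized routines internally):
```
# Sketch: C[i, j] is the dot product of row i of A with column j of B
def matmul(A, B):
    n, k = A.shape
    k2, m = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            C[i, j] = np.sum(A[i, :] * B[:, j])
    return C

matmul(A, B)  # same result as A @ B
```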
## <font face="gotham" color="purple"> SymPy Demonstration: Addition
Let's define all the letters as symbols in case we might use them.
```
a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p, q, r, s, t, u, v, w, x, y, z = sy.symbols('a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p, q, r, s, t, u, v, w, x, y, z', real = True)
A = sy.Matrix([[a, b, c], [d, e, f]])
A + A
A - A
B = sy.Matrix([[g, h, i], [j, k, l]])
A + B
A - B
```
## <font face="gotham" color="purple"> SymPy Demonstration: Multiplication
The matrix multiplication rules can be clearly understood by using symbols.
```
A = sy.Matrix([[a, b, c], [d, e, f]])
B = sy.Matrix([[g, h, i], [j, k, l], [m, n, o]])
A
B
AB = A*B; AB
```
## <font face="gotham" color="purple"> Commutability
Matrix multiplication usually does not commute, i.e. $\pmb{AB} \neq \pmb{BA}$. For instance, consider $\pmb A$ and $\pmb B$:
```
A = sy.Matrix([[3, 4], [7, 8]])
B = sy.Matrix([[5, 3], [2, 1]])
A*B
B*A
```
How do we find commutable matrices?
```
A = sy.Matrix([[a, b], [c, d]])
B = sy.Matrix([[e, f], [g, h]])
A*B
B*A
```
To make $\pmb{AB} = \pmb{BA}$, we require $\pmb{AB} - \pmb{BA} = \pmb{0}$:
```
M = A*B - B*A
M
```
\begin{align}
b g - c f&=0 \\
a f - b e + b h - d f&=0\\
- a g + c e - c h + d g&=0 \\
- b g + c f&=0
\end{align}
If we treat $a, b, c, d$ as coefficients of the system, we can extract an augmented matrix:
```
A_aug = sy.Matrix([[0, -c, b, 0], [-b, a-d, 0, b], [c, 0, d -a, -c], [0, c, -b, 0]]); A_aug
```
Perform Gauss-Jordan elimination until the matrix is in reduced row echelon form.
```
A_aug.rref()
```
The general solution is
\begin{align}
e - \frac{a-d}{c}g - h &=0\\
f - \frac{b}{c}g & =0\\
g &= \text{free}\\
h & =\text{free}
\end{align}
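As a cross-check, we can ask SymPy to solve the four equations above for $e$ and $f$ directly, treating $g$ and $h$ as free variables (a sketch using the symbols already defined):
```
# Solve the commutativity equations for e and f; g and h remain free
eqs = [b*g - c*f,
       a*f - b*e + b*h - d*f,
       -a*g + c*e - c*h + d*g,
       -b*g + c*f]
sy.solve(eqs, [e, f])  # expected: {e: (a - d)*g/c + h, f: b*g/c}
```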
If we set the coefficients $a = 10, b = 12, c = 20, d = 8$, i.e. $\pmb A = \left[\begin{matrix}10 & 12\\20 & 8\end{matrix}\right]$, then the general solution becomes
\begin{align}
e - 0.1g - h &=0\\
f - 0.6g & =0\\
g &= \text{free}\\
h & =\text{free}
\end{align}
Then try a particular solution with $g = h = 1$:
\begin{align}
e &=1.1\\
f & =.6\\
g &=1 \\
h & =1
\end{align}
This gives us a <font face="gotham" color="red">matrix that commutes with $\pmb A$</font>, which we denote $\pmb C$.
```
C = sy.Matrix([[1.1, .6], [1, 1]]);C
```
Now we can verify that $\pmb{AC}=\pmb{CA}$.
```
A = sy.Matrix([[10, 12], [20, 8]])
A*C
C*A
```
# <font face="gotham" color="purple"> Transpose of Matrices
A matrix $A_{n\times m}$ and its transpose:
```
A = np.array([[1, 2, 3], [4, 5, 6]]); A
A.T # transpose
A = sy.Matrix([[1, 2, 3], [4, 5, 6]]); A
A.transpose()
```
The properties of transpose are
1. $(A^T)^T=A$
2. $(A+B)^T=A^T+B^T$
3. $(cA)^T=cA^T$
4. $(AB)^T=B^TA^T$
We can verify the fourth property with SymPy:
```
A = sy.Matrix([[a, b], [c, d], [e, f]])
B = sy.Matrix([[g, h, i], [j, k, l]])
AB = A*B
AB_tr = AB.transpose(); AB_tr
A_tr_B_tr = B.transpose()*A.transpose()
A_tr_B_tr
AB_tr - A_tr_B_tr
```
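The remaining properties can be checked the same way; for instance, a quick sketch for the second property, $(A+B)^T=A^T+B^T$, using two symbolic matrices of the same shape (new names so the ones above are not clobbered):
```
# Verify (P + Q)^T = P^T + Q^T symbolically; the difference is the zero matrix
P = sy.Matrix([[a, b, c], [d, e, f]])
Q = sy.Matrix([[g, h, i], [j, k, l]])
(P + Q).transpose() - (P.transpose() + Q.transpose())
```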
# <font face="gotham" color="purple"> Identity and Inverse Matrices
## <font face="gotham" color="purple"> Identity Matrices
Identity matrix properties:
$$
AI=IA = A
$$
Let's generate $\pmb I$ and $\pmb A$:
```
I = np.eye(5); I
A = np.around(np.random.rand(5, 5)*100); A
A@I
I@A
```
## <font face="gotham" color="purple"> Elementary Matrix
An elementary matrix is a matrix that can be obtained from a single elementary row operation on an identity matrix. For example:
$$
\left[\begin{matrix}1 & 0 & 0\cr 0 & 1 & 0\cr 0 & 0 & 1\end{matrix}\right]\ \matrix{R_1\leftrightarrow R_2\cr ~\cr ~}\qquad\Longrightarrow\qquad \left[\begin{matrix}0 & 1 & 0\cr 1 & 0 & 0\cr 0 & 0 & 1\end{matrix}\right]
$$
The elementary matrix above is created by switching rows 1 and 2; we denote it as $\pmb{E}$. Let's left-multiply $\pmb E$ onto a matrix $\pmb A$. First, generate $\pmb A$:
```
A = sy.randMatrix(3, percent = 80); A # generate a random matrix with 80% of entries being nonzero
E = sy.Matrix([[0, 1, 0], [1, 0, 0], [0, 0, 1]]);E
```
It turns out that multiplying $\pmb E$ onto $\pmb A$ switches rows 1 and 2 of $\pmb A$ as well.
```
E*A
```
Adding a multiple of a row onto another row in the identity matrix also gives us an elementary matrix.
$$
\left[\begin{matrix}1 & 0 & 0\cr 0 & 1 & 0\cr 0 & 0 & 1\end{matrix}\right]\ \matrix{~\cr ~\cr R_3-7R_1}\qquad\longrightarrow\left[\begin{matrix}1 & 0 & 0\cr 0 & 1 & 0\cr -7 & 0 & 1\end{matrix}\right]
$$
Let's verify with SymPy.
```
A = sy.randMatrix(3, percent = 80); A
E = sy.Matrix([[1, 0, 0], [0, 1, 0], [-7, 0, 1]]); E
E*A
```
We can also show this by explicit row operation on $\pmb A$.
```
EA = sy.matrices.MatrixBase.copy(A)
EA[2,:]=-7*EA[0,:]+EA[2,:]
EA
```
An important conclusion about multiplying elementary matrices, which we will use later, is that any invertible matrix can be written as a product of a series of elementary matrices, as sketched below.
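Here is a minimal sketch with a concrete $2\times 2$ matrix: three elementary row operations, applied as left multiplications, reduce it to $\mathbf I$, so the matrix equals the product of the inverses of those elementary matrices.
```
# Reduce an invertible matrix to I with elementary matrices E1, E2, E3,
# so that A = E1^{-1} E2^{-1} E3^{-1}
A = sy.Matrix([[1, 2], [3, 4]])
E1 = sy.Matrix([[1, 0], [-3, 1]])                  # R2 -> R2 - 3*R1
E2 = sy.Matrix([[1, 0], [0, sy.Rational(-1, 2)]])  # R2 -> -1/2 * R2
E3 = sy.Matrix([[1, -2], [0, 1]])                  # R1 -> R1 - 2*R2
E3 * E2 * E1 * A                 # the identity matrix
E1.inv() * E2.inv() * E3.inv()   # recovers A
```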
## <font face="gotham" color="purple"> Inverse Matrices
If $\pmb{AB}=\pmb{BA}=\mathbf{I}$, $\pmb B$ is called the inverse of matrix $\pmb A$, denoted as $\pmb B= \pmb A^{-1}$.
NumPy has a convenient function ```np.linalg.inv()``` for computing inverse matrices. Generate $\pmb A$:
```
A = np.round(10*np.random.randn(5,5)); A
Ainv = np.linalg.inv(A)
Ainv
A@Ainv
```
The ```-0.``` entries mean there are more digits after the decimal point, which are omitted here.
### <font face="gotham" color="purple"> $[A\,|\,I]\sim [I\,|\,A^{-1}]$ Algorithm
A convenient way of calculating the inverse is to construct an augmented matrix $[\pmb A\,|\,\mathbf{I}]$, then left-multiply a series of $\pmb E$'s (elementary row operations) until the augmented matrix is in reduced row echelon form, i.e. $\pmb A \rightarrow \mathbf{I}$. The $\mathbf{I}$ on the RHS of the augmented matrix is then converted into $\pmb A^{-1}$ automatically.
We can show with SymPy's ```.rref()``` function on the augmented matrix $[A\,|\,I]$.
```
AI = np.hstack((A, I)) # stack the matrix A and I horizontally
AI = sy.Matrix(AI); AI
AI_rref = AI.rref(); AI_rref
```
Extract the RHS block; this is $A^{-1}$.
```
Ainv = AI_rref[0][:,5:];Ainv # extract the RHS block
```
I wrote a function to round the floating-point numbers to $4$ digits, but this is not absolutely necessary.
```
round_expr(Ainv, 4)
```
We can verify that $AA^{-1}=\mathbf{I}$:
```
A = sy.Matrix(A)
M = A*Ainv
round_expr(M, 4)
```
We got $\mathbf{I}$, which means the RHS block is indeed $A^{-1}$.
### <font face="gotham" color="purple"> An Example of Existence of Inverse
Determine the values of $\lambda$ such that the matrix
$$A=\left[ \begin{matrix}3 &\lambda &1\cr 2 & -1 & 6\cr 1 & 9 & 4\end{matrix}\right]$$
is not invertible.
Again, we use SymPy to solve the problem.
```
lamb = sy.symbols('lamda') # SymPy will automatically render into LaTeX greek letters
A = np.array([[3, lamb, 1], [2, -1, 6], [1, 9, 4]])
I = np.eye(3)
AI = np.hstack((A, I))
AI = sy.Matrix(AI)
AI_rref = AI.rref()
AI_rref
```
For the matrix $A$ to be invertible, we notice that there is one condition to be satisfied (it appears in every denominator):
\begin{align}
-6\lambda -465 &\neq0\\
\end{align}
Solve for $\lambda$.
```
sy.solvers.solve(-6*lamb-465, lamb)
```
Let's check with the determinant. If $|\pmb A|=0$, then the matrix is not invertible. Don't worry, we will come back to determinants later.
```
A = np.array([[3, -155/2, 1], [2, -1, 6], [1, 9, 4]])
np.linalg.det(A)
```
The computed $|\pmb A|$ is practically $0$. So the condition is that as long as $\lambda \neq -\frac{155}{2}$, the matrix $A$ is invertible.
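An equivalent check is to compute the determinant symbolically and solve $|\pmb A|=0$ for $\lambda$ directly; this is a short sketch using the `lamb` symbol defined above:
```
# Symbolic determinant: the matrix is singular exactly when det(A) = 0
A_sym = sy.Matrix([[3, lamb, 1], [2, -1, 6], [1, 9, 4]])
A_sym.det()                   # -2*lamda - 155
sy.solve(A_sym.det(), lamb)   # [-155/2]
```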
### <font face="gotham" color="purple"> Properties of Inverse Matrices
1. If $A$ and $B$ are both invertible, then $(AB)^{-1}=B^{-1}A^{-1}$.
2. If $A$ is invertible, then $(A^T)^{-1}=(A^{-1})^T$.
3. If $A$ and $B$ are both invertible and symmetric such that $AB=BA$, then $A^{-1}B$ is symmetric.
The <font face="gotham" color="red"> first property</font> is straightforward: since
\begin{align}
ABB^{-1}A^{-1}=AIA^{-1}=I
\end{align}
and the inverse of $AB$ is unique, it follows that $(AB)^{-1}=B^{-1}A^{-1}$.
The <font face="gotham" color="red"> second property</font> is to show
$$
A^T(A^{-1})^T = I
$$
We can use the property of transpose
$$
A^T(A^{-1})^T=(A^{-1}A)^T = I^T = I
$$
The <font face="gotham" color="red">third property</font> is to show
$$
A^{-1}B = (A^{-1}B)^T
$$
Again, use the property of transpose together with the symmetry of $A$ and $B$ (i.e. $A^T=A$ and $B^T=B$):
$$
(A^{-1}B)^{T}=B^T(A^{-1})^T=B(A^T)^{-1}=BA^{-1}
$$
We use the $AB = BA$ condition to continue
\begin{align}
AB&=BA\\
A^{-1}ABA^{-1}&=A^{-1}BAA^{-1}\\
BA^{-1}&=A^{-1}B
\end{align}
Then, plugging into the previous equation, we have
$$
(A^{-1}B)^{T}=BA^{-1}=A^{-1}B
$$
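As a quick numerical sanity check of the first two properties, we can draw random matrices (which are invertible with probability one) and compare both sides; this is just a sketch:
```
# Check (AB)^{-1} = B^{-1}A^{-1} and (A^T)^{-1} = (A^{-1})^T numerically
A = np.random.randn(4, 4)
B = np.random.randn(4, 4)
np.allclose(np.linalg.inv(A @ B), np.linalg.inv(B) @ np.linalg.inv(A))  # True
np.allclose(np.linalg.inv(A.T), np.linalg.inv(A).T)                     # True
```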
# Exploring Neural Audio Synthesis with NSynth
## Parag Mital
There is a lot to explore with NSynth. This notebook explores just a taste of what's possible including how to encode and decode, timestretch, and interpolate sounds. Also check out the [blog post](https://magenta.tensorflow.org/nsynth-fastgen) for more examples including two compositions created with Ableton Live. If you are interested in learning more, checkout my [online course on Kadenze](https://www.kadenze.com/programs/creative-applications-of-deep-learning-with-tensorflow) where we talk about Magenta and NSynth in more depth.
## Part 1: Encoding and Decoding
We'll walk through using the source code to encode and decode some audio. This is the most basic thing we can do with NSynth, and it will take at least about 6 minutes per 1 second of audio on a GPU, though this will get faster!
I'll first show you how to encode some audio. This is basically saying, here is some audio, now put it into the trained model. It's like the encoding of an MP3 file. It takes some raw audio, and represents it using some really reduced down representation of the raw audio. NSynth works similarly, but we can actually mess with the encoding to do some awesome stuff. You can for instance, mix it with other encodings, or slow it down, or speed it up. You can potentially even remove parts of it, mix many different encodings together, and hopefully just explore ideas yet to be thought of. After you've created your encoding, you have to just generate, or decode it, just like what an audio player does to an MP3 file.
First, to install Magenta, follow their setup guide here: https://github.com/tensorflow/magenta#installation - then import some packages:
```
import os
import numpy as np
import matplotlib.pyplot as plt
from magenta.models.nsynth import utils
from magenta.models.nsynth.wavenet import fastgen
from IPython.display import Audio
%matplotlib inline
%config InlineBackend.figure_format = 'jpg'
```
Now we'll load up a sound I downloaded from freesound.org. The `utils.load_audio` method will resample this to the required sample rate of 16000. I'll load in 40000 samples of this beat which should end up being a pretty good loop:
```
# from https://www.freesound.org/people/MustardPlug/sounds/395058/
fname = '395058__mustardplug__breakbeat-hiphop-a4-4bar-96bpm.wav'
sr = 16000
audio = utils.load_audio(fname, sample_length=40000, sr=sr)
sample_length = audio.shape[0]
print('{} samples, {} seconds'.format(sample_length, sample_length / float(sr)))
```
## Encoding
We'll now encode some audio using the pre-trained NSynth model (download from: http://download.magenta.tensorflow.org/models/nsynth/wavenet-ckpt.tar). This is pretty fast, and takes about 3 seconds per 1 second of audio on my NVidia 1080 GPU. This will give us a 125 x 16 dimension encoding for every 4 seconds of audio which we can then decode, or resynthesize. We'll try a few things, including just leaving it alone and reconstructing it as is. But then we'll also try some fun transformations of the encoding and see what's possible from there.
```
help(fastgen.encode)
Help on function encode in module magenta.models.nsynth.wavenet.fastgen:
encode(wav_data, checkpoint_path, sample_length=64000)
Generate an array of embeddings from an array of audio.
Args:
wav_data: Numpy array [batch_size, sample_length]
checkpoint_path: Location of the pretrained model.
sample_length: The total length of the final wave file, padded with 0s.
Returns:
encoding: a [mb, 125, 16] encoding (for 64000 sample audio file).
```
```
%time encoding = fastgen.encode(audio, 'model.ckpt-200000', sample_length)
```
This returns a 3-dimensional tensor representing the encoding of the audio. The first dimension of the encoding represents the batch dimension. We could have passed in many audio files at once and the process would be much faster. For now we've just passed in one audio file.
```
print(encoding.shape)
```
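For reference, batching is just a matter of stacking sounds along the first axis, since `fastgen.encode` accepts a `[batch_size, sample_length]` array (see the docstring above). This is a hypothetical sketch; the file names below are placeholders:
```
# Hypothetical sketch: encode two clips in one call by stacking them
clip_a = utils.load_audio('first_clip.wav', sample_length=40000, sr=sr)
clip_b = utils.load_audio('second_clip.wav', sample_length=40000, sr=sr)
batch = np.vstack([clip_a[:40000], clip_b[:40000]])
encodings = fastgen.encode(batch, 'model.ckpt-200000', 40000)  # batch of encodings
```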
We'll also save the encoding so that we can use it again later:
```
np.save(fname + '.npy', encoding)
```
Let's take a look at the encoding of this audio file. Think of these as 16 channels of sounds all mixed together (though with a lot of caveats):
```
fig, axs = plt.subplots(2, 1, figsize=(10, 5))
axs[0].plot(audio);
axs[0].set_title('Audio Signal')
axs[1].plot(encoding[0]);
axs[1].set_title('NSynth Encoding')
```
You should be able to see a beat-like pattern fairly clearly in both the signal and the encoding.
## Decoding
Now we can decode the encodings as is. This is the process that takes a while, though it used to take so long that you wouldn't even dare try it. There is still plenty of room for improvement and I'm sure it will get faster very soon.
```
help(fastgen.synthesize)
Help on function synthesize in module magenta.models.nsynth.wavenet.fastgen:
synthesize(encodings, save_paths, checkpoint_path='model.ckpt-200000', samples_per_save=1000)
Synthesize audio from an array of embeddings.
Args:
encodings: Numpy array with shape [batch_size, time, dim].
save_paths: Iterable of output file names.
checkpoint_path: Location of the pretrained model. [model.ckpt-200000]
samples_per_save: Save files after every amount of generated samples.
```
```
%time fastgen.synthesize(encoding, save_paths=['gen_' + fname], samples_per_save=sample_length)
```
After it's done synthesizing, we can see that it takes about 6 minutes per 1 second of audio on a non-optimized GPU build of TensorFlow on an NVidia 1080 GPU. We can speed things up considerably if we want to do multiple encodings at a time. We'll see that in just a moment. Let's first listen to the synthesized audio:
```
sr = 16000
synthesis = utils.load_audio('gen_' + fname, sample_length=sample_length, sr=sr)
```
Listening to the audio, the sounds are definitely different. NSynth seems to apply a sort of gobbly low-pass that also really doesn't know what to do with the high frequencies. It is really quite hard to describe, but that is what is so interesting about it. It has a recognizable, characteristic sound.
Let's try another one. I'll put the whole workflow for synthesis in two cells, and we can listen to another synthesis of a vocalist singing, "Laaaa":
```
def load_encoding(fname, sample_length=None, sr=16000, ckpt='model.ckpt-200000'):
audio = utils.load_audio(fname, sample_length=sample_length, sr=sr)
encoding = fastgen.encode(audio, ckpt, sample_length)
return audio, encoding
# from https://www.freesound.org/people/maurolupo/sounds/213259/
fname = '213259__maurolupo__girl-sings-laa.wav'
sample_length = 32000
audio, encoding = load_encoding(fname, sample_length)
fastgen.synthesize(
encoding,
save_paths=['gen_' + fname],
samples_per_save=sample_length)
synthesis = utils.load_audio('gen_' + fname,
sample_length=sample_length,
sr=sr)
```
Aside from the quality of the reconstruction, what we're really after is what is possible with such a model. Let's look at two examples now.
# Part 2: Timestretching
Let's try something more fun. We'll stretch the encodings a bit and see what it sounds like. If you were to try and stretch audio directly, you'd hear a pitch shift. There are some other ways of stretching audio without shifting pitch, like granular synthesis. But it turns out that NSynth can also timestretch. Let's see how. First we'll use image interpolation to help stretch the encodings.
```
# use image interpolation to stretch the encoding: (pip install scikit-image)
try:
from skimage.transform import resize
except ImportError:
!pip install scikit-image
from skimage.transform import resize
```
Here's a utility function to help you stretch your own encoding. It uses skimage.transform and will retain the range of values. Images typically only have a range of 0-1, but the encodings aren't actually images so we'll keep track of their min/max in order to stretch them like images.
```
def timestretch(encodings, factor):
    min_encoding, max_encoding = encodings.min(), encodings.max()
encodings_norm = (encodings - min_encoding) / (max_encoding - min_encoding)
timestretches = []
for encoding_i in encodings_norm:
stretched = resize(encoding_i, (int(encoding_i.shape[0] * factor), encoding_i.shape[1]), mode='reflect')
stretched = (stretched * (max_encoding - min_encoding)) + min_encoding
timestretches.append(stretched)
return np.array(timestretches)
# from https://www.freesound.org/people/MustardPlug/sounds/395058/
fname = '395058__mustardplug__breakbeat-hiphop-a4-4bar-96bpm.wav'
sample_length = 40000
audio, encoding = load_encoding(fname, sample_length)
```
Now let's stretch the encodings with a few different factors:
```
encoding_slower = timestretch(encoding, 1.5)
encoding_faster = timestretch(encoding, 0.5)
```
Basically we've made a slower and a faster version of the amen break's encodings. The original encoding is shown in the top panel:
```
fig, axs = plt.subplots(3, 1, figsize=(10, 7), sharex=True, sharey=True)
axs[0].plot(encoding[0]);
axs[0].set_title('Encoding (Normal Speed)')
axs[1].plot(encoding_faster[0]);
axs[1].set_title('Encoding (Faster)')
axs[2].plot(encoding_slower[0]);
axs[2].set_title('Encoding (Slower)')
```
Now let's decode them and then listen to the slower version:
```
fastgen.synthesize(encoding_faster, save_paths=['gen_faster_' + fname])
fastgen.synthesize(encoding_slower, save_paths=['gen_slower_' + fname])
# listen to the slower version once synthesis has finished
audio = utils.load_audio('gen_slower_' + fname, sample_length=None, sr=sr)
Audio(audio, rate=sr)
```
It seems to work pretty well and retains the pitch and timbre of the original sound. We could even quickly layer the sounds just by adding them. You might want to do this in a program like Logic or Ableton Live instead and explore more possibilities of these sounds!
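For instance, a rough sketch of layering the two generated files right here in Python, simply by summing and normalizing the waveforms:
```
# Sketch: layer the faster and slower syntheses by summing the waveforms
faster = utils.load_audio('gen_faster_' + fname, sample_length=None, sr=sr)
slower = utils.load_audio('gen_slower_' + fname, sample_length=None, sr=sr)
n = min(len(faster), len(slower))
layered = faster[:n] + slower[:n]
layered = layered / np.abs(layered).max()  # normalize to avoid clipping
Audio(layered, rate=sr)
```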
# Part 3: Interpolating Sounds
Now let's try something more experimental. NSynth released plenty of great examples of what happens when you mix the embeddings of different sounds: https://magenta.tensorflow.org/nsynth-instrument - we're going to do the same but now with our own sounds!
First let's load some encodings:
```
sample_length = 80000
# from https://www.freesound.org/people/MustardPlug/sounds/395058/
aud1, enc1 = load_encoding('395058__mustardplug__breakbeat-hiphop-a4-4bar-96bpm.wav', sample_length)
# from https://www.freesound.org/people/xserra/sounds/176098/
aud2, enc2 = load_encoding('176098__xserra__cello-cant-dels-ocells.wav', sample_length)
```
Now we'll mix the two audio signals together. But this is unlike adding the two signals together in Ableton or simply hearing both sounds at the same time. Instead, we're averaging the representation of their timbres, tonality, change over time, and resulting audio signal. This is way more powerful than simply averaging the signals.
```
enc_mix = (enc1 + enc2) / 2.0
fig, axs = plt.subplots(3, 1, figsize=(10, 7))
axs[0].plot(enc1[0]);
axs[0].set_title('Encoding 1')
axs[1].plot(enc2[0]);
axs[1].set_title('Encoding 2')
axs[2].plot(enc_mix[0]);
axs[2].set_title('Average')
fastgen.synthesize(enc_mix, save_paths=['mix.wav'])
```
As another example of what's possible with interpolation of embeddings, we'll try crossfading between the two embeddings. To do this, we'll write a utility function which uses a Hanning window to apply a fade-in or fade-out to the embeddings matrix:
```
def fade(encoding, mode='in'):
length = encoding.shape[1]
fadein = (0.5 * (1.0 - np.cos(3.1415 * np.arange(length) /
float(length)))).reshape(1, -1, 1)
if mode == 'in':
return fadein * encoding
else:
return (1.0 - fadein) * encoding
fig, axs = plt.subplots(3, 1, figsize=(10, 7))
axs[0].plot(enc1[0]);
axs[0].set_title('Original Encoding')
axs[1].plot(fade(enc1, 'in')[0]);
axs[1].set_title('Fade In')
axs[2].plot(fade(enc1, 'out')[0]);
axs[2].set_title('Fade Out')
```
Now we can crossfade two different encodings by adding their respective fade-ins and fade-outs:
```
def crossfade(encoding1, encoding2):
return fade(encoding1, 'out') + fade(encoding2, 'in')
fig, axs = plt.subplots(3, 1, figsize=(10, 7))
axs[0].plot(enc1[0]);
axs[0].set_title('Encoding 1')
axs[1].plot(enc2[0]);
axs[1].set_title('Encoding 2')
axs[2].plot(crossfade(enc1, enc2)[0]);
axs[2].set_title('Crossfade')
```
Now let's synthesize the resulting encodings:
```
fastgen.synthesize(crossfade(enc1, enc2), save_paths=['crossfade.wav'])
```
There is a lot to explore with NSynth. So far I've just shown you a taste of what's possible when you are able to generate your own sounds. I expect the generation process will soon get much faster, especially with help from the community, and for more unexpected and interesting applications to emerge. Please keep in touch with whatever you end up creating, either personally via [twitter](https://twitter.com/pkmital), in our [Creative Applications of Deep Learning](https://www.kadenze.com/programs/creative-applications-of-deep-learning-with-tensorflow) community on Kadenze, or the [Magenta Google Group](https://groups.google.com/a/tensorflow.org/forum/#!forum/magenta-discuss).
# Comprehensive Example
```
# Enabling the `widget` backend.
# This requires jupyter-matplotlib a.k.a. ipympl.
# ipympl can be installed via pip or conda.
%matplotlib widget
import matplotlib.pyplot as plt
import numpy as np
# Testing matplotlib interactions with a simple plot
fig = plt.figure()
plt.plot(np.sin(np.linspace(0, 20, 100)));
# Always hide the toolbar
fig.canvas.toolbar_visible = False
# Put it back to its default
fig.canvas.toolbar_visible = 'fade-in-fade-out'
# Change the toolbar position
fig.canvas.toolbar_position = 'top'
# Hide the Figure name at the top of the figure
fig.canvas.header_visible = False
# Hide the footer
fig.canvas.footer_visible = False
# Disable the resizing feature
fig.canvas.resizable = False
# If true then scrolling while the mouse is over the canvas will not move the entire notebook
fig.canvas.capture_scroll = True
```
You can also call `display` on `fig.canvas` to display the interactive plot anywhere in the notebook.
```
fig.canvas.toolbar_visible = True
display(fig.canvas)
```
Or you can `display(fig)` to embed the current plot as a png
```
display(fig)
```
# 3D plotting
```
from mpl_toolkits.mplot3d import axes3d
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
# Grab some test data.
X, Y, Z = axes3d.get_test_data(0.05)
# Plot a basic wireframe.
ax.plot_wireframe(X, Y, Z, rstride=10, cstride=10)
plt.show()
```
# Subplots
```
# A more complex example from the matplotlib gallery
np.random.seed(0)
n_bins = 10
x = np.random.randn(1000, 3)
fig, axes = plt.subplots(nrows=2, ncols=2)
ax0, ax1, ax2, ax3 = axes.flatten()
colors = ['red', 'tan', 'lime']
ax0.hist(x, n_bins, density=1, histtype='bar', color=colors, label=colors)
ax0.legend(prop={'size': 10})
ax0.set_title('bars with legend')
ax1.hist(x, n_bins, density=1, histtype='bar', stacked=True)
ax1.set_title('stacked bar')
ax2.hist(x, n_bins, histtype='step', stacked=True, fill=False)
ax2.set_title('stack step (unfilled)')
# Make a multiple-histogram of data-sets with different length.
x_multi = [np.random.randn(n) for n in [10000, 5000, 2000]]
ax3.hist(x_multi, n_bins, histtype='bar')
ax3.set_title('different sample sizes')
fig.tight_layout()
plt.show()
fig.canvas.toolbar_position = 'right'
fig.canvas.toolbar_visible = False
```
# Interactions with other widgets and layouting
When you want to embed the figure into a layout of other widgets, you should call `plt.ioff()` before creating the figure, otherwise `plt.figure()` will trigger a display of the canvas automatically and outside of your layout.
### Without using `ioff`
Here we will end up with the figure being displayed twice. The button won't do anything; it is just placed as an example of layouting.
```
import ipywidgets as widgets
# ensure we are interactive mode
# this is default but if this notebook is executed out of order it may have been turned off
plt.ion()
fig = plt.figure()
ax = fig.gca()
ax.imshow(Z)
widgets.AppLayout(
center=fig.canvas,
footer=widgets.Button(icon='check'),
pane_heights=[0, 6, 1]
)
```
### Fixing the double display with `ioff`
If we make sure interactive mode is off when we create the figure then the figure will only display where we want it to.
There is ongoing work to allow usage of `ioff` as a context manager, see the [ipympl issue](https://github.com/matplotlib/ipympl/issues/220) and the [matplotlib issue](https://github.com/matplotlib/matplotlib/issues/17013)
```
plt.ioff()
fig = plt.figure()
plt.ion()
ax = fig.gca()
ax.imshow(Z)
widgets.AppLayout(
center=fig.canvas,
footer=widgets.Button(icon='check'),
pane_heights=[0, 6, 1]
)
```
# Interacting with other widgets
## Changing a line plot with a slide
```
# When using the `widget` backend from ipympl,
# fig.canvas is a proper Jupyter interactive widget, which can be embedded in
# an ipywidgets layout. See https://ipywidgets.readthedocs.io/en/stable/examples/Layout%20Templates.html
# One can bound figure attributes to other widget values.
from ipywidgets import AppLayout, FloatSlider
plt.ioff()
slider = FloatSlider(
orientation='horizontal',
description='Factor:',
value=1.0,
min=0.02,
max=2.0
)
slider.layout.margin = '0px 30% 0px 30%'
slider.layout.width = '40%'
fig = plt.figure()
fig.canvas.header_visible = False
fig.canvas.layout.min_height = '400px'
plt.title('Plotting: y=sin({} * x)'.format(slider.value))
x = np.linspace(0, 20, 500)
lines = plt.plot(x, np.sin(slider.value * x))
def update_lines(change):
plt.title('Plotting: y=sin({} * x)'.format(change.new))
lines[0].set_data(x, np.sin(change.new * x))
fig.canvas.draw()
fig.canvas.flush_events()
slider.observe(update_lines, names='value')
AppLayout(
center=fig.canvas,
footer=slider,
pane_heights=[0, 6, 1]
)
```
## Update image data in a performant manner
Two useful tricks to improve performance when updating an image displayed with matplotlib are to:
1. Use the `set_data` method instead of calling imshow
2. Precompute and then index the array
```
# precomputing all images
x = np.linspace(0,np.pi,200)
y = np.linspace(0,10,200)
X,Y = np.meshgrid(x,y)
parameter = np.linspace(-5,5)
example_image_stack = np.sin(X)[None,:,:]+np.exp(np.cos(Y[None,:,:]*parameter[:,None,None]))
plt.ioff()
fig = plt.figure()
plt.ion()
im = plt.imshow(example_image_stack[0])
def update(change):
im.set_data(example_image_stack[change['new']])
fig.canvas.draw_idle()
slider = widgets.IntSlider(value=0, min=0, max=len(parameter)-1)
slider.observe(update, names='value')
widgets.VBox([slider, fig.canvas])
```
### Debugging widget updates and matplotlib callbacks
If an error is raised in the `update` function then it will not always display in the notebook, which can make debugging difficult. The same issue is also true for matplotlib callbacks on user events such as mouse movement; for example, see this [issue](https://github.com/matplotlib/ipympl/issues/116). There are two ways to see the output:
1. In jupyterlab the output will show up in the Log Console (View > Show Log Console)
2. using `ipywidgets.Output`
Here is an example of using an `Output` to capture errors in the update function from the previous example. To induce errors we changed the slider limits so that out of bounds errors will occur:
From: `slider = widgets.IntSlider(value=0, min=0, max=len(parameter)-1)`
To: `slider = widgets.IntSlider(value=0, min=0, max=len(parameter)+10)`
If you move the slider all the way to the right, you should see errors from the Output widget:
```
plt.ioff()
fig = plt.figure()
plt.ion()
im = plt.imshow(example_image_stack[0])
out = widgets.Output()
@out.capture()
def update(change):
with out:
if change['name'] == 'value':
im.set_data(example_image_stack[change['new']])
            fig.canvas.draw_idle()
slider = widgets.IntSlider(value=0, min=0, max=len(parameter)+10)
slider.observe(update)
display(widgets.VBox([slider, fig.canvas]))
display(out)
```
# Interactive single compartment HH example
To run this interactive Jupyter Notebook, please click on the rocket icon 🚀 in the top panel. For more information, please see {ref}`how to use this documentation <userdocs:usage:jupyterbooks>`. Please uncomment the line below if you use the Google Colab. (It does not include these packages by default).
```
#%pip install pyneuroml neuromllite NEURON
import math
from neuroml import NeuroMLDocument
from neuroml import Cell
from neuroml import IonChannelHH
from neuroml import GateHHRates
from neuroml import BiophysicalProperties
from neuroml import MembraneProperties
from neuroml import ChannelDensity
from neuroml import HHRate
from neuroml import SpikeThresh
from neuroml import SpecificCapacitance
from neuroml import InitMembPotential
from neuroml import IntracellularProperties
from neuroml import IncludeType
from neuroml import Resistivity
from neuroml import Morphology, Segment, Point3DWithDiam
from neuroml import Network, Population
from neuroml import PulseGenerator, ExplicitInput
import numpy as np
from pyneuroml import pynml
from pyneuroml.lems import LEMSSimulation
```
## Declare the model
### Create ion channels
```
def create_na_channel():
"""Create the Na channel.
This will create the Na channel and save it to a file.
It will also validate this file.
returns: name of the created file
"""
na_channel = IonChannelHH(id="na_channel", notes="Sodium channel for HH cell", conductance="10pS", species="na")
gate_m = GateHHRates(id="na_m", instances="3", notes="m gate for na channel")
m_forward_rate = HHRate(type="HHExpLinearRate", rate="1per_ms", midpoint="-40mV", scale="10mV")
m_reverse_rate = HHRate(type="HHExpRate", rate="4per_ms", midpoint="-65mV", scale="-18mV")
gate_m.forward_rate = m_forward_rate
gate_m.reverse_rate = m_reverse_rate
na_channel.gate_hh_rates.append(gate_m)
gate_h = GateHHRates(id="na_h", instances="1", notes="h gate for na channel")
h_forward_rate = HHRate(type="HHExpRate", rate="0.07per_ms", midpoint="-65mV", scale="-20mV")
h_reverse_rate = HHRate(type="HHSigmoidRate", rate="1per_ms", midpoint="-35mV", scale="10mV")
gate_h.forward_rate = h_forward_rate
gate_h.reverse_rate = h_reverse_rate
na_channel.gate_hh_rates.append(gate_h)
na_channel_doc = NeuroMLDocument(id="na_channel", notes="Na channel for HH neuron")
na_channel_fn = "HH_example_na_channel.nml"
na_channel_doc.ion_channel_hhs.append(na_channel)
pynml.write_neuroml2_file(nml2_doc=na_channel_doc, nml2_file_name=na_channel_fn, validate=True)
return na_channel_fn
def create_k_channel():
"""Create the K channel
This will create the K channel and save it to a file.
It will also validate this file.
:returns: name of the K channel file
"""
k_channel = IonChannelHH(id="k_channel", notes="Potassium channel for HH cell", conductance="10pS", species="k")
gate_n = GateHHRates(id="k_n", instances="4", notes="n gate for k channel")
n_forward_rate = HHRate(type="HHExpLinearRate", rate="0.1per_ms", midpoint="-55mV", scale="10mV")
n_reverse_rate = HHRate(type="HHExpRate", rate="0.125per_ms", midpoint="-65mV", scale="-80mV")
gate_n.forward_rate = n_forward_rate
gate_n.reverse_rate = n_reverse_rate
k_channel.gate_hh_rates.append(gate_n)
k_channel_doc = NeuroMLDocument(id="k_channel", notes="k channel for HH neuron")
k_channel_fn = "HH_example_k_channel.nml"
k_channel_doc.ion_channel_hhs.append(k_channel)
pynml.write_neuroml2_file(nml2_doc=k_channel_doc, nml2_file_name=k_channel_fn, validate=True)
return k_channel_fn
def create_leak_channel():
"""Create a leak channel
This will create the leak channel and save it to a file.
It will also validate this file.
:returns: name of leak channel nml file
"""
leak_channel = IonChannelHH(id="leak_channel", conductance="10pS", notes="Leak conductance")
leak_channel_doc = NeuroMLDocument(id="leak_channel", notes="leak channel for HH neuron")
leak_channel_fn = "HH_example_leak_channel.nml"
leak_channel_doc.ion_channel_hhs.append(leak_channel)
pynml.write_neuroml2_file(nml2_doc=leak_channel_doc, nml2_file_name=leak_channel_fn, validate=True)
return leak_channel_fn
```
### Create cell
```
def create_cell():
"""Create the cell.
:returns: name of the cell nml file
"""
# Create the nml file and add the ion channels
hh_cell_doc = NeuroMLDocument(id="cell", notes="HH cell")
hh_cell_fn = "HH_example_cell.nml"
hh_cell_doc.includes.append(IncludeType(href=create_na_channel()))
hh_cell_doc.includes.append(IncludeType(href=create_k_channel()))
hh_cell_doc.includes.append(IncludeType(href=create_leak_channel()))
# Define a cell
hh_cell = Cell(id="hh_cell", notes="A single compartment HH cell")
# Define its biophysical properties
bio_prop = BiophysicalProperties(id="hh_b_prop")
# notes="Biophysical properties for HH cell")
# Membrane properties are a type of biophysical properties
mem_prop = MembraneProperties()
# Add membrane properties to the biophysical properties
bio_prop.membrane_properties = mem_prop
# Append to cell
hh_cell.biophysical_properties = bio_prop
# Channel density for Na channel
na_channel_density = ChannelDensity(id="na_channels", cond_density="120.0 mS_per_cm2", erev="50.0 mV", ion="na", ion_channel="na_channel")
mem_prop.channel_densities.append(na_channel_density)
# Channel density for k channel
k_channel_density = ChannelDensity(id="k_channels", cond_density="360 S_per_m2", erev="-77mV", ion="k", ion_channel="k_channel")
mem_prop.channel_densities.append(k_channel_density)
# Leak channel
leak_channel_density = ChannelDensity(id="leak_channels", cond_density="3.0 S_per_m2", erev="-54.3mV", ion="non_specific", ion_channel="leak_channel")
mem_prop.channel_densities.append(leak_channel_density)
# Other membrane properties
mem_prop.spike_threshes.append(SpikeThresh(value="-20mV"))
mem_prop.specific_capacitances.append(SpecificCapacitance(value="1.0 uF_per_cm2"))
mem_prop.init_memb_potentials.append(InitMembPotential(value="-65mV"))
intra_prop = IntracellularProperties()
intra_prop.resistivities.append(Resistivity(value="0.03 kohm_cm"))
# Add to biological properties
bio_prop.intracellular_properties = intra_prop
# Morphology
morph = Morphology(id="hh_cell_morph")
# notes="Simple morphology for the HH cell")
seg = Segment(id="0", name="soma", notes="Soma segment")
# We want a diameter such that area is 1000 micro meter^2
# surface area of a sphere is 4pi r^2 = 4pi diam^2
diam = math.sqrt(1000 / math.pi)
proximal = distal = Point3DWithDiam(x="0", y="0", z="0", diameter=str(diam))
seg.proximal = proximal
seg.distal = distal
morph.segments.append(seg)
hh_cell.morphology = morph
hh_cell_doc.cells.append(hh_cell)
pynml.write_neuroml2_file(nml2_doc=hh_cell_doc, nml2_file_name=hh_cell_fn, validate=True)
return hh_cell_fn
```
### Create a network
```
def create_network():
"""Create the network
:returns: name of network nml file
"""
net_doc = NeuroMLDocument(id="network",
notes="HH cell network")
net_doc_fn = "HH_example_net.nml"
net_doc.includes.append(IncludeType(href=create_cell()))
# Create a population: convenient to create many cells of the same type
pop = Population(id="pop0", notes="A population for our cell", component="hh_cell", size=1)
# Input
pulsegen = PulseGenerator(id="pg", notes="Simple pulse generator", delay="100ms", duration="100ms", amplitude="0.08nA")
exp_input = ExplicitInput(target="pop0[0]", input="pg")
net = Network(id="single_hh_cell_network", note="A network with a single population")
net_doc.pulse_generators.append(pulsegen)
net.explicit_inputs.append(exp_input)
net.populations.append(pop)
net_doc.networks.append(net)
pynml.write_neuroml2_file(nml2_doc=net_doc, nml2_file_name=net_doc_fn, validate=True)
return net_doc_fn
```
## Plot the data we record
```
def plot_data(sim_id):
"""Plot the sim data.
Load the data from the file and plot the graph for the membrane potential
using the pynml generate_plot utility function.
:sim_id: ID of simulaton
"""
data_array = np.loadtxt(sim_id + ".dat")
pynml.generate_plot([data_array[:, 0]], [data_array[:, 1]], "Membrane potential", show_plot_already=False, save_figure_to=sim_id + "-v.png", xaxis="time (s)", yaxis="membrane potential (V)")
pynml.generate_plot([data_array[:, 0]], [data_array[:, 2]], "channel current", show_plot_already=False, save_figure_to=sim_id + "-i.png", xaxis="time (s)", yaxis="channel current (A)")
pynml.generate_plot([data_array[:, 0], data_array[:, 0]], [data_array[:, 3], data_array[:, 4]], "current density", labels=["Na", "K"], show_plot_already=False, save_figure_to=sim_id + "-iden.png", xaxis="time (s)", yaxis="current density (A_per_m2)")
```
## Create and run the simulation
Create the simulation, run it, record data, and plot the recorded information.
```
def main():
"""Main function
Include the NeuroML model into a LEMS simulation file, run it, plot some
data.
"""
# Simulation bits
sim_id = "HH_single_compartment_example_sim"
simulation = LEMSSimulation(sim_id=sim_id, duration=300, dt=0.01, simulation_seed=123)
# Include the NeuroML model file
simulation.include_neuroml2_file(create_network())
# Assign target for the simulation
simulation.assign_simulation_target("single_hh_cell_network")
# Recording information from the simulation
simulation.create_output_file(id="output0", file_name=sim_id + ".dat")
simulation.add_column_to_output_file("output0", column_id="pop0[0]/v", quantity="pop0[0]/v")
simulation.add_column_to_output_file("output0", column_id="pop0[0]/iChannels", quantity="pop0[0]/iChannels")
simulation.add_column_to_output_file("output0", column_id="pop0[0]/na/iDensity", quantity="pop0[0]/hh_b_prop/membraneProperties/na_channels/iDensity/")
simulation.add_column_to_output_file("output0", column_id="pop0[0]/k/iDensity", quantity="pop0[0]/hh_b_prop/membraneProperties/k_channels/iDensity/")
# Save LEMS simulation to file
sim_file = simulation.save_to_file()
# Run the simulation using the default jNeuroML simulator
pynml.run_lems_with_jneuroml(sim_file, max_memory="2G", nogui=True, plot=False)
# Plot the data
plot_data(sim_id)
if __name__ == "__main__":
main()
```
# Hyperparameter tuning
In the previous section, we did not discuss the parameters of random forest
and gradient-boosting. However, there are a couple of things to keep in mind
when setting these.
This notebook gives crucial information regarding how to set the
hyperparameters of both random forest and gradient boosting decision tree
models.
<div class="admonition caution alert alert-warning">
<p class="first admonition-title" style="font-weight: bold;">Caution!</p>
<p class="last">For the sake of clarity, no cross-validation will be used to estimate the
testing error. We are only showing the effect of the parameters
on the validation set of what should be the inner cross-validation.</p>
</div>
## Random forest
The main parameter to tune for random forest is the `n_estimators` parameter.
In general, the more trees in the forest, the better the generalization
performance will be. However, it will slow down the fitting and prediction
time. The goal is to balance computing time and generalization performance when setting the number of estimators, especially when putting such a learner in production.
The `max_depth` parameter could also be tuned. Sometimes, there is no need
to have fully grown trees. However, be aware that with random forest, trees
are generally deep since we are seeking to overfit the learners on the
bootstrap samples because this will be mitigated by combining them.
Assembling underfitted trees (i.e. shallow trees) might also lead to an
underfitted forest.
```
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
data, target = fetch_california_housing(return_X_y=True, as_frame=True)
target *= 100 # rescale the target in k$
data_train, data_test, target_train, target_test = train_test_split(
data, target, random_state=0)
import pandas as pd
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestRegressor
param_grid = {
"n_estimators": [10, 20, 30],
"max_depth": [3, 5, None],
}
grid_search = GridSearchCV(
RandomForestRegressor(n_jobs=2), param_grid=param_grid,
scoring="neg_mean_absolute_error", n_jobs=2,
)
grid_search.fit(data_train, target_train)
columns = [f"param_{name}" for name in param_grid.keys()]
columns += ["mean_test_score", "rank_test_score"]
cv_results = pd.DataFrame(grid_search.cv_results_)
cv_results["mean_test_score"] = -cv_results["mean_test_score"]
cv_results[columns].sort_values(by="rank_test_score")
```
We can observe that in our grid-search, the largest `max_depth` together
with the largest `n_estimators` led to the best generalization performance.
## Gradient-boosting decision trees
For gradient-boosting, parameters are coupled, so we cannot set the
parameters one after the other anymore. The important parameters are
`n_estimators`, `max_depth`, and `learning_rate`.
Let's first discuss the `max_depth` parameter.
We saw in the section on gradient-boosting that the algorithm fits the error
of the previous tree in the ensemble. Thus, fitting fully grown trees will
be detrimental.
Indeed, the first tree of the ensemble would perfectly fit (overfit) the data
and thus no subsequent tree would be required, since there would be no
residuals.
Therefore, the tree used in gradient-boosting should have a low depth,
typically between 3 and 8 levels. Having very weak learners at each step helps to reduce overfitting.
With this consideration in mind, the deeper the trees, the faster the
residuals will be corrected and fewer learners are required. Therefore,
`n_estimators` should be increased if `max_depth` is lower.
Finally, we have overlooked the impact of the `learning_rate` parameter
until now. When fitting the residuals, we would like the tree
to try to correct all possible errors or only a fraction of them.
The learning-rate allows you to control this behaviour.
A small learning-rate value would only correct the residuals of very few
samples. If a large learning-rate is set (e.g., 1), we would fit the
residuals of all samples. So, with a very low learning-rate, we will need
more estimators to correct the overall error. However, a too large
learning-rate tends to obtain an overfitted ensemble,
similar to having a too large tree depth.
```
from sklearn.ensemble import GradientBoostingRegressor
param_grid = {
"n_estimators": [10, 30, 50],
"max_depth": [3, 5, None],
"learning_rate": [0.1, 1],
}
grid_search = GridSearchCV(
GradientBoostingRegressor(), param_grid=param_grid,
scoring="neg_mean_absolute_error", n_jobs=2
)
grid_search.fit(data_train, target_train)
columns = [f"param_{name}" for name in param_grid.keys()]
columns += ["mean_test_score", "rank_test_score"]
cv_results = pd.DataFrame(grid_search.cv_results_)
cv_results["mean_test_score"] = -cv_results["mean_test_score"]
cv_results[columns].sort_values(by="rank_test_score")
```
<div class="admonition caution alert alert-warning">
<p class="first admonition-title" style="font-weight: bold;">Caution!</p>
<p class="last">Here, we tune the <tt class="docutils literal">n_estimators</tt> but be aware that using early-stopping as
in the previous exercise will be better.</p>
</div>
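For reference, a minimal sketch of such early stopping with scikit-learn's built-in `n_iter_no_change` / `validation_fraction` options (illustrative only, not part of the grid search above):
```
from sklearn.ensemble import GradientBoostingRegressor

# Give the model a large budget of trees and stop when the score on an
# internal 10% validation split stops improving for 5 consecutive iterations.
gbdt = GradientBoostingRegressor(
    n_estimators=1000, learning_rate=0.1, max_depth=5,
    n_iter_no_change=5, validation_fraction=0.1, random_state=0)
gbdt.fit(data_train, target_train)
gbdt.n_estimators_  # number of trees actually fitted before stopping
```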
```
# !pip install ray[tune]
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
from sklearn.metrics import mean_squared_error
from hyperopt import hp
from ray import tune
from hyperopt import fmin, tpe, hp,Trials, space_eval
import scipy.stats
df = pd.read_csv("../../Data/Raw/flightLogData.csv")
plt.figure(figsize=(20, 10))
plt.plot(df.Time, df['Altitude'], linewidth=2, color="r", label="Altitude")
plt.plot(df.Time, df['Vertical_velocity'], linewidth=2, color="y", label="Vertical_velocity")
plt.plot(df.Time, df['Vertical_acceleration'], linewidth=2, color="b", label="Vertical_acceleration")
plt.legend()
plt.show()
temp_df = df[['Altitude', "Vertical_velocity", "Vertical_acceleration"]]
noise = np.random.normal(2, 5, temp_df.shape)
noisy_df = temp_df + noise
noisy_df['Time'] = df['Time']
plt.figure(figsize=(20, 10))
plt.plot(noisy_df.Time, noisy_df['Altitude'], linewidth=2, color="r", label="Altitude")
plt.plot(noisy_df.Time, noisy_df['Vertical_velocity'], linewidth=2, color="y", label="Vertical_velocity")
plt.plot(noisy_df.Time, noisy_df['Vertical_acceleration'], linewidth=2, color="b", label="Vertical_acceleration")
plt.legend()
plt.show()
```
## Altitude
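For reference, the code below implements the standard Kalman filter predict/update cycle on the state $[\text{altitude},\ \text{velocity},\ \text{acceleration}]^T$, with $A$ the constant-acceleration transition matrix, $H$ selecting the measured altitude and vertical acceleration, and $Q$, $R$ the process and measurement noise covariances being tuned:
\begin{align}
\hat{x}^-_k &= A\hat{x}_{k-1}\\
P^-_k &= AP_{k-1}A^T + Q\\
K_k &= P^-_k H^T\left(HP^-_kH^T + R\right)^{-1}\\
\hat{x}_k &= \hat{x}^-_k + K_k\left(z_k - H\hat{x}^-_k\right)\\
P_k &= (I - K_kH)P^-_k
\end{align}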
```
q = 0.001
A = np.array([[1.0, 0.1, 0.005], [0, 1.0, 0.1], [0, 0, 1]])
H = np.array([[1.0, 0.0, 0.0],[ 0.0, 0.0, 1.0]])
P = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
# R = np.array([[0.5, 0.0], [0.0, 0.0012]])
# Q = np.array([[q, 0.0, 0.0], [0.0, q, 0.0], [0.0, 0.0, q]])
I = np.identity(3)
x_hat = np.array([[0.0], [0.0], [0.0]])
Y = np.array([[0.0], [0.0]])
def kalman_update(param):
r1, r2, q1 = param['r1'], param['r2'], param['q1']
R = np.array([[r1, 0.0], [0.0, r2]])
Q = np.array([[q1, 0.0, 0.0], [0.0, q1, 0.0], [0.0, 0.0, q1]])
A = np.array([[1.0, 0.05, 0.00125], [0, 1.0, 0.05], [0, 0, 1]])
H = np.array([[1.0, 0.0, 0.0],[ 0.0, 0.0, 1.0]])
P = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
I = np.identity(3)
x_hat = np.array([[0.0], [0.0], [0.0]])
Y = np.array([[0.0], [0.0]])
new_altitude = []
new_acceleration = []
new_velocity = []
for altitude, az in zip(noisy_df['Altitude'], noisy_df['Vertical_acceleration']):
Z = np.array([[altitude], [az]])
x_hat_minus = np.dot(A, x_hat)
P_minus = np.dot(np.dot(A, P), np.transpose(A)) + Q
K = np.dot(np.dot(P_minus, np.transpose(H)), np.linalg.inv((np.dot(np.dot(H, P_minus), np.transpose(H)) + R)))
Y = Z - np.dot(H, x_hat_minus)
x_hat = x_hat_minus + np.dot(K, Y)
P = np.dot((I - np.dot(K, H)), P_minus)
Y = Z - np.dot(H, x_hat_minus)
new_altitude.append(float(x_hat[0]))
new_velocity.append(float(x_hat[1]))
new_acceleration.append(float(x_hat[2]))
return new_altitude
def objective_function(param):
r1, r2, q1 = param['r1'], param['r2'], param['q1']
R = np.array([[r1, 0.0], [0.0, r2]])
Q = np.array([[q1, 0.0, 0.0], [0.0, q1, 0.0], [0.0, 0.0, q1]])
A = np.array([[1.0, 0.05, 0.00125], [0, 1.0, 0.05], [0, 0, 1]])
H = np.array([[1.0, 0.0, 0.0],[ 0.0, 0.0, 1.0]])
P = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
I = np.identity(3)
x_hat = np.array([[0.0], [0.0], [0.0]])
Y = np.array([[0.0], [0.0]])
new_altitude = []
new_acceleration = []
new_velocity = []
for altitude, az in zip(noisy_df['Altitude'], noisy_df['Vertical_acceleration']):
Z = np.array([[altitude], [az]])
x_hat_minus = np.dot(A, x_hat)
P_minus = np.dot(np.dot(A, P), np.transpose(A)) + Q
K = np.dot(np.dot(P_minus, np.transpose(H)), np.linalg.inv((np.dot(np.dot(H, P_minus), np.transpose(H)) + R)))
Y = Z - np.dot(H, x_hat_minus)
x_hat = x_hat_minus + np.dot(K, Y)
P = np.dot((I - np.dot(K, H)), P_minus)
Y = Z - np.dot(H, x_hat_minus)
new_altitude.append(float(x_hat[0]))
new_velocity.append(float(x_hat[1]))
new_acceleration.append(float(x_hat[2]))
return mean_squared_error(df['Altitude'], new_altitude)
# space = {
# "r1": hp.choice("r1", np.arange(0.01, 90, 0.005)),
# "r2": hp.choice("r2", np.arange(0.01, 90, 0.005)),
# "q1": hp.choice("q1", np.arange(0.0001, 0.0009, 0.0001))
# }
len(np.arange(0.00001, 0.09, 0.00001))
space = {
"r1": hp.choice("r1", np.arange(0.001, 90, 0.001)),
"r2": hp.choice("r2", np.arange(0.001, 90, 0.001)),
"q1": hp.choice("q1", np.arange(0.00001, 0.09, 0.00001))
}
# Initialize trials object
trials = Trials()
best = fmin(fn=objective_function, space = space, algo=tpe.suggest, max_evals=100, trials=trials )
print(best)
# best contains indices into the search space chosen by hyperopt
print(space_eval(space, best))
# space_eval maps those indices back to the actual r1, r2, q1 values
d1 = space_eval(space, best)
objective_function(d1)
%%timeit
objective_function({'q1': 0.06626, 'r1': 0.25, 'r2': 0.75})
objective_function({'q1': 0.06626, 'r1': 0.25, 'r2': 0.75})
y = kalman_update(d1)
current = kalman_update({'q1': 0.06626, 'r1': 0.25, 'r2': 0.75})
plt.figure(figsize=(20, 10))
plt.plot(noisy_df.Time, df['Altitude'], linewidth=2, color="r", label="Actual")
plt.plot(noisy_df.Time, current, linewidth=2, color="g", label="ESP32")
plt.plot(noisy_df.Time, noisy_df['Altitude'], linewidth=2, color="y", label="Noisy")
plt.plot(noisy_df.Time, y, linewidth=2, color="b", label="Predicted")
plt.legend()
plt.show()
def kalman_update_return_velocity(param):
r1, r2, q1 = param['r1'], param['r2'], param['q1']
R = np.array([[r1, 0.0], [0.0, r2]])
Q = np.array([[q1, 0.0, 0.0], [0.0, q1, 0.0], [0.0, 0.0, q1]])
A = np.array([[1.0, 0.05, 0.00125], [0, 1.0, 0.05], [0, 0, 1]])
H = np.array([[1.0, 0.0, 0.0],[ 0.0, 0.0, 1.0]])
P = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
I = np.identity(3)
x_hat = np.array([[0.0], [0.0], [0.0]])
Y = np.array([[0.0], [0.0]])
new_altitude = []
new_acceleration = []
new_velocity = []
for altitude, az in zip(noisy_df['Altitude'], noisy_df['Vertical_acceleration']):
Z = np.array([[altitude], [az]])
x_hat_minus = np.dot(A, x_hat)
P_minus = np.dot(np.dot(A, P), np.transpose(A)) + Q
K = np.dot(np.dot(P_minus, np.transpose(H)), np.linalg.inv((np.dot(np.dot(H, P_minus), np.transpose(H)) + R)))
Y = Z - np.dot(H, x_hat_minus)
x_hat = x_hat_minus + np.dot(K, Y)
P = np.dot((I - np.dot(K, H)), P_minus)
Y = Z - np.dot(H, x_hat_minus)
new_altitude.append(float(x_hat[0]))
new_velocity.append(float(x_hat[1]))
new_acceleration.append(float(x_hat[2]))
return new_velocity
def objective_function(param):
r1, r2, q1 = param['r1'], param['r2'], param['q1']
R = np.array([[r1, 0.0], [0.0, r2]])
Q = np.array([[q1, 0.0, 0.0], [0.0, q1, 0.0], [0.0, 0.0, q1]])
A = np.array([[1.0, 0.05, 0.00125], [0, 1.0, 0.05], [0, 0, 1]])
H = np.array([[1.0, 0.0, 0.0],[ 0.0, 0.0, 1.0]])
P = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
I = np.identity(3)
x_hat = np.array([[0.0], [0.0], [0.0]])
Y = np.array([[0.0], [0.0]])
new_altitude = []
new_acceleration = []
new_velocity = []
for altitude, az in zip(noisy_df['Altitude'], noisy_df['Vertical_acceleration']):
Z = np.array([[altitude], [az]])
x_hat_minus = np.dot(A, x_hat)
P_minus = np.dot(np.dot(A, P), np.transpose(A)) + Q
K = np.dot(np.dot(P_minus, np.transpose(H)), np.linalg.inv((np.dot(np.dot(H, P_minus), np.transpose(H)) + R)))
Y = Z - np.dot(H, x_hat_minus)
x_hat = x_hat_minus + np.dot(K, Y)
P = np.dot((I - np.dot(K, H)), P_minus)
Y = Z - np.dot(H, x_hat_minus)
new_altitude.append(float(x_hat[0]))
new_velocity.append(float(x_hat[1]))
new_acceleration.append(float(x_hat[2]))
return mean_squared_error(df['Vertical_velocity'], new_velocity)
space = {
"r1": hp.choice("r1", np.arange(0.001, 90, 0.001)),
"r2": hp.choice("r2", np.arange(0.001, 90, 0.001)),
"q1": hp.choice("q1", np.arange(0.00001, 0.09, 0.00001))
}
# Initialize trials object
trials = Trials()
best = fmin(fn=objective_function, space = space, algo=tpe.suggest, max_evals=100, trials=trials )
print(best)
print(space_eval(space, best))
d2 = space_eval(space, best)
objective_function(d2)
y = kalman_update_return_velocity(d2)
current = kalman_update_return_velocity({'q1': 0.0013, 'r1': 0.25, 'r2': 0.65})
previous = kalman_update_return_velocity({'q1': 0.08519, 'r1': 4.719, 'r2': 56.443})
plt.figure(figsize=(20, 10))
plt.plot(noisy_df.Time, df['Vertical_velocity'], linewidth=2, color="r", label="Actual")
plt.plot(noisy_df.Time, current, linewidth=2, color="g", label="ESP32")
plt.plot(noisy_df.Time, previous, linewidth=2, color="c", label="With previous data")
plt.plot(noisy_df.Time, noisy_df['Vertical_velocity'], linewidth=2, color="y", label="Noisy")
plt.plot(noisy_df.Time, y, linewidth=2, color="b", label="Predicted")
plt.legend()
plt.show()
```
# Selected Economic Characteristics: Employment Status from the American Community Survey
**[Work in progress]**
This notebook downloads [selected economic characteristics (DP03)](https://data.census.gov/cedsci/table?tid=ACSDP5Y2018.DP03) from the American Community Survey 2018 5-Year Data.
Data source: [American Community Survey 5-Year Data 2018](https://www.census.gov/data/developers/data-sets/acs-5year.html)
Authors: Peter Rose ([email protected]), Ilya Zaslavsky ([email protected])
```
import os
import pandas as pd
from pathlib import Path
import time
pd.options.display.max_rows = None # display all rows
pd.options.display.max_columns = None # display all columsns
NEO4J_IMPORT = Path(os.getenv('NEO4J_IMPORT'))
print(NEO4J_IMPORT)
```
## Download selected variables
* [Selected economic characteristics for US](https://data.census.gov/cedsci/table?tid=ACSDP5Y2018.DP03)
* [List of variables as HTML](https://api.census.gov/data/2018/acs/acs5/profile/groups/DP03.html) or [JSON](https://api.census.gov/data/2018/acs/acs5/profile/groups/DP03/)
* [Description of variables](https://www2.census.gov/programs-surveys/acs/tech_docs/subject_definitions/2018_ACSSubjectDefinitions.pdf)
* [Example URLs for API](https://api.census.gov/data/2018/acs/acs5/profile/examples.html)
### Specify variables from DP03 group and assign property names
Names must follow the [Neo4j property naming conventions](https://neo4j.com/docs/getting-started/current/graphdb-concepts/#graphdb-naming-rules-and-recommendations).
```
variables = {# EMPLOYMENT STATUS
'DP03_0001E': 'population16YearsAndOver',
'DP03_0002E': 'population16YearsAndOverInLaborForce',
'DP03_0002PE': 'population16YearsAndOverInLaborForcePct',
'DP03_0003E': 'population16YearsAndOverInCivilianLaborForce',
'DP03_0003PE': 'population16YearsAndOverInCivilianLaborForcePct',
'DP03_0006E': 'population16YearsAndOverInArmedForces',
'DP03_0006PE': 'population16YearsAndOverInArmedForcesPct',
'DP03_0007E': 'population16YearsAndOverNotInLaborForce',
'DP03_0007PE': 'population16YearsAndOverNotInLaborForcePct'
#'DP03_0014E': 'ownChildrenOfTheHouseholderUnder6Years',
#'DP03_0015E': 'ownChildrenOfTheHouseholderUnder6YearsAllParentsInLaborForce',
#'DP03_0016E': 'ownChildrenOfTheHouseholder6To17Years',
#'DP03_0017E': 'ownChildrenOfTheHouseholder6To17YearsAllParentsInLaborForce',
}
fields = ",".join(variables.keys())
for v in variables.values():
print('e.' + v + ' = toInteger(row.' + v + '),')
print(len(variables.keys()))
```
## Download county-level data using US Census API
```
url_county = f'https://api.census.gov/data/2018/acs/acs5/profile?get={fields}&for=county:*'
df = pd.read_json(url_county, dtype='str')
df.fillna('', inplace=True)
df.head()
```
##### Add column names
```
df = df[1:].copy() # skip first row of labels
columns = list(variables.values())
columns.append('stateFips')
columns.append('countyFips')
df.columns = columns
```
Remove Puerto Rico (stateFips = 72) to limit data to US States
TODO handle data for Puerto Rico (GeoNames represents Puerto Rico as a country)
```
df.query("stateFips != '72'", inplace=True)
```
Save list of state fips (required later to get tract data by state)
```
stateFips = list(df['stateFips'].unique())
stateFips.sort()
print(stateFips)
df.head()
# Example data
df[(df['stateFips'] == '06') & (df['countyFips'] == '073')]
df['source'] = 'American Community Survey 5 year'
df['aggregationLevel'] = 'Admin2'
```
### Save data
```
df.to_csv(NEO4J_IMPORT / "03a-USCensusDP03EmploymentAdmin2.csv", index=False)
```
## Download zip-level data using US Census API
```
url_zip = f'https://api.census.gov/data/2018/acs/acs5/profile?get={fields}&for=zip%20code%20tabulation%20area:*'
df = pd.read_json(url_zip, dtype='str')
df.fillna('', inplace=True)
df.head()
```
##### Add column names
```
df = df[1:].copy() # skip first row
columns = list(variables.values())
columns.append('stateFips')
columns.append('postalCode')
df.columns = columns
df.head()
# Example data
df.query("postalCode == '90210'")
df['source'] = 'American Community Survey 5 year'
df['aggregationLevel'] = 'PostalCode'
```
### Save data
```
df.to_csv(NEO4J_IMPORT / "03a-USCensusDP03EmploymentZip.csv", index=False)
```
## Download tract-level data using US Census API
Tract-level data are only available by state, so we need to loop over all states.
```
def get_tract_data(state):
url_tract = f'https://api.census.gov/data/2018/acs/acs5/profile?get={fields}&for=tract:*&in=state:{state}'
df = pd.read_json(url_tract, dtype='str')
time.sleep(1)
# skip first row of labels
df = df[1:].copy()
# Add column names
columns = list(variables.values())
columns.append('stateFips')
columns.append('countyFips')
columns.append('tract')
df.columns = columns
return df
df = pd.concat((get_tract_data(state) for state in stateFips))
df.fillna('', inplace=True)
df['tract'] = df['stateFips'] + df['countyFips'] + df['tract']
df['source'] = 'American Community Survey 5 year'
df['aggregationLevel'] = 'Tract'
# Example data for San Diego County
df[(df['stateFips'] == '06') & (df['countyFips'] == '073')].head()
```
### Save data
```
df.to_csv(NEO4J_IMPORT / "03a-USCensusDP03EmploymentTract.csv", index=False)
df.shape
```
<a href="https://colab.research.google.com/github/Tessellate-Imaging/monk_v1/blob/master/study_roadmaps/3_image_processing_deep_learning_roadmap/3_deep_learning_advanced/1_Blocks%20in%20Deep%20Learning%20Networks/8)%20Resnet%20V2%20Bottleneck%20Block%20(Type%20-%202).ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Goals
### 1. Learn to implement Resnet V2 Bottleneck Block (Type - 1) using monk
- Monk's Keras
- Monk's Pytorch
- Monk's Mxnet
### 2. Use network Monk's debugger to create complex blocks
### 3. Understand how syntactically different it is to implement the same using
- Traditional Keras
- Traditional Pytorch
- Traditional Mxnet
# Resnet V2 Bottleneck Block - Type 1
- Note: The block structure can have variations too, this is just an example
```
from IPython.display import Image
Image(filename='imgs/resnet_v2_bottleneck_without_downsample.png')
```
# Table of contents
[1. Install Monk](#1)
[2. Block basic Information](#2)
- [2.1) Visual structure](#2-1)
- [2.2) Layers in Branches](#2-2)
[3) Creating Block using monk visual debugger](#3)
- [3.1) Create the first branch](#3-1)
- [3.2) Create the second branch](#3-2)
- [3.3) Merge the branches](#3-3)
- [3.4) Debug the merged network](#3-4)
- [3.5) Compile the network](#3-5)
- [3.6) Visualize the network](#3-6)
- [3.7) Run data through the network](#3-7)
[4) Creating Block Using MONK one line API call](#4)
- [Mxnet Backend](#4-1)
- [Pytorch Backend](#4-2)
- [Keras Backend](#4-3)
[5) Appendix](#5)
- [Study Material](#5-1)
- [Creating block using traditional Mxnet](#5-2)
- [Creating block using traditional Pytorch](#5-3)
- [Creating block using traditional Keras](#5-4)
<a id='1'></a>
# Install Monk
## Using pip (Recommended)
- colab (gpu)
    - All backends: `pip install -U monk-colab`
- kaggle (gpu)
    - All backends: `pip install -U monk-kaggle`
- cuda 10.2
    - All backends: `pip install -U monk-cuda102`
    - Gluon backend: `pip install -U monk-gluon-cuda102`
    - Pytorch backend: `pip install -U monk-pytorch-cuda102`
    - Keras backend: `pip install -U monk-keras-cuda102`
- cuda 10.1
    - All backends: `pip install -U monk-cuda101`
    - Gluon backend: `pip install -U monk-gluon-cuda101`
    - Pytorch backend: `pip install -U monk-pytorch-cuda101`
    - Keras backend: `pip install -U monk-keras-cuda101`
- cuda 10.0
    - All backends: `pip install -U monk-cuda100`
    - Gluon backend: `pip install -U monk-gluon-cuda100`
    - Pytorch backend: `pip install -U monk-pytorch-cuda100`
    - Keras backend: `pip install -U monk-keras-cuda100`
- cuda 9.2
    - All backends: `pip install -U monk-cuda92`
    - Gluon backend: `pip install -U monk-gluon-cuda92`
    - Pytorch backend: `pip install -U monk-pytorch-cuda92`
    - Keras backend: `pip install -U monk-keras-cuda92`
- cuda 9.0
    - All backends: `pip install -U monk-cuda90`
    - Gluon backend: `pip install -U monk-gluon-cuda90`
    - Pytorch backend: `pip install -U monk-pytorch-cuda90`
    - Keras backend: `pip install -U monk-keras-cuda90`
- cpu
    - All backends: `pip install -U monk-cpu`
    - Gluon backend: `pip install -U monk-gluon-cpu`
    - Pytorch backend: `pip install -U monk-pytorch-cpu`
    - Keras backend: `pip install -U monk-keras-cpu`
## Install Monk Manually (Not recommended)
### Step 1: Clone the library
- git clone https://github.com/Tessellate-Imaging/monk_v1.git
### Step 2: Install requirements
- Linux
- Cuda 9.0
- `cd monk_v1/installation/Linux && pip install -r requirements_cu90.txt`
- Cuda 9.2
- `cd monk_v1/installation/Linux && pip install -r requirements_cu92.txt`
- Cuda 10.0
- `cd monk_v1/installation/Linux && pip install -r requirements_cu100.txt`
- Cuda 10.1
- `cd monk_v1/installation/Linux && pip install -r requirements_cu101.txt`
- Cuda 10.2
- `cd monk_v1/installation/Linux && pip install -r requirements_cu102.txt`
- CPU (Non gpu system)
- `cd monk_v1/installation/Linux && pip install -r requirements_cpu.txt`
- Windows
- Cuda 9.0 (Experimental support)
- `cd monk_v1/installation/Windows && pip install -r requirements_cu90.txt`
- Cuda 9.2 (Experimental support)
- `cd monk_v1/installation/Windows && pip install -r requirements_cu92.txt`
- Cuda 10.0 (Experimental support)
- `cd monk_v1/installation/Windows && pip install -r requirements_cu100.txt`
- Cuda 10.1 (Experimental support)
- `cd monk_v1/installation/Windows && pip install -r requirements_cu101.txt`
- Cuda 10.2 (Experimental support)
- `cd monk_v1/installation/Windows && pip install -r requirements_cu102.txt`
- CPU (Non gpu system)
- `cd monk_v1/installation/Windows && pip install -r requirements_cpu.txt`
- Mac
- CPU (Non gpu system)
- `cd monk_v1/installation/Mac && pip install -r requirements_cpu.txt`
- Misc
- Colab (GPU)
- `cd monk_v1/installation/Misc && pip install -r requirements_colab.txt`
- Kaggle (GPU)
- `cd monk_v1/installation/Misc && pip install -r requirements_kaggle.txt`
### Step 3: Add to system path (Required for every terminal or kernel run)
- `import sys`
- `sys.path.append("monk_v1/");`
# Imports
```
# Common
import numpy as np
import math
import netron
from collections import OrderedDict
from functools import partial
#Using mxnet-gluon backend
# When installed using pip
from monk.gluon_prototype import prototype
# When installed manually (Uncomment the following)
#import os
#import sys
#sys.path.append("monk_v1/");
#sys.path.append("monk_v1/monk/");
#from monk.gluon_prototype import prototype
```
<a id='2'></a>
# Block Information
<a id='2-1'></a>
## Visual structure
```
from IPython.display import Image
Image(filename='imgs/resnet_v2_bottleneck_without_downsample.png')
```
<a id='2-2'></a>
## Layers in Branches
- Number of branches: 2
- Common Elements
- batchnorm -> relu
- Branch 1
- identity
- Branch 2
- conv_1x1 -> batchnorm -> relu -> conv_3x3 -> batchnorm -> relu -> conv1x1
- Branches merged using
- Elementwise addition
(See Appendix to read blogs on resnets)
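In equation form (a compact restatement of the structure above, where $\mathrm{BN}$ is batch normalization and $W^{(i)}$ are the three convolutions of branch 2):
$$h = \mathrm{ReLU}(\mathrm{BN}(x)), \qquad y = h + W^{(3)}_{1\times1}\,\mathrm{ReLU}\!\left(\mathrm{BN}\!\left(W^{(2)}_{3\times3}\,\mathrm{ReLU}\!\left(\mathrm{BN}\!\left(W^{(1)}_{1\times1}\,h\right)\right)\right)\right)$$
The identity branch contributes $h$, the bottleneck branch contributes the second term, and the two are merged by elementwise addition.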
<a id='3'></a>
# Creating Block using monk debugger
```
# Imports and setup a project
# To use pytorch backend - replace gluon_prototype with pytorch_prototype
# To use keras backend - replace gluon_prototype with keras_prototype
from monk.gluon_prototype import prototype
# Create a sample project
gtf = prototype(verbose=1);
gtf.Prototype("sample-project-1", "sample-experiment-1");
```
<a id='3-1'></a>
## Create the first branch
```
def first_branch():
network = [];
network.append(gtf.identity());
return network;
# Debug the branch
branch_1 = first_branch()
network = [];
network.append(branch_1);
gtf.debug_custom_model_design(network);
```
<a id='3-2'></a>
## Create the second branch
```
def second_branch(output_channels=128, stride=1):
network = [];
# Bottleneck convolution
network.append(gtf.convolution(output_channels=output_channels//4, kernel_size=1, stride=stride));
network.append(gtf.batch_normalization());
network.append(gtf.relu());
#Bottleneck convolution
network.append(gtf.convolution(output_channels=output_channels//4, kernel_size=1, stride=stride));
network.append(gtf.batch_normalization());
network.append(gtf.relu());
#Normal convolution
network.append(gtf.convolution(output_channels=output_channels, kernel_size=1, stride=1));
return network;
# Debug the branch
branch_2 = second_branch(output_channels=128, stride=1)
network = [];
network.append(branch_2);
gtf.debug_custom_model_design(network);
```
<a id='3-3'></a>
## Merge the branches
```
def final_block(output_channels=128, stride=1):
network = [];
#Common Elements
network.append(gtf.batch_normalization());
network.append(gtf.relu());
#Create subnetwork and add branches
subnetwork = [];
branch_1 = first_branch()
branch_2 = second_branch(output_channels=output_channels, stride=stride)
subnetwork.append(branch_1);
subnetwork.append(branch_2);
# Add merging element
subnetwork.append(gtf.add());
# Add the subnetwork
network.append(subnetwork)
return network;
```
<a id='3-4'></a>
## Debug the merged network
```
final = final_block(output_channels=64, stride=1)
network = [];
network.append(final);
gtf.debug_custom_model_design(network);
```
<a id='3-5'></a>
## Compile the network
```
gtf.Compile_Network(network, data_shape=(64, 224, 224), use_gpu=False);
```
<a id='3-6'></a>
## Run data through the network
```
import mxnet as mx
x = np.zeros((1, 64, 224, 224));
x = mx.nd.array(x);
y = gtf.system_dict["local"]["model"].forward(x);
print(x.shape, y.shape)
```
<a id='3-7'></a>
## Visualize network using netron
```
gtf.Visualize_With_Netron(data_shape=(64, 224, 224))
```
<a id='4'></a>
# Creating Using MONK LOW code API
<a id='4-1'></a>
## Mxnet backend
```
from monk.gluon_prototype import prototype
gtf = prototype(verbose=1);
gtf.Prototype("sample-project-1", "sample-experiment-1");
network = [];
# Single line addition of blocks
network.append(gtf.resnet_v2_bottleneck_block(output_channels=64, downsample=False));
gtf.Compile_Network(network, data_shape=(64, 224, 224), use_gpu=False);
```
<a id='4-2'></a>
## Pytorch backend
- Only the import changes
```
#Change gluon_prototype to pytorch_prototype
from monk.pytorch_prototype import prototype
gtf = prototype(verbose=1);
gtf.Prototype("sample-project-1", "sample-experiment-1");
network = [];
# Single line addition of blocks
network.append(gtf.resnet_v2_bottleneck_block(output_channels=64, downsample=False));
gtf.Compile_Network(network, data_shape=(64, 224, 224), use_gpu=False);
```
<a id='4-3'></a>
## Keras backend
- Only the import changes
```
#Change gluon_prototype to keras_prototype
from monk.keras_prototype import prototype
gtf = prototype(verbose=1);
gtf.Prototype("sample-project-1", "sample-experiment-1");
network = [];
# Single line addition of blocks
network.append(gtf.resnet_v2_bottleneck_block(output_channels=64, downsample=False));
gtf.Compile_Network(network, data_shape=(64, 224, 224), use_gpu=False);
```
<a id='5'></a>
# Appendix
<a id='5-1'></a>
## Study links
- https://towardsdatascience.com/residual-blocks-building-blocks-of-resnet-fd90ca15d6ec
- https://medium.com/@MaheshNKhatri/resnet-block-explanation-with-a-terminology-deep-dive-989e15e3d691
- https://medium.com/analytics-vidhya/understanding-and-implementation-of-residual-networks-resnets-b80f9a507b9c
- https://hackernoon.com/resnet-block-level-design-with-deep-learning-studio-part-1-727c6f4927ac
<a id='5-2'></a>
## Creating block using traditional Mxnet
- Code credits - https://mxnet.incubator.apache.org/
```
# Traditional-Mxnet-gluon
import mxnet as mx
from mxnet.gluon import nn
from mxnet.gluon.nn import HybridBlock, BatchNorm
from mxnet.gluon.contrib.nn import HybridConcurrent, Identity
from mxnet import gluon, init, nd
def _conv3x3(channels, stride, in_channels):
return nn.Conv2D(channels, kernel_size=3, strides=stride, padding=1,
use_bias=False, in_channels=in_channels)
class ResnetBlockV1(HybridBlock):
def __init__(self, channels, stride, in_channels=0, **kwargs):
super(ResnetBlockV1, self).__init__(**kwargs)
#Common Elements
self.bn0 = nn.BatchNorm();
self.relu0 = nn.Activation('relu');
#Branch - 1
#Identity
# Branch - 2
self.body = nn.HybridSequential(prefix='')
self.body.add(nn.Conv2D(channels//4, kernel_size=1, strides=stride,
use_bias=False, in_channels=in_channels))
self.body.add(nn.BatchNorm())
self.body.add(nn.Activation('relu'))
self.body.add(_conv3x3(channels//4, stride, in_channels))
self.body.add(nn.BatchNorm())
self.body.add(nn.Activation('relu'))
self.body.add(nn.Conv2D(channels, kernel_size=1, strides=stride,
use_bias=False, in_channels=in_channels))
def hybrid_forward(self, F, x):
x = self.bn0(x);
x = self.relu0(x);
residual = x
x = self.body(x)
x = residual+x
return x
# Invoke the block
block = ResnetBlockV1(64, 1)
# Initialize network and load block on machine
ctx = [mx.cpu()];
block.initialize(init.Xavier(), ctx = ctx);
block.collect_params().reset_ctx(ctx)
block.hybridize()
# Run data through network
x = np.zeros((1, 64, 224, 224));
x = mx.nd.array(x);
y = block.forward(x);
print(x.shape, y.shape)
# Export Model to Load on Netron
block.export("final", epoch=0);
netron.start("final-symbol.json", port=8082)
```
<a id='5-3'></a>
## Creating block using traditional Pytorch
- Code credits - https://pytorch.org/
```
# Traditional-Pytorch
import torch
from torch import nn
from torch.jit.annotations import List
import torch.nn.functional as F
def conv3x3(in_planes, out_planes, stride=1, groups=1, dilation=1):
"""3x3 convolution with padding"""
return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
padding=dilation, groups=groups, bias=False, dilation=dilation)
def conv1x1(in_planes, out_planes, stride=1):
"""1x1 convolution"""
return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=False)
class ResnetBottleNeckBlock(nn.Module):
expansion = 1
__constants__ = ['downsample']
def __init__(self, inplanes, planes, stride=1, groups=1,
base_width=64, dilation=1, norm_layer=None):
super(ResnetBottleNeckBlock, self).__init__()
norm_layer = nn.BatchNorm2d
#Common elements
self.bn0 = norm_layer(inplanes);
self.relu0 = nn.ReLU(inplace=True);
# Branch - 1
#Identity
# Branch - 2
self.conv1 = conv1x1(inplanes, planes//4, stride)
self.bn1 = norm_layer(planes//4)
self.relu1 = nn.ReLU(inplace=True)
self.conv2 = conv3x3(planes//4, planes//4, stride)
self.bn2 = norm_layer(planes//4)
self.relu2 = nn.ReLU(inplace=True)
self.conv3 = conv1x1(planes//4, planes)
self.stride = stride
self.relu = nn.ReLU(inplace=True)
def forward(self, x):
x = self.bn0(x);
x = self.relu0(x);
identity = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu1(out)
out = self.conv2(out)
out = self.bn2(out)
out = self.relu2(out)
out = self.conv3(out)
out += identity
return out
# Invoke the block
block = ResnetBottleNeckBlock(64, 64, stride=1);
# Initialize network and load block on machine
layers = []
layers.append(block);
net = nn.Sequential(*layers);
# Run data through network
x = torch.randn(1, 64, 224, 224)
y = net(x)
print(x.shape, y.shape);
# Export Model to Load on Netron
torch.onnx.export(net, # model being run
x, # model input (or a tuple for multiple inputs)
"model.onnx", # where to save the model (can be a file or file-like object)
export_params=True, # store the trained parameter weights inside the model file
opset_version=10, # the ONNX version to export the model to
do_constant_folding=True, # whether to execute constant folding for optimization
input_names = ['input'], # the model's input names
output_names = ['output'], # the model's output names
                  dynamic_axes={'input' : {0 : 'batch_size'},    # variable length axes
'output' : {0 : 'batch_size'}})
netron.start('model.onnx', port=9998);
```
<a id='5-4'></a>
## Creating block using traditional Keras
- Code credits: https://keras.io/
```
# Traditional-Keras
import keras
import keras.layers as kla
import keras.models as kmo
import tensorflow as tf
from keras.models import Model
backend = 'channels_last'
from keras import layers
def resnet_conv_block(input_tensor,
kernel_size,
filters,
stage,
block,
strides=(1, 1)):
filters1, filters2, filters3 = filters
bn_axis = 3
conv_name_base = 'res' + str(stage) + block + '_branch'
bn_name_base = 'bn' + str(stage) + block + '_branch'
#Common Elements
start = layers.BatchNormalization(axis=bn_axis, name=bn_name_base + '0a')(input_tensor)
start = layers.Activation('relu')(start)
# Branch - 1
# Identity
shortcut = start
# Branch - 2
x = layers.Conv2D(filters1, (1, 1), strides=strides,
kernel_initializer='he_normal',
name=conv_name_base + '2a')(start)
x = layers.BatchNormalization(axis=bn_axis, name=bn_name_base + '2a')(x)
x = layers.Activation('relu')(x)
x = layers.Conv2D(filters2, (3, 3), strides=strides,
kernel_initializer='he_normal',
name=conv_name_base + '2b', padding="same")(x)
x = layers.BatchNormalization(axis=bn_axis, name=bn_name_base + '2b')(x)
x = layers.Activation('relu')(x)
x = layers.Conv2D(filters3, (1, 1),
kernel_initializer='he_normal',
name=conv_name_base + '2c')(x);
x = layers.add([x, shortcut])
x = layers.Activation('relu')(x)
return x
def create_model(input_shape, kernel_size, filters, stage, block):
img_input = layers.Input(shape=input_shape);
x = resnet_conv_block(img_input, kernel_size, filters, stage, block)
return Model(img_input, x);
# Invoke the block
kernel_size=3;
filters=[16, 16, 64];
input_shape=(224, 224, 64);
model = create_model(input_shape, kernel_size, filters, 0, "0");
# Run data through network
x = tf.placeholder(tf.float32, shape=(1, 224, 224, 64))
y = model(x)
print(x.shape, y.shape)
# Export Model to Load on Netron
model.save("final.h5");
netron.start("final.h5", port=8082)
```
# Goals Completed
### 1. Learn to implement Resnet V2 Bottleneck Block (Type - 1) using monk
- Monk's Keras
- Monk's Pytorch
- Monk's Mxnet
### 2. Use network Monk's debugger to create complex blocks
### 3. Understand how syntactically different it is to implement the same using
- Traditional Keras
- Traditional Pytorch
- Traditional Mxnet
# Experiments comparing the performance of traditional pooling operations and entropy pooling within a shallow neural network and Lenet. The experiments use cifar10 and cifar100.
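As a toy illustration of the pooling rule implemented below (added for clarity; not part of the original experiments): value frequencies are computed over the whole input, and each pooling window then keeps its least frequent value in 'high' entropy mode, or its most frequent value in 'low' entropy mode.
```
import numpy as np

# Toy 4x4 input; frequencies are computed over the whole array.
x = np.array([[1, 1, 2, 2],
              [1, 1, 2, 3],
              [4, 4, 5, 5],
              [4, 4, 5, 6]])
values, counts = np.unique(x, return_counts=True)
freq = dict(zip(values, counts))

window = x[0:2, 2:4]  # the top-right 2x2 window: [[2, 2], [2, 3]]
high_entropy_pick = min(window.ravel(), key=lambda v: freq[v])  # 3, the rarest value overall
low_entropy_pick = max(window.ravel(), key=lambda v: freq[v])   # 2, the most frequent value
print(high_entropy_pick, low_entropy_pick)
```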
```
%matplotlib inline
import torch
import torchvision
import torchvision.transforms as transforms
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
trainset = torchvision.datasets.CIFAR100(root='./data', train=True,
download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
shuffle=True, num_workers=8)
testset = torchvision.datasets.CIFAR100(root='./data', train=False,
download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4,
shuffle=False, num_workers=8)
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
import math
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.nn.modules.utils import _pair, _quadruple
import time
from skimage.measure import shannon_entropy
from scipy import stats
from torch.nn.modules.utils import _pair, _quadruple
import time
from skimage.measure import shannon_entropy
from scipy import stats
import numpy as np
class EntropyPool2d(nn.Module):
def __init__(self, kernel_size=3, stride=1, padding=0, same=False, entr='high'):
super(EntropyPool2d, self).__init__()
self.k = _pair(kernel_size)
self.stride = _pair(stride)
self.padding = _quadruple(padding) # convert to l, r, t, b
self.same = same
self.entr = entr
def _padding(self, x):
if self.same:
ih, iw = x.size()[2:]
if ih % self.stride[0] == 0:
ph = max(self.k[0] - self.stride[0], 0)
else:
ph = max(self.k[0] - (ih % self.stride[0]), 0)
if iw % self.stride[1] == 0:
pw = max(self.k[1] - self.stride[1], 0)
else:
pw = max(self.k[1] - (iw % self.stride[1]), 0)
pl = pw // 2
pr = pw - pl
pt = ph // 2
pb = ph - pt
padding = (pl, pr, pt, pb)
else:
padding = self.padding
return padding
def forward(self, x):
# using existing pytorch functions and tensor ops so that we get autograd,
# would likely be more efficient to implement from scratch at C/Cuda level
start = time.time()
x = F.pad(x, self._padding(x), mode='reflect')
x_detached = x.cpu().detach()
x_unique, x_indices, x_inverse, x_counts = np.unique(x_detached,
return_index=True,
return_inverse=True,
return_counts=True)
freq = torch.FloatTensor([x_counts[i] / len(x_inverse) for i in x_inverse]).cuda()
x_probs = freq.view(x.shape)
x_probs = x_probs.unfold(2, self.k[0], self.stride[0]).unfold(3, self.k[1], self.stride[1])
x_probs = x_probs.contiguous().view(x_probs.size()[:4] + (-1,))
        if self.entr == 'high':
            x_probs, indices = torch.min(x_probs.cuda(), dim=-1)
        elif self.entr == 'low':
            x_probs, indices = torch.max(x_probs.cuda(), dim=-1)
        else:
            raise Exception('Unknown entropy mode: {}'.format(self.entr))
x = x.unfold(2, self.k[0], self.stride[0]).unfold(3, self.k[1], self.stride[1])
x = x.contiguous().view(x.size()[:4] + (-1,))
indices = indices.view(indices.size() + (-1,))
pool = torch.gather(input=x, dim=-1, index=indices)
return pool.squeeze(-1)
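# --- Added sanity check (not part of the original experiments) ---
# Minimal usage sketch for EntropyPool2d. It is guarded by a CUDA check because
# the layer moves tensors to the GPU internally (.cuda() calls in forward()).
if torch.cuda.is_available():
    _pool = EntropyPool2d(kernel_size=2, stride=2, entr='high')
    _sample = torch.rand(1, 3, 8, 8).cuda()
    print('EntropyPool2d output shape:', _pool(_sample).shape)  # expected: (1, 3, 4, 4)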
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import time
from sklearn.metrics import f1_score
MAX = 'max'
AVG = 'avg'
HIGH_ENTROPY = 'high_entr'
LOW_ENTROPY = 'low_entr'
class Net1Pool(nn.Module):
def __init__(self, num_classes=10, pooling=MAX):
super(Net1Pool, self).__init__()
self.conv1 = nn.Conv2d(3, 30, 5)
        if pooling == MAX:
            self.pool = nn.MaxPool2d(2, 2)
        elif pooling == AVG:
            self.pool = nn.AvgPool2d(2, 2)
        elif pooling == HIGH_ENTROPY:
            self.pool = EntropyPool2d(2, 2, entr='high')
        elif pooling == LOW_ENTROPY:
            self.pool = EntropyPool2d(2, 2, entr='low')
self.fc0 = nn.Linear(30 * 14 * 14, num_classes)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = x.view(-1, 30 * 14 * 14)
x = F.relu(self.fc0(x))
return x
class Net2Pool(nn.Module):
def __init__(self, num_classes=10, pooling=MAX):
super(Net2Pool, self).__init__()
self.conv1 = nn.Conv2d(3, 50, 5, 1)
self.conv2 = nn.Conv2d(50, 50, 5, 1)
        if pooling == MAX:
            self.pool = nn.MaxPool2d(2, 2)
        elif pooling == AVG:
            self.pool = nn.AvgPool2d(2, 2)
        elif pooling == HIGH_ENTROPY:
            self.pool = EntropyPool2d(2, 2, entr='high')
        elif pooling == LOW_ENTROPY:
            self.pool = EntropyPool2d(2, 2, entr='low')
self.fc1 = nn.Linear(5*5*50, 500)
self.fc2 = nn.Linear(500, num_classes)
def forward(self, x):
x = F.relu(self.conv1(x))
x = self.pool(x)
x = F.relu(self.conv2(x))
x = self.pool(x)
x = x.view(-1, 5*5*50)
x = F.relu(self.fc1(x))
x = self.fc2(x)
return x
def configure_net(net, device):
net.to(device)
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
return net, optimizer, criterion
def train(net, optimizer, criterion, trainloader, device, epochs=10, logging=2000):
for epoch in range(epochs):
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
start = time.time()
inputs, labels = data
inputs, labels = inputs.to(device), labels.to(device)
optimizer.zero_grad()
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
running_loss += loss.item()
if i % logging == logging - 1:
print('[%d, %5d] loss: %.3f duration: %.5f' %
(epoch + 1, i + 1, running_loss / logging, time.time() - start))
running_loss = 0.0
print('Finished Training')
def test(net, testloader, device):
correct = 0
total = 0
predictions = []
l = []
with torch.no_grad():
for data in testloader:
images, labels = data
images, labels = images.to(device), labels.to(device)
outputs = net(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
predictions.extend(predicted.cpu().numpy())
l.extend(labels.cpu().numpy())
print('Accuracy: {}'.format(100 * correct / total))
epochs = 10
logging = 15000
num_classes = 100
print('- - - - - - - - -- - - - 2 pool - - - - - - - - - - - - - - - -')
print('- - - - - - - - -- - - - MAX - - - - - - - - - - - - - - - -')
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
net, optimizer, criterion = configure_net(Net2Pool(num_classes=num_classes, pooling=MAX), device)
train(net, optimizer, criterion, trainloader, device, epochs=epochs, logging=logging)
test(net, testloader, device)
print('- - - - - - - - -- - - - AVG - - - - - - - - - - - - - - - -')
net, optimizer, criterion = configure_net(Net2Pool(num_classes=num_classes, pooling=AVG), device)
train(net, optimizer, criterion, trainloader, device, epochs=epochs, logging=logging)
test(net, testloader, device)
print('- - - - - - - - -- - - - HIGH - - - - - - - - - - - - - - - -')
net, optimizer, criterion = configure_net(Net2Pool(num_classes=num_classes, pooling=HIGH_ENTROPY), device)
train(net, optimizer, criterion, trainloader, device, epochs=epochs, logging=logging)
test(net, testloader, device)
print('- - - - - - - - -- - - - LOW - - - - - - - - - - - - - - - -')
net, optimizer, criterion = configure_net(Net2Pool(num_classes=num_classes, pooling=LOW_ENTROPY), device)
train(net, optimizer, criterion, trainloader, device, epochs=epochs, logging=logging)
test(net, testloader, device)
print('- - - - - - - - -- - - - 1 pool - - - - - - - - - - - - - - - -')
print('- - - - - - - - -- - - - MAX - - - - - - - - - - - - - - - -')
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
net, optimizer, criterion = configure_net(Net1Pool(num_classes=num_classes, pooling=MAX), device)
train(net, optimizer, criterion, trainloader, device, epochs=epochs, logging=logging)
test(net, testloader, device)
print('- - - - - - - - -- - - - AVG - - - - - - - - - - - - - - - -')
net, optimizer, criterion = configure_net(Net1Pool(num_classes=num_classes, pooling=AVG), device)
train(net, optimizer, criterion, trainloader, device, epochs=epochs, logging=logging)
test(net, testloader, device)
print('- - - - - - - - -- - - - HIGH - - - - - - - - - - - - - - - -')
net, optimizer, criterion = configure_net(Net1Pool(num_classes=num_classes, pooling=HIGH_ENTROPY), device)
train(net, optimizer, criterion, trainloader, device, epochs=epochs, logging=logging)
test(net, testloader, device)
print('- - - - - - - - -- - - - LOW - - - - - - - - - - - - - - - -')
net, optimizer, criterion = configure_net(Net1Pool(num_classes=num_classes, pooling=LOW_ENTROPY), device)
train(net, optimizer, criterion, trainloader, device, epochs=epochs, logging=logging)
test(net, testloader, device)
```
# Real Estate Price Prediction
```
import pandas as pd
df = pd.read_csv("data.csv")
df.head()
df['CHAS'].value_counts()
df.info()
df.describe()
%matplotlib inline
import matplotlib.pyplot as plt
df.hist(bins=50, figsize=(20,15))
```
## train_test_split
```
import numpy as np
def split_train_test(data, test_ratio):
np.random.seed(42)
shuffled = np.random.permutation(len(data))
test_set_size = int(len(data) * test_ratio)
test_indices = shuffled[:test_set_size]
train_indices = shuffled[test_set_size:]
return data.iloc[train_indices], data.iloc[test_indices]
train_set, test_set = split_train_test(df, 0.2)
print(f"The length of train dataset is: {len(train_set)}")
print(f"The length of test dataset is: {len(test_set)}")
def data_percent_allocation(train_set, test_set):
total = len(df)
train_percent = round((len(train_set)/total) * 100)
test_percent = round((len(test_set)/total) * 100)
return train_percent, test_percent
data_percent_allocation(train_set, test_set)
```
## train_test_split from sklearn
```
from sklearn.model_selection import train_test_split
train_set, test_set = train_test_split(df, test_size = 0.2, random_state = 42)
print(f"The length of train dataset is: {len(train_set)}")
print(f"The length of test dataset is: {len(test_set)}")
from sklearn.model_selection import StratifiedShuffleSplit
split = StratifiedShuffleSplit(n_splits = 1, test_size = 0.2, random_state = 42)
for train_index, test_index in split.split(df, df['CHAS']):
strat_train_set = df.loc[train_index]
strat_test_set = df.loc[test_index]
strat_test_set['CHAS'].value_counts()
test_set['CHAS'].value_counts()
strat_train_set['CHAS'].value_counts()
train_set['CHAS'].value_counts()
```
### Stratified splitting keeps the ratio of CHAS zeros and ones consistent across the splits
```
95/7    # ratio of CHAS = 0 to CHAS = 1 in the stratified test set (~13.6)
376/28  # ratio of CHAS = 0 to CHAS = 1 in the stratified train set (~13.4)
df = strat_train_set.copy()
```
## Correlations
```
from pandas.plotting import scatter_matrix
attributes = ["MEDV", "RM", "ZN" , "LSTAT"]
scatter_matrix(df[attributes], figsize = (12,8))
df.plot(kind="scatter", x="RM", y="MEDV", alpha=1)
```
### Trying out attribute combinations
```
df["TAXRM"] = df["TAX"]/df["RM"]
df.head()
corr_matrix = df.corr()
corr_matrix['MEDV'].sort_values(ascending=False)
# 1 means a strong positive correlation and -1 means a strong negative correlation.
# e.g. as RM increases, the predicted target (MEDV) also tends to increase.
df.plot(kind="scatter", x="TAXRM", y="MEDV", alpha=1)
df = strat_train_set.drop("MEDV", axis=1)
df_labels = strat_train_set["MEDV"].copy()
```
## Pipeline
```
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.impute import SimpleImputer
my_pipeline = Pipeline([
('imputer', SimpleImputer(strategy="median")),
('std_scaler', StandardScaler()),
])
df_numpy = my_pipeline.fit_transform(df)
df_numpy
#Numpy array of df as models will take numpy array as input.
df_numpy.shape
```
## Model Selection
```
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
# model = LinearRegression()
# model = DecisionTreeRegressor()
model = RandomForestRegressor()
model.fit(df_numpy, df_labels)
some_data = df.iloc[:5]
some_labels = df_labels.iloc[:5]
prepared_data = my_pipeline.transform(some_data)
model.predict(prepared_data)
list(some_labels)
```
## Evaluating the model
```
from sklearn.metrics import mean_squared_error
df_predictions = model.predict(df_numpy)
mse = mean_squared_error(df_labels, df_predictions)
rmse = np.sqrt(mse)
rmse
# from sklearn.metrics import accuracy_score
# accuracy_score(some_data, some_labels, normalize=False)
```
## Cross Validation
```
from sklearn.model_selection import cross_val_score
scores = cross_val_score(model, df_numpy, df_labels, scoring="neg_mean_squared_error", cv=10)
rmse_scores = np.sqrt(-scores)
rmse_scores
def print_scores(scores):
print("Scores:", scores)
print("\nMean:", scores.mean())
print("\nStandard deviation:", scores.std())
print_scores(rmse_scores)
```
### Saving Model
```
from joblib import dump, load
dump(model, 'final_model.joblib')
dump(model, 'final_model.sav')
```
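To use the saved model later, reload it with joblib and run new samples through the same preprocessing pipeline; a minimal sketch using a row from the training features defined above:
```
loaded_model = load('final_model.joblib')
sample = df.iloc[:1]  # any row with the training feature columns
sample_prepared = my_pipeline.transform(sample)
print(loaded_model.predict(sample_prepared))
```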
## Testing model on test data
```
X_test = strat_test_set.drop("MEDV", axis=1)
Y_test = strat_test_set["MEDV"].copy()
X_test_prepared = my_pipeline.transform(X_test)
final_predictions = model.predict(X_test_prepared)
final_mse = mean_squared_error(Y_test, final_predictions)
final_rmse = np.sqrt(final_mse)
final_rmse
```
# In-Place Waveform Library Updates
This example notebook shows how one can update pulse data in-place without recompiling.
© Raytheon BBN Technologies 2020
Set the `SAVE_WF_OFFSETS` flag in order that QGL will output a map of the waveform data within the compiled binary waveform library.
```
from QGL import *
import QGL
import os.path
import pickle
QGL.drivers.APS2Pattern.SAVE_WF_OFFSETS = True
```
Create the usual channel library with a couple of AWGs.
```
cl = ChannelLibrary(":memory:")
q1 = cl.new_qubit("q1")
aps2_1 = cl.new_APS2("BBNAPS1", address="192.168.5.101")
aps2_2 = cl.new_APS2("BBNAPS2", address="192.168.5.102")
dig_1 = cl.new_X6("X6_1", address=0)
h1 = cl.new_source("Holz1", "HolzworthHS9000", "HS9004A-009-1", power=-30)
h2 = cl.new_source("Holz2", "HolzworthHS9000", "HS9004A-009-2", power=-30)
cl.set_control(q1, aps2_1, generator=h1)
cl.set_measure(q1, aps2_2, dig_1.ch(1), generator=h2)
cl.set_master(aps2_1, aps2_1.ch("m2"))
cl["q1"].measure_chan.frequency = 0e6
cl.commit()
```
Compile a simple sequence.
```
mf = RabiAmp(cl["q1"], np.linspace(-1, 1, 11))
plot_pulse_files(mf, time=True)
```
Open the offsets file (in the same directory as the `.aps2` files, one per AWG slice.)
```
offset_f = os.path.join(os.path.dirname(mf), "Rabi-BBNAPS1.offsets")
with open(offset_f, "rb") as FID:
offsets = pickle.load(FID)
offsets
```
Let's replace every single pulse with a fixed amplitude `Utheta`
```
pulses = {l: Utheta(q1, amp=0.1, phase=0) for l in offsets}
wfm_f = os.path.join(os.path.dirname(mf), "Rabi-BBNAPS1.aps2")
QGL.drivers.APS2Pattern.update_wf_library(wfm_f, pulses, offsets)
```
We see that the data in the file has been updated.
```
plot_pulse_files(mf, time=True)
```
## Profiling
How long does this take?
```
%timeit mf = RabiAmp(cl["q1"], np.linspace(-1, 1, 100))
```
Getting the offsets is fast, and only needs to be done once
```
def get_offsets():
offset_f = os.path.join(os.path.dirname(mf), "Rabi-BBNAPS1.offsets")
with open(offset_f, "rb") as FID:
offsets = pickle.load(FID)
return offsets
%timeit offsets = get_offsets()
%timeit pulses = {l: Utheta(q1, amp=0.1, phase=0) for l in offsets}
wfm_f = os.path.join(os.path.dirname(mf), "Rabi-BBNAPS1.aps2")
%timeit QGL.drivers.APS2Pattern.update_wf_library(wfm_f, pulses, offsets)
# %timeit QGL.drivers.APS2Pattern.update_wf_library("/Users/growland/workspace/AWG/Rabi/Rabi-BBNAPS1.aps2", pulses, offsets)
```
Moral of the story: 300 ms for initial compilation, and roughly 1.3 ms for update_in_place.
# End-to-end learning for music audio
- http://qiita.com/himono/items/a94969e35fa8d71f876c
```
# Download the data
wget http://mi.soi.city.ac.uk/datasets/magnatagatune/mp3.zip.001
wget http://mi.soi.city.ac.uk/datasets/magnatagatune/mp3.zip.002
wget http://mi.soi.city.ac.uk/datasets/magnatagatune/mp3.zip.003
# Concatenate the parts
cat data/mp3.zip.* > data/music.zip
# Unzip
unzip data/music.zip -d music
```
```
%matplotlib inline
import os
import matplotlib.pyplot as plt
```
## Loading the MP3 files
```
import numpy as np
from pydub import AudioSegment
def mp3_to_array(file):
# MP3 => RAW
song = AudioSegment.from_mp3(file)
    song_arr = np.frombuffer(song._data, dtype=np.int16)
return song_arr
%ls data/music/1/ambient_teknology-phoenix-01-ambient_teknology-0-29.mp3
file = 'data/music/1/ambient_teknology-phoenix-01-ambient_teknology-0-29.mp3'
song = mp3_to_array(file)
plt.plot(song)
```
## Loading the song tag data
- Randomly sample 3000 songs
- Keep the 50 most frequently used tags
- Each song can have multiple tags
```
import pandas as pd
tags_df = pd.read_csv('data/annotations_final.csv', delim_whitespace=True)
# Shuffle the whole dataset randomly
tags_df = tags_df.sample(frac=1)
# Use the first 3000 songs
tags_df = tags_df[:3000]
tags_df
top50_tags = tags_df.iloc[:, 1:189].sum().sort_values(ascending=False).index[:50].tolist()
y = tags_df[top50_tags].values
y
```
## Loading the audio data
- Get the file paths from the mp3_path column of tags_df
- Load each file as a numpy array with mp3_to_array()
- Reshape to (samples, features, channels)
- The raw waveform is 1-D, so channels is 1
- All training clips have the same length, so features should be the same for every song (no padding needed)
```
files = tags_df.mp3_path.values
files = [os.path.join('data', 'music', x) for x in files]
X = np.array([mp3_to_array(file) for file in files])
X = X.reshape(X.shape[0], X.shape[1], 1)
X.shape
```
## Splitting into training and test data
```
from sklearn.model_selection import train_test_split
random_state = 42
train_x, test_x, train_y, test_y = train_test_split(X, y, test_size=0.2, random_state=random_state)
print(train_x.shape)
print(test_x.shape)
print(train_y.shape)
print(test_y.shape)
plt.plot(train_x[0])
np.save('train_x.npy', train_x)
np.save('test_x.npy', test_x)
np.save('train_y.npy', train_y)
np.save('test_y.npy', test_y)
```
## Training
```
import numpy as np
from keras.models import Model
from keras.layers import Dense, Flatten, Input, Conv1D, MaxPooling1D
from keras.callbacks import CSVLogger, ModelCheckpoint
train_x = np.load('train_x.npy')
train_y = np.load('train_y.npy')
test_x = np.load('test_x.npy')
test_y = np.load('test_y.npy')
print(train_x.shape)
print(train_y.shape)
print(test_x.shape)
print(test_y.shape)
features = train_x.shape[1]
x_inputs = Input(shape=(features, 1), name='x_inputs')
x = Conv1D(128, 256, strides=256, padding='valid', activation='relu')(x_inputs) # strided conv
x = Conv1D(32, 8, activation='relu')(x)
x = MaxPooling1D(4)(x)
x = Conv1D(32, 8, activation='relu')(x)
x = MaxPooling1D(4)(x)
x = Conv1D(32, 8, activation='relu')(x)
x = MaxPooling1D(4)(x)
x = Conv1D(32, 8, activation='relu')(x)
x = MaxPooling1D(4)(x)
x = Flatten()(x)
x = Dense(100, activation='relu')(x)
x_outputs = Dense(50, activation='sigmoid', name='x_outputs')(x)
model = Model(inputs=x_inputs, outputs=x_outputs)
model.compile(optimizer='adam',
loss='categorical_crossentropy')
logger = CSVLogger('history.log')
checkpoint = ModelCheckpoint(
'model.{epoch:02d}-{val_loss:.3f}.h5',
monitor='val_loss',
verbose=1,
save_best_only=True,
mode='auto')
model.fit(train_x, train_y, batch_size=600, epochs=50,
validation_data=[test_x, test_y],
callbacks=[logger, checkpoint])
```
## Prediction
- The tagger outputs multiple tags per clip, so a plain evaluate()/accuracy score is not very informative; ROC AUC is computed instead
```
import numpy as np
from keras.models import load_model
from sklearn.metrics import roc_auc_score
test_x = np.load('test_x.npy')
test_y = np.load('test_y.npy')
model = load_model('model.22-9.187-0.202.h5')
pred_y = model.predict(test_x, batch_size=50)
print(roc_auc_score(test_y, pred_y))
print(model.evaluate(test_x, test_y))
```
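Since the network outputs a sigmoid score per tag, the scores can be turned into concrete tag predictions by thresholding; a small sketch (the 0.5 threshold is an arbitrary assumption, not from the original post):
```
pred_tags = (pred_y > 0.5).astype(int)
print(pred_tags.shape)              # (number of test clips, 50)
print(np.nonzero(pred_tags[0])[0])  # indices of the tags predicted for the first clip
```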
Mount my google drive, where I stored the dataset.
```
from google.colab import drive
drive.mount('/content/drive')
```
**Download dependencies**
```
!pip3 install sklearn matplotlib GPUtil
!pip3 install torch torchvision
```
**Download Data**
In order to acquire the dataset please navigate to:
https://ieee-dataport.org/documents/cervigram-image-dataset
Unzip the dataset into the folder "dataset".
For your environment, please adjust the paths accordingly.
```
!rm -vrf "dataset"
!mkdir "dataset"
# !cp -r "/content/drive/My Drive/Studiu doctorat leziuni cervicale/cervigram-image-dataset-v2.zip" "dataset/cervigram-image-dataset-v2.zip"
!cp -r "cervigram-image-dataset-v2.zip" "dataset/cervigram-image-dataset-v2.zip"
!unzip "dataset/cervigram-image-dataset-v2.zip" -d "dataset"
```
**Constants**
For your environment, please modify the paths accordingly.
```
# TRAIN_PATH = '/content/dataset/data/train/'
# TEST_PATH = '/content/dataset/data/test/'
TRAIN_PATH = 'dataset/data/train/'
TEST_PATH = 'dataset/data/test/'
CROP_SIZE = 260
IMAGE_SIZE = 224
BATCH_SIZE = 100
```
**Imports**
```
import torch as t
import torchvision as tv
import numpy as np
import PIL as pil
import matplotlib.pyplot as plt
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader
from torch.nn import Linear, BCEWithLogitsLoss
import sklearn as sk
import sklearn.metrics
from os import listdir
import time
import random
import GPUtil
```
**Memory Stats**
```
import GPUtil
def memory_stats():
for gpu in GPUtil.getGPUs():
print("GPU RAM Free: {0:.0f}MB | Used: {1:.0f}MB | Util {2:3.0f}% | Total {3:.0f}MB".format(gpu.memoryFree, gpu.memoryUsed, gpu.memoryUtil*100, gpu.memoryTotal))
memory_stats()
```
**Deterministic Measurements**
These statements help make the experiments reproducible by fixing the random seeds. Despite fixing the random seeds, experiments are usually not reproducible across different PyTorch releases, commits, platforms, or between CPU and GPU executions. Please find more details in the PyTorch documentation:
https://pytorch.org/docs/stable/notes/randomness.html
```
SEED = 0
t.manual_seed(SEED)
t.cuda.manual_seed(SEED)
t.backends.cudnn.deterministic = True
t.backends.cudnn.benchmark = False
np.random.seed(SEED)
random.seed(SEED)
```
**Loading Data**
The dataset is structured in multiple small folders of 7 images each. This generator iterates through the folders and returns the category and 7 paths, one for each image in the folder. The paths are ordered, and the order is important since each folder contains 3 types of images: the first 5 are taken with acetic acid solution, and the last two are one image taken through a green lens and one taken with iodine solution (a solution of a dark red color).
```
def sortByLastDigits(elem):
chars = [c for c in elem if c.isdigit()]
return 0 if len(chars) == 0 else int(''.join(chars))
def getImagesPaths(root_path):
for class_folder in [root_path + f for f in listdir(root_path)]:
category = int(class_folder[-1])
for case_folder in listdir(class_folder):
case_folder_path = class_folder + '/' + case_folder + '/'
img_files = [case_folder_path + file_name for file_name in listdir(case_folder_path)]
yield category, sorted(img_files, key = sortByLastDigits)
```
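A quick sanity check of the generator (added here; not in the original notebook): each yielded case should contain exactly 7 ordered image paths.
```
for category, paths in getImagesPaths(TRAIN_PATH):
    print('category:', category, '| images in case:', len(paths))
    break
```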
We define 3 datasets, which load 3 kinds of images: natural images, images taken through a green lens and images where the doctor applied iodine solution (which gives a dark red color). Each dataset has dynamic and static transformations which could be applied to the data. The static transformations are applied on the initialization of the dataset, while the dynamic ones are applied when loading each batch of data.
```
class SimpleImagesDataset(t.utils.data.Dataset):
def __init__(self, root_path, transforms_x_static = None, transforms_x_dynamic = None, transforms_y_static = None, transforms_y_dynamic = None):
self.dataset = []
self.transforms_x = transforms_x_dynamic
self.transforms_y = transforms_y_dynamic
for category, img_files in getImagesPaths(root_path):
for i in range(5):
img = pil.Image.open(img_files[i])
if transforms_x_static != None:
img = transforms_x_static(img)
if transforms_y_static != None:
category = transforms_y_static(category)
self.dataset.append((img, category))
def __getitem__(self, i):
x, y = self.dataset[i]
if self.transforms_x != None:
x = self.transforms_x(x)
if self.transforms_y != None:
y = self.transforms_y(y)
return x, y
def __len__(self):
return len(self.dataset)
class GreenLensImagesDataset(SimpleImagesDataset):
def __init__(self, root_path, transforms_x_static = None, transforms_x_dynamic = None, transforms_y_static = None, transforms_y_dynamic = None):
self.dataset = []
self.transforms_x = transforms_x_dynamic
self.transforms_y = transforms_y_dynamic
for category, img_files in getImagesPaths(root_path):
# Only the green lens image
img = pil.Image.open(img_files[-2])
if transforms_x_static != None:
img = transforms_x_static(img)
if transforms_y_static != None:
category = transforms_y_static(category)
self.dataset.append((img, category))
class RedImagesDataset(SimpleImagesDataset):
def __init__(self, root_path, transforms_x_static = None, transforms_x_dynamic = None, transforms_y_static = None, transforms_y_dynamic = None):
self.dataset = []
self.transforms_x = transforms_x_dynamic
self.transforms_y = transforms_y_dynamic
for category, img_files in getImagesPaths(root_path):
            # Only the iodine (dark red) solution image
img = pil.Image.open(img_files[-1])
if transforms_x_static != None:
img = transforms_x_static(img)
if transforms_y_static != None:
category = transforms_y_static(category)
self.dataset.append((img, category))
```
**Preprocess Data**
Convert pytorch tensor to numpy array.
```
def to_numpy(x):
return x.cpu().detach().numpy()
```
Data transformations for the test and training sets.
```
norm_mean = [0.485, 0.456, 0.406]
norm_std = [0.229, 0.224, 0.225]
transforms_train = tv.transforms.Compose([
tv.transforms.RandomAffine(degrees = 45, translate = None, scale = (1., 2.), shear = 30),
# tv.transforms.CenterCrop(CROP_SIZE),
tv.transforms.Resize(IMAGE_SIZE),
tv.transforms.RandomHorizontalFlip(),
tv.transforms.ToTensor(),
tv.transforms.Lambda(lambda t: t.cuda()),
tv.transforms.Normalize(mean=norm_mean, std=norm_std)
])
transforms_test = tv.transforms.Compose([
# tv.transforms.CenterCrop(CROP_SIZE),
tv.transforms.Resize(IMAGE_SIZE),
tv.transforms.ToTensor(),
tv.transforms.Normalize(mean=norm_mean, std=norm_std)
])
y_transform = tv.transforms.Lambda(lambda y: t.tensor(y, dtype=t.long, device = 'cuda:0'))
```
Initialize pytorch datasets and loaders for training and test.
```
def create_loaders(dataset_class):
dataset_train = dataset_class(TRAIN_PATH, transforms_x_dynamic = transforms_train, transforms_y_dynamic = y_transform)
dataset_test = dataset_class(TEST_PATH, transforms_x_static = transforms_test,
transforms_x_dynamic = tv.transforms.Lambda(lambda t: t.cuda()), transforms_y_dynamic = y_transform)
loader_train = DataLoader(dataset_train, BATCH_SIZE, shuffle = True, num_workers = 0)
loader_test = DataLoader(dataset_test, BATCH_SIZE, shuffle = False, num_workers = 0)
return loader_train, loader_test, len(dataset_train), len(dataset_test)
loader_train_simple_img, loader_test_simple_img, len_train, len_test = create_loaders(SimpleImagesDataset)
```
**Visualize Data**
Load a few images so that we can see the effects of the data augmentation on the training set.
```
def plot_one_prediction(x, label, pred):
x, label, pred = to_numpy(x), to_numpy(label), to_numpy(pred)
x = np.transpose(x, [1, 2, 0])
if x.shape[-1] == 1:
x = x.squeeze()
x = x * np.array(norm_std) + np.array(norm_mean)
plt.title(label, color = 'green' if label == pred else 'red')
plt.imshow(x)
def plot_predictions(imgs, labels, preds):
fig = plt.figure(figsize = (20, 5))
for i in range(20):
fig.add_subplot(2, 10, i + 1, xticks = [], yticks = [])
plot_one_prediction(imgs[i], labels[i], preds[i])
# x, y = next(iter(loader_train_simple_img))
# plot_predictions(x, y, y)
```
**Model**
Define a few models to experiment with.
```
def get_mobilenet_v2():
model = t.hub.load('pytorch/vision', 'mobilenet_v2', pretrained=True)
model.classifier[1] = Linear(in_features=1280, out_features=4, bias=True)
model = model.cuda()
return model
def get_vgg_19():
model = tv.models.vgg19(pretrained = True)
model = model.cuda()
model.classifier[6].out_features = 4
return model
def get_res_next_101():
model = t.hub.load('facebookresearch/WSL-Images', 'resnext101_32x8d_wsl')
model.fc.out_features = 4
model = model.cuda()
return model
def get_resnet_18():
model = tv.models.resnet18(pretrained = True)
model.fc.out_features = 4
model = model.cuda()
return model
def get_dense_net():
model = tv.models.densenet121(pretrained = True)
model.classifier.out_features = 4
model = model.cuda()
return model
class MobileNetV2_FullConv(t.nn.Module):
def __init__(self):
super().__init__()
self.cnn = get_mobilenet_v2().features
self.cnn[18] = t.nn.Sequential(
tv.models.mobilenet.ConvBNReLU(320, 32, kernel_size=1),
t.nn.Dropout2d(p = .7)
)
self.fc = t.nn.Linear(32, 4)
def forward(self, x):
x = self.cnn(x)
x = x.mean([2, 3])
x = self.fc(x);
return x
model_simple = t.nn.DataParallel(get_mobilenet_v2())
```
**Train & Evaluate**
Timer utility function. This is used to measure the execution speed.
```
time_start = 0
def timer_start():
global time_start
time_start = time.time()
def timer_end():
return time.time() - time_start
```
This function trains the network and evaluates it at the same time. It outputs the metrics recorded during the training for both train and test. We are measuring accuracy and the loss. The function also saves a checkpoint of the model every time the accuracy is improved. In the end we will have a checkpoint of the model which gave the best accuracy.
```
def train_eval(optimizer, model, loader_train, loader_test, chekpoint_name, epochs):
metrics = {
'losses_train': [],
'losses_test': [],
'acc_train': [],
'acc_test': [],
'prec_train': [],
'prec_test': [],
'rec_train': [],
'rec_test': [],
'f_score_train': [],
'f_score_test': []
}
best_acc = 0
loss_fn = t.nn.CrossEntropyLoss()
try:
for epoch in range(epochs):
timer_start()
train_epoch_loss, train_epoch_acc, train_epoch_precision, train_epoch_recall, train_epoch_f_score = 0, 0, 0, 0, 0
test_epoch_loss, test_epoch_acc, test_epoch_precision, test_epoch_recall, test_epoch_f_score = 0, 0, 0, 0, 0
# Train
model.train()
for x, y in loader_train:
y_pred = model.forward(x)
loss = loss_fn(y_pred, y)
loss.backward()
optimizer.step()
# memory_stats()
optimizer.zero_grad()
y_pred, y = to_numpy(y_pred), to_numpy(y)
pred = y_pred.argmax(axis = 1)
ratio = len(y) / len_train
train_epoch_loss += (loss.item() * ratio)
train_epoch_acc += (sk.metrics.accuracy_score(y, pred) * ratio)
precision, recall, f_score, _ = sk.metrics.precision_recall_fscore_support(y, pred, average = 'macro')
train_epoch_precision += (precision * ratio)
train_epoch_recall += (recall * ratio)
train_epoch_f_score += (f_score * ratio)
metrics['losses_train'].append(train_epoch_loss)
metrics['acc_train'].append(train_epoch_acc)
metrics['prec_train'].append(train_epoch_precision)
metrics['rec_train'].append(train_epoch_recall)
metrics['f_score_train'].append(train_epoch_f_score)
# Evaluate
model.eval()
with t.no_grad():
for x, y in loader_test:
y_pred = model.forward(x)
loss = loss_fn(y_pred, y)
y_pred, y = to_numpy(y_pred), to_numpy(y)
pred = y_pred.argmax(axis = 1)
ratio = len(y) / len_test
test_epoch_loss += (loss * ratio)
test_epoch_acc += (sk.metrics.accuracy_score(y, pred) * ratio )
precision, recall, f_score, _ = sk.metrics.precision_recall_fscore_support(y, pred, average = 'macro')
test_epoch_precision += (precision * ratio)
test_epoch_recall += (recall * ratio)
test_epoch_f_score += (f_score * ratio)
metrics['losses_test'].append(test_epoch_loss)
metrics['acc_test'].append(test_epoch_acc)
metrics['prec_test'].append(test_epoch_precision)
metrics['rec_test'].append(test_epoch_recall)
metrics['f_score_test'].append(test_epoch_f_score)
if metrics['acc_test'][-1] > best_acc:
best_acc = metrics['acc_test'][-1]
t.save({'model': model.state_dict()}, 'checkpint {}.tar'.format(chekpoint_name))
print('Epoch {} acc {} prec {} rec {} f {} minutes {}'.format(
epoch + 1, metrics['acc_test'][-1], metrics['prec_test'][-1], metrics['rec_test'][-1], metrics['f_score_test'][-1], timer_end() / 60))
except KeyboardInterrupt as e:
print(e)
print('Ended training')
return metrics
```
Plot a metric for both train and test.
```
def plot_train_test(train, test, title, y_title):
plt.plot(range(len(train)), train, label = 'train')
plt.plot(range(len(test)), test, label = 'test')
plt.xlabel('Epochs')
plt.ylabel(y_title)
plt.title(title)
plt.legend()
plt.show()
```
Plot precision - recall curve
```
def plot_precision_recall(metrics):
plt.scatter(metrics['prec_train'], metrics['rec_train'], label = 'train')
plt.scatter(metrics['prec_test'], metrics['rec_test'], label = 'test')
plt.legend()
plt.title('Precision-Recall')
plt.xlabel('Precision')
plt.ylabel('Recall')
```
Train a model for several epochs. The steps_learning parameter is a list of tuples. Each tuple specifies the steps and the learning rate.
```
def do_train(model, loader_train, loader_test, checkpoint_name, steps_learning):
for steps, learn_rate in steps_learning:
metrics = train_eval(t.optim.Adam(model.parameters(), lr = learn_rate, weight_decay = 0), model, loader_train, loader_test, checkpoint_name, steps)
print('Best test accuracy :', max(metrics['acc_test']))
plot_train_test(metrics['losses_train'], metrics['losses_test'], 'Loss (lr = {})'.format(learn_rate))
plot_train_test(metrics['acc_train'], metrics['acc_test'], 'Accuracy (lr = {})'.format(learn_rate))
```
Perform actual training.
```
def do_train(model, loader_train, loader_test, checkpoint_name, steps_learning):
t.cuda.empty_cache()
for steps, learn_rate in steps_learning:
metrics = train_eval(t.optim.Adam(model.parameters(), lr = learn_rate, weight_decay = 0), model, loader_train, loader_test, checkpoint_name, steps)
index_max = np.array(metrics['acc_test']).argmax()
print('Best test accuracy :', metrics['acc_test'][index_max])
print('Corresponding precision :', metrics['prec_test'][index_max])
print('Corresponding recall :', metrics['rec_test'][index_max])
print('Corresponding f1 score :', metrics['f_score_test'][index_max])
plot_train_test(metrics['losses_train'], metrics['losses_test'], 'Loss (lr = {})'.format(learn_rate), 'Loss')
plot_train_test(metrics['acc_train'], metrics['acc_test'], 'Accuracy (lr = {})'.format(learn_rate), 'Accuracy')
plot_train_test(metrics['prec_train'], metrics['prec_test'], 'Precision (lr = {})'.format(learn_rate), 'Precision')
plot_train_test(metrics['rec_train'], metrics['rec_test'], 'Recall (lr = {})'.format(learn_rate), 'Recall')
plot_train_test(metrics['f_score_train'], metrics['f_score_test'], 'F1 Score (lr = {})'.format(learn_rate), 'F1 Score')
plot_precision_recall(metrics)
do_train(model_simple, loader_train_simple_img, loader_test_simple_img, 'simple_1', [(50, 1e-4)])
# checkpoint = t.load('/content/checkpint simple_1.tar')
# model_simple.load_state_dict(checkpoint['model'])
```
```
%matplotlib inline
```
# Simple Oscillator Example
This example shows the most simple way of using a solver.
We solve free vibration of a simple oscillator:
$$m \ddot{u} + k u = 0,\quad u(0) = u_0,\quad \dot{u}(0) = \dot{u}_0$$
using the CVODE solver. An analytical solution exists, given by
$$u(t) = u_0 \cos\left(\sqrt{\frac{k}{m}} t\right)+\frac{\dot{u}_0}{\sqrt{\frac{k}{m}}} \sin\left(\sqrt{\frac{k}{m}} t\right)$$
```
from __future__ import print_function
import matplotlib.pyplot as plt
import numpy as np
from scikits.odes import ode
#data of the oscillator
k = 4.0
m = 1.0
#initial position and speed data on t=0, x[0] = u, x[1] = \dot{u}, xp = \dot{x}
initx = [1, 0.1]
```
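For the comparisons printed below, the analytical solution can be wrapped in a small helper (a convenience added here, not part of the original example):
```
def u_exact(t):
    """Analytical solution u(t) for the parameters k, m and initial data initx."""
    omega = np.sqrt(k / m)
    return initx[0] * np.cos(omega * t) + initx[1] * np.sin(omega * t) / omega
```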
We need a first order system, so convert the second order system
$$m \ddot{u} + k u = 0,\quad u(0) = u_0,\quad \dot{u}(0) = \dot{u}_0$$
into
$$\left\{ \begin{array}{l}
\dot u = v\\
\dot v = \ddot u = -\frac{ku}{m}
\end{array} \right.$$
You need to define a function that computes the right hand side of above equation:
```
def rhseqn(t, x, xdot):
""" we create rhs equations for the problem"""
xdot[0] = x[1]
xdot[1] = - k/m * x[0]
```
To solve the ODE you define an ode object, specify the solver to use, here cvode, and pass the right hand side function. You request the solution at specific timepoints by passing an array of times to the solve member.
```
solver = ode('cvode', rhseqn, old_api=False)
solution = solver.solve([0., 1., 2.], initx)
print('\n t Solution Exact')
print('------------------------------------')
for t, u in zip(solution.values.t, solution.values.y):
print('{0:>4.0f} {1:15.6g} {2:15.6g}'.format(t, u[0],
initx[0]*np.cos(np.sqrt(k/m)*t)+initx[1]*np.sin(np.sqrt(k/m)*t)/np.sqrt(k/m)))
```
You can continue the solver by passing further times. Calling the solve routine reinits the solver, so you can restart at whatever time. To continue from the last computed solution, pass the last obtained time and solution.
**Note:** The solver performs better if it can take into account history information, so avoid calling solve to continue computation!
In general, you must check for errors using the errors output of solve.
```
#Solve over the next hour by continuation
times = np.linspace(0, 3600, 61)
times[0] = solution.values.t[-1]
solution = solver.solve(times, solution.values.y[-1])
if solution.errors.t:
print ('Error: ', solution.message, 'Error at time', solution.errors.t)
print ('Computed Solutions:')
print('\n t Solution Exact')
print('------------------------------------')
for t, u in zip(solution.values.t, solution.values.y):
print('{0:>4.0f} {1:15.6g} {2:15.6g}'.format(t, u[0],
initx[0]*np.cos(np.sqrt(k/m)*t)+initx[1]*np.sin(np.sqrt(k/m)*t)/np.sqrt(k/m)))
```
The solution fails at a time around 24 seconds. Errors can be due to many things. Here, however, the reason is simple: the requested output times are too far apart for the default settings. Increasing the number of internal steps the solver is allowed to take between outputs will fix this. This is the **max_steps** option of cvode:
```
solver = ode('cvode', rhseqn, old_api=False, max_steps=5000)
solution = solver.solve(times, solution.values.y[-1])
if solution.errors.t:
print ('Error: ', solution.message, 'Error at time', solution.errors.t)
print ('Computed Solutions:')
print('\n t Solution Exact')
print('------------------------------------')
for t, u in zip(solution.values.t, solution.values.y):
print('{0:>4.0f} {1:15.6g} {2:15.6g}'.format(t, u[0],
initx[0]*np.cos(np.sqrt(k/m)*t)+initx[1]*np.sin(np.sqrt(k/m)*t)/np.sqrt(k/m)))
```
To plot the simple oscillator, we show a (t,x) plot of the solution. Doing this over 60 seconds can be done as follows:
```
#plot of the oscillator
solver = ode('cvode', rhseqn, old_api=False)
times = np.linspace(0,60,600)
solution = solver.solve(times, initx)
plt.plot(solution.values.t,[x[0] for x in solution.values.y])
plt.xlabel('Time [s]')
plt.ylabel('Position [m]')
plt.show()
```
You can refine the tolerances from their defaults to obtain more accurate solutions
```
options1= {'rtol': 1e-6, 'atol': 1e-12, 'max_steps': 50000} # default rtol and atol
options2= {'rtol': 1e-15, 'atol': 1e-25, 'max_steps': 50000}
solver1 = ode('cvode', rhseqn, old_api=False, **options1)
solver2 = ode('cvode', rhseqn, old_api=False, **options2)
solution1 = solver1.solve([0., 1., 60], initx)
solution2 = solver2.solve([0., 1., 60], initx)
print('\n t Solution1 Solution2 Exact')
print('-----------------------------------------------------')
for t, u1, u2 in zip(solution1.values.t, solution1.values.y, solution2.values.y):
print('{0:>4.0f} {1:15.8g} {2:15.8g} {3:15.8g}'.format(t, u1[0], u2[0],
initx[0]*np.cos(np.sqrt(k/m)*t)+initx[1]*np.sin(np.sqrt(k/m)*t)/np.sqrt(k/m)))
```
# Simple Oscillator Example: Stepwise running
When using the *solve* method, you solve over a period of time that you decided on beforehand. In some problems you might instead want to decide, based on the computed output, when to stop. Then you use the *step* method. The same example as above can be solved with the step method as follows.
You define the ode object selecting the cvode solver. You initialize the solver with the begin time and initial conditions using *init_step*. You compute solutions going forward with the *step* method.
```
solver = ode('cvode', rhseqn, old_api=False)
time = 0.
solver.init_step(time, initx)
plott = []
plotx = []
while True:
time += 0.1
# fix roundoff error at end
if time > 60: time = 60
solution = solver.step(time)
if solution.errors.t:
print ('Error: ', solution.message, 'Error at time', solution.errors.t)
break
#we store output for plotting
plott.append(solution.values.t)
plotx.append(solution.values.y[0])
if time >= 60:
break
plt.plot(plott,plotx)
plt.xlabel('Time [s]')
plt.ylabel('Position [m]')
plt.show()
```
The solver interpolates solutions to return the solution at the required output times:
```
print ('plott length:', len(plott), ', last computation times:', plott[-15:]);
```
# Simple Oscillator Example: Internal Solver Stepwise running
When using the *solve* method, you solve over a period of time you decided on beforehand. With the *step* method you solve by default towards a desired output time, after which you can continue solving the problem.
For full control, you can also compute the problem using the solver's internal steps. This is not advised, as the number of returned steps can be very large, **slowing down** the computation enormously. If you want this nevertheless, you can achieve it with the *one_step_compute* option. Like this:
```
solver = ode('cvode', rhseqn, old_api=False, one_step_compute=True)
time = 0.
solver.init_step(time, initx)
plott = []
plotx = []
while True:
solution = solver.step(60)
if solution.errors.t:
print ('Error: ', solution.message, 'Error at time', solution.errors.t)
break
#we store output for plotting
plott.append(solution.values.t)
plotx.append(solution.values.y[0])
if solution.values.t >= 60:
#back up to 60
solver.set_options(one_step_compute=False)
solution = solver.step(60)
plott[-1] = solution.values.t
plotx[-1] = solution.values.y[0]
break
plt.plot(plott,plotx)
plt.xlabel('Time [s]')
plt.ylabel('Position [m]')
plt.show()
```
By inspection of the returned times you can see how efficient the solver can solve this problem:
```
print ('plott length:', len(plott), ', last computation times:', plott[-15:]);
```
# Siamese networks with TensorFlow 2.0/Keras
In this example, we'll implement a simple siamese network system, which verifies whether a pair of MNIST images is of the same class (true) or not (false).
_This example is partially based on_ [https://github.com/keras-team/keras/blob/master/examples/mnist_siamese.py](https://github.com/keras-team/keras/blob/master/examples/mnist_siamese.py)
Let's start with the imports
```
import random
import numpy as np
import tensorflow as tf
```
We'll continue with the `create_pairs` function, which creates a training dataset with an equal number of true/false pairs for each MNIST class.
```
def create_pairs(inputs: np.ndarray, labels: np.ndarray):
"""Create equal number of true/false pairs of samples"""
num_classes = 10
digit_indices = [np.where(labels == i)[0] for i in range(num_classes)]
pairs = list()
    labels = list()  # note: reuses the parameter name for the list of output pair labels
n = min([len(digit_indices[d]) for d in range(num_classes)]) - 1
for d in range(num_classes):
for i in range(n):
z1, z2 = digit_indices[d][i], digit_indices[d][i + 1]
pairs += [[inputs[z1], inputs[z2]]]
inc = random.randrange(1, num_classes)
dn = (d + inc) % num_classes
z1, z2 = digit_indices[d][i], digit_indices[dn][i]
pairs += [[inputs[z1], inputs[z2]]]
labels += [1, 0]
return np.array(pairs), np.array(labels, dtype=np.float32)
```
Next, we'll define the base network of the siamese system:
```
def create_base_network():
"""The shared encoding part of the siamese network"""
return tf.keras.models.Sequential([
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dropout(0.1),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dropout(0.1),
tf.keras.layers.Dense(64, activation='relu'),
])
```
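As a quick sanity check (a sketch, not part of the original example), we can pass a dummy batch through the shared encoder to confirm that each 28x28 image is mapped to a 64-dimensional embedding:
```
encoder = create_base_network()
dummy_batch = tf.zeros((1, 28, 28))
print(encoder(dummy_batch).shape)  # expected: (1, 64)
```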
Next, let's load the regular MNIST training and validation sets and create true/false pairs out of them:
```
# Load the train and test MNIST datasets
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.astype(np.float32)
x_test = x_test.astype(np.float32)
x_train /= 255
x_test /= 255
input_shape = x_train.shape[1:]
# Create true/false training and testing pairs
train_pairs, tr_labels = create_pairs(x_train, y_train)
test_pairs, test_labels = create_pairs(x_test, y_test)
```
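Before building the network, we can verify that the generated pairs are balanced: since `create_pairs` adds one true and one false pair per iteration, about half of the labels should be 1:
```
print('train pairs:', train_pairs.shape, 'train labels:', tr_labels.shape)
print('fraction of true pairs:', tr_labels.mean())  # should be ~0.5
```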
Then, we'll build the siamese system, which includes the `base_network`, the 2 siamese paths `encoder_a` and `encoder_b`, the `l1_dist` measure, and the combined `model`:
```
# Create the siamese network
# Start from the shared layers
base_network = create_base_network()
# Create first half of the siamese system
input_a = tf.keras.layers.Input(shape=input_shape)
# Note how we reuse the base_network in both halves
encoder_a = base_network(input_a)
# Create the second half of the siamese system
input_b = tf.keras.layers.Input(shape=input_shape)
encoder_b = base_network(input_b)
# Create the distance measure
l1_dist = tf.keras.layers.Lambda(
lambda embeddings: tf.keras.backend.abs(embeddings[0] - embeddings[1])) \
([encoder_a, encoder_b])
# Final fc layer with a single logistic output for the binary classification
flattened_weighted_distance = tf.keras.layers.Dense(1, activation='sigmoid') \
(l1_dist)
# Build the model
model = tf.keras.models.Model([input_a, input_b], flattened_weighted_distance)
```
Finally, we can train the model and check the validation accuracy, which reaches 99.37%:
```
# Train
model.compile(loss='binary_crossentropy',
optimizer=tf.keras.optimizers.Adam(),
metrics=['accuracy'])
model.fit([train_pairs[:, 0], train_pairs[:, 1]], tr_labels,
batch_size=128,
epochs=20,
validation_data=([test_pairs[:, 0], test_pairs[:, 1]], test_labels))
```
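Once training has finished, the trained `model` can be used directly to verify new pairs. Below is a minimal sketch (not part of the original example; the 0.5 decision threshold is an assumption) that scores the first two test pairs created earlier:
```
scores = model.predict([test_pairs[:2, 0], test_pairs[:2, 1]])
for score, label in zip(scores.ravel(), test_labels[:2]):
    print('similarity score: {:.3f} -> same class: {} (ground truth: {})'.format(
        score, score > 0.5, bool(label)))
```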
# Hierarchical Clustering
**Hierarchical clustering** refers to a class of clustering methods that seek to build a **hierarchy** of clusters, in which some clusters contain others. In this assignment, we will explore a top-down approach, recursively bipartitioning the data using k-means.
**Note to Amazon EC2 users**: To conserve memory, make sure to stop all the other notebooks before running this notebook.
## Import packages
```
from __future__ import print_function # to conform python 2.x print to python 3.x
import turicreate
import matplotlib.pyplot as plt
import numpy as np
import sys
import os
import time
from scipy.sparse import csr_matrix
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances
%matplotlib inline
```
## Load the Wikipedia dataset
```
wiki = turicreate.SFrame('people_wiki.sframe/')
```
As we did in previous assignments, let's extract the TF-IDF features:
```
wiki['tf_idf'] = turicreate.text_analytics.tf_idf(wiki['text'])
```
To run k-means on this dataset, we should convert the data matrix into a sparse matrix.
```
from em_utilities import sframe_to_scipy # converter
# This will take about a minute or two.
wiki = wiki.add_row_number()
tf_idf, map_word_to_index = sframe_to_scipy(wiki, 'tf_idf')
```
To be consistent with the k-means assignment, let's normalize all vectors to have unit norm.
```
from sklearn.preprocessing import normalize
tf_idf = normalize(tf_idf)
```
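As a quick check (a sketch, not part of the original assignment), we can confirm that the rows of the sparse matrix now have unit L2 norm:
```
row_norms = np.sqrt(tf_idf.multiply(tf_idf).sum(axis=1))
print(row_norms[:5])
```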
## Bipartition the Wikipedia dataset using k-means
Recall our workflow for clustering text data with k-means:
1. Load the dataframe containing a dataset, such as the Wikipedia text dataset.
2. Extract the data matrix from the dataframe.
3. Run k-means on the data matrix with some value of k.
4. Visualize the clustering results using the centroids, cluster assignments, and the original dataframe. We keep the original dataframe around because the data matrix does not keep auxiliary information (in the case of the text dataset, the title of each article).
Let us modify the workflow to perform bipartitioning:
1. Load the dataframe containing a dataset, such as the Wikipedia text dataset.
2. Extract the data matrix from the dataframe.
3. Run k-means on the data matrix with k=2.
4. Divide the data matrix into two parts using the cluster assignments.
5. Divide the dataframe into two parts, again using the cluster assignments. This step is necessary to allow for visualization.
6. Visualize the bipartition of data.
We'd like to be able to repeat Steps 3-6 multiple times to produce a **hierarchy** of clusters such as the following:
```
                        (root)
                           |
              +------------+-------------+
              |                          |
           Cluster                    Cluster
       +------+-----+             +------+-----+
       |            |             |            |
    Cluster      Cluster       Cluster      Cluster
```
Each **parent cluster** is bipartitioned to produce two **child clusters**. At the very top is the **root cluster**, which consists of the entire dataset.
Now we write a wrapper function to bipartition a given cluster using k-means. There are three variables that together comprise the cluster:
* `dataframe`: a subset of the original dataframe that correspond to member rows of the cluster
* `matrix`: same set of rows, stored in sparse matrix format
* `centroid`: the centroid of the cluster (not applicable for the root cluster)
Rather than passing around the three variables separately, we package them into a Python dictionary. The wrapper function takes a single dictionary (representing a parent cluster) and returns two dictionaries (representing the child clusters).
```
def bipartition(cluster, maxiter=400, num_runs=4, seed=None):
'''cluster: should be a dictionary containing the following keys
* dataframe: original dataframe
* matrix: same data, in matrix format
* centroid: centroid for this particular cluster'''
data_matrix = cluster['matrix']
dataframe = cluster['dataframe']
# Run k-means on the data matrix with k=2. We use scikit-learn here to simplify workflow.
kmeans_model = KMeans(n_clusters=2, max_iter=maxiter, n_init=num_runs, random_state=seed, n_jobs=1)
kmeans_model.fit(data_matrix)
centroids, cluster_assignment = kmeans_model.cluster_centers_, kmeans_model.labels_
# Divide the data matrix into two parts using the cluster assignments.
data_matrix_left_child, data_matrix_right_child = data_matrix[cluster_assignment==0], \
data_matrix[cluster_assignment==1]
# Divide the dataframe into two parts, again using the cluster assignments.
cluster_assignment_sa = turicreate.SArray(cluster_assignment) # minor format conversion
dataframe_left_child, dataframe_right_child = dataframe[cluster_assignment_sa==0], \
dataframe[cluster_assignment_sa==1]
# Package relevant variables for the child clusters
cluster_left_child = {'matrix': data_matrix_left_child,
'dataframe': dataframe_left_child,
'centroid': centroids[0]}
cluster_right_child = {'matrix': data_matrix_right_child,
'dataframe': dataframe_right_child,
'centroid': centroids[1]}
return (cluster_left_child, cluster_right_child)
```
The following cell performs bipartitioning of the Wikipedia dataset. Allow 2+ minutes to finish.
Note. For the purpose of the assignment, we set an explicit seed (`seed=0` in the cell below) to produce identical outputs for every run. In practical applications, you might want to use different random seeds for all runs.
```
%%time
wiki_data = {'matrix': tf_idf, 'dataframe': wiki} # no 'centroid' for the root cluster
left_child, right_child = bipartition(wiki_data, maxiter=100, num_runs=1, seed=0)
```
Let's examine the contents of one of the two clusters, which we call the `left_child`, referring to the tree visualization above.
```
left_child
```
And here is the content of the other cluster we named `right_child`.
```
right_child
```
## Visualize the bipartition
We provide you with a modified version of the visualization function from the k-means assignment. For each cluster, we print the top 5 words with highest TF-IDF weights in the centroid and display excerpts for the 8 nearest neighbors of the centroid.
```
def display_single_tf_idf_cluster(cluster, map_index_to_word):
    '''map_index_to_word: SFrame specifying the mapping between words and column indices'''
wiki_subset = cluster['dataframe']
tf_idf_subset = cluster['matrix']
centroid = cluster['centroid']
# Print top 5 words with largest TF-IDF weights in the cluster
idx = centroid.argsort()[::-1]
for i in range(5):
        print('{0}:{1:.3f}'.format(map_index_to_word['category'][idx[i]], centroid[idx[i]]), end=' ')
print('')
# Compute distances from the centroid to all data points in the cluster.
distances = pairwise_distances(tf_idf_subset, [centroid], metric='euclidean').flatten()
# compute nearest neighbors of the centroid within the cluster.
nearest_neighbors = distances.argsort()
# For 8 nearest neighbors, print the title as well as first 180 characters of text.
# Wrap the text at 80-character mark.
for i in range(8):
text = ' '.join(wiki_subset[nearest_neighbors[i]]['text'].split(None, 25)[0:25])
print('* {0:50s} {1:.5f}\n {2:s}\n {3:s}'.format(wiki_subset[nearest_neighbors[i]]['name'],
distances[nearest_neighbors[i]], text[:90], text[90:180] if len(text) > 90 else ''))
print('')
```
Let's visualize the two child clusters:
```
display_single_tf_idf_cluster(left_child, map_word_to_index)
display_single_tf_idf_cluster(right_child, map_word_to_index)
```
The right cluster consists of athletes and artists (singers and actors/actresses), whereas the left cluster consists of non-athletes and non-artists. So far, we have a single-level hierarchy consisting of two clusters, as follows:
```
                             Wikipedia
                                 +
                                 |
          +----------------------+--------------------+
          |                                           |
          +                                           +
 Non-athletes/artists                         Athletes/artists
```
Is this hierarchy good enough? **When building a hierarchy of clusters, we must keep our particular application in mind.** For instance, we might want to build a **directory** for Wikipedia articles. A good directory would let you quickly narrow down your search to a small set of related articles. The categories of athletes and non-athletes are too general to facilitate efficient search. For this reason, we decide to build another level into our hierarchy of clusters with the goal of getting more specific cluster structure at the lower level. To that end, we subdivide both the `athletes/artists` and `non-athletes/artists` clusters.
## Perform recursive bipartitioning
### Cluster of athletes and artists
To help identify the clusters we've built so far, let's give them easy-to-read aliases:
```
non_athletes_artists = left_child
athletes_artists = right_child
```
Using the bipartition function, we produce two child clusters of the athlete cluster:
```
# Bipartition the cluster of athletes and artists
left_child_athletes_artists, right_child_athletes_artists = bipartition(athletes_artists, maxiter=100, num_runs=6, seed=1)
```
The left child cluster mainly consists of athletes:
```
display_single_tf_idf_cluster(left_child_athletes_artists, map_word_to_index)
```
On the other hand, the right child cluster consists mainly of artists (singers and actors/actresses):
```
display_single_tf_idf_cluster(right_child_athletes_artists, map_word_to_index)
```
Our hierarchy of clusters now looks like this:
```
                             Wikipedia
                                 +
                                 |
          +----------------------+--------------------+
          |                                           |
          +                                           +
 Non-athletes/artists                         Athletes/artists
                                                      +
                                                      |
                                           +----------+----------+
                                           |                     |
                                           +                     +
                                        athletes              artists
```
Should we keep subdividing the clusters? If so, which cluster should we subdivide? To answer this question, we again think about our application. Since we organize our directory by topics, it would be nice to have topics that are about as coarse as each other. For instance, if one cluster is about baseball, we expect some other clusters about football, basketball, volleyball, and so forth. That is, **we would like to achieve similar level of granularity for all clusters.**
Both the athletes and artists node can be subdivided more, as each one can be divided into more descriptive professions (singer/actress/painter/director, or baseball/football/basketball, etc.). Let's explore subdividing the athletes cluster further to produce finer child clusters.
Let's give the clusters aliases as well:
```
athletes = left_child_athletes_artists
artists = right_child_athletes_artists
```
### Cluster of athletes
In answering the following quiz question, take a look at the topics represented in the top documents (those closest to the centroid), as well as the list of words with highest TF-IDF weights.
Let us bipartition the cluster of athletes.
```
left_child_athletes, right_child_athletes = bipartition(athletes, maxiter=100, num_runs=6, seed=1)
display_single_tf_idf_cluster(left_child_athletes, map_word_to_index)
display_single_tf_idf_cluster(right_child_athletes, map_word_to_index)
```
**Quiz Question**. Which diagram best describes the hierarchy right after splitting the `athletes` cluster? Refer to the quiz form for the diagrams.
**Caution**. The granularity criterion is an imperfect heuristic and must be taken with a grain of salt. It takes a lot of manual intervention to obtain a good hierarchy of clusters.
* **If a cluster is highly mixed, the top articles and words may not convey the full picture of the cluster.** Thus, we may be misled if we judge the purity of clusters solely by their top documents and words.
* **Many interesting topics are hidden somewhere inside the clusters but do not appear in the visualization.** We may need to subdivide further to discover new topics. For instance, subdividing the `ice_hockey_football` cluster led to the appearance of runners and golfers.
### Cluster of non-athletes
Now let us subdivide the cluster of non-athletes.
```
%%time
# Bipartition the cluster of non-athletes
left_child_non_athletes_artists, right_child_non_athletes_artists = bipartition(non_athletes_artists, maxiter=100, num_runs=3, seed=1)
display_single_tf_idf_cluster(left_child_non_athletes_artists, map_word_to_index)
display_single_tf_idf_cluster(right_child_non_athletes_artists, map_word_to_index)
```
The clusters are not as clear, but the left cluster has a tendency to show important female figures, and the right one to show politicians and government officials.
Let's divide them further.
```
female_figures = left_child_non_athletes_artists
politicians_etc = right_child_non_athletes_artists
# Note: the two assignments below override the pair above,
# swapping which child cluster is treated as which.
politicians_etc = left_child_non_athletes_artists
female_figures = right_child_non_athletes_artists
```
**Quiz Question**. Let us bipartition the clusters `female_figures` and `politicians`. Which diagram best describes the resulting hierarchy of clusters for the non-athletes? Refer to the quiz for the diagrams.
**Note**. Use `maxiter=100, num_runs=6, seed=1` for consistency of output.
```
left_female_figures, right_female_figures = bipartition(female_figures, maxiter=100, num_runs=6, seed=1)
left_politicians_etc, right_politicians_etc = bipartition(politicians_etc, maxiter=100, num_runs=6, seed=1)
display_single_tf_idf_cluster(left_female_figures, map_word_to_index)
display_single_tf_idf_cluster(right_female_figures, map_word_to_index)
display_single_tf_idf_cluster(left_politicians_etc, map_word_to_index)
display_single_tf_idf_cluster(right_politicians_etc, map_word_to_index)
```
# Data Attribute Recommendation - TechED 2020 INT260
Getting started with the Python SDK for the Data Attribute Recommendation service.
## Business Scenario
We will consider a business scenario involving product master data. The creation and maintenance of this product master data requires the careful manual selection of the correct categories for a given product from a pre-defined hierarchy of product categories.
In this workshop, we will explore how to automate this tedious manual task with the Data Attribute Recommendation service.
<video controls src="videos/dar_prediction_material_table.mp4"/>
This workshop will cover:
* Data Upload
* Model Training and Deployment
* Inference Requests
We will work through a basic example of how to achieve these tasks using the [Python SDK for Data Attribute Recommendation](https://github.com/SAP/data-attribute-recommendation-python-sdk).
*Note: if you are doing several runs of this notebook on a trial account, you may see errors stating 'The resource can no longer be used. Usage limit has been reached'. It can be beneficial to [clean up the service instance](#Cleaning-up-a-service-instance) to free up limited trial resources acquired by an earlier run of the notebook. [Some limits](https://help.sap.com/viewer/105bcfd88921418e8c29b24a7a402ec3/SHIP/en-US/c03b561eea1744c9b9892b416037b99a.html) cannot be reset this way.*
## Table of Contents
* [Exercise 01.1](#Exercise-01.1) - Installing the SDK and preparing the service key
* [Creating a service instance and key on BTP Trial](#Creating-a-service-instance-and-key)
* [Installing the SDK](#Installing-the-SDK)
* [Loading the service key into your Jupyter Notebook](#Loading-the-service-key-into-your-Jupyter-Notebook)
* [Exercise 01.2](#Exercise-01.2) - Uploading the data
* [Exercise 01.3](#Exercise-01.3) - Training the model
* [Exercise 01.4](#Exercise-01.4) - Deploying the Model and predicting labels
* [Resources](#Resources) - Additional reading
* [Cleaning up a service instance](#Cleaning-up-a-service-instance) - Clean up all resources on the service instance
* [Optional Exercises](#Optional-Exercises) - Optional exercises
## Requirements
See the [README in the Github repository for this workshop](https://github.com/SAP-samples/teched2020-INT260/blob/master/exercises/ex1-DAR/README.md).
# Exercise 01.1
*Back to [table of contents](#Table-of-Contents)*
In exercise 01.1, we will install the SDK and prepare the service key.
## Creating a service instance and key on BTP Trial
Please log in to your trial account: https://cockpit.eu10.hana.ondemand.com/trial/
In the your global account screen, go to the "Boosters" tab:

*Boosters are only available on the Trial landscape. If you are using a production environment, please follow this tutorial to manually [create a service instance and a service key](https://developers.sap.com/tutorials/cp-aibus-dar-service-instance.html)*.
In the Boosters tab, enter "Data Attribute Recommendation" into the search box. Then, select the
service tile from the search results:

The resulting screen shows details of the booster pack. Here, click the "Start" button and wait a few seconds.

Once the booster is finished, click the "go to Service Key" link to obtain your service key.

Finally, download the key and save it to disk.

## Installing the SDK
The Data Attribute Recommendation SDK is available from the Python package repository. It can be installed with the standard `pip` tool:
```
! pip install data-attribute-recommendation-sdk
```
*Note: If you are not using a Jupyter notebook, but instead a regular Python development environment, we recommend using a Python virtual environment to set up your development environment. Please see [the dedicated tutorial to learn how to install the SDK inside a Python virtual environment](https://developers.sap.com/tutorials/cp-aibus-dar-sdk-setup.html).*
## Loading the service key into your Jupyter Notebook
Once you have downloaded the service key from the Cockpit, upload it to your notebook environment. The service key must be uploaded to the same directory where `teched2020-INT260_Data_Attribute_Recommendation.ipynb` is stored.
We first navigate to the file browser in Jupyter. On the top of your Jupyter notebook, right-click on the Jupyter logo and open in a new tab.

**In the file browser, navigate to the directory where the `teched2020-INT260_Data_Attribute_Recommendation.ipynb` notebook file is stored. The service key must reside next to this file.**
In the Jupyter file browser, click the **Upload** button (1). In the file selection dialog that opens, select the `defaultKey_*.json` file you downloaded previously from the SAP Cloud Platform Cockpit. Rename the file to `key.json`.
Confirm the upload by clicking on the second **Upload** button (2).

The service key contains your credentials to access the service. Please treat this as carefully as you would treat any password. We keep the service key as a separate file outside this notebook to avoid leaking the secret credentials.
The service key is a JSON file. We will load this file once and use the credentials throughout this workshop.
```
# First, set up logging so we can see the actions performed by the SDK behind the scenes
import logging
import sys
logging.basicConfig(level=logging.INFO, stream=sys.stdout)
from pprint import pprint # for nicer output formatting
import json
import os
if not os.path.exists("key.json"):
msg = "key.json is not found. Please follow instructions above to create a service key of"
msg += " Data Attribute Recommendation. Then, upload it into the same directory where"
msg += " this notebook is saved."
print(msg)
raise ValueError(msg)
with open("key.json") as file_handle:
key = file_handle.read()
SERVICE_KEY = json.loads(key)
```
## Summary Exercise 01.1
In exercise 01.1, we have covered the following topics:
* How to install the Python SDK for Data Attribute Recommendation
* How to obtain a service key for the Data Attribute Recommendation service
# Exercise 01.2
*Back to [table of contents](#Table-of-Contents)*
*To perform this exercise, you need to execute the code in all previous exercises.*
In exercise 01.2, we will upload our demo dataset to the service.
## The Dataset
### Obtaining the Data
The dataset we use in this workshop is a CSV file containing product master data. The original data was released by BestBuy, a retail company, under an [open license](https://github.com/SAP-samples/data-attribute-recommendation-postman-tutorial-sample#data-and-license). This makes it ideal for first experiments with the Data Attribute Recommendation service.
The dataset can be downloaded directly from Github using the following command:
```
! wget -O bestBuy.csv "https://raw.githubusercontent.com/SAP-samples/data-attribute-recommendation-postman-tutorial-sample/master/Tutorial_Example_Dataset.csv"
# If you receive a "command not found" error (i.e. on Windows), try curl instead of wget:
# ! curl -o bestBuy.csv "https://raw.githubusercontent.com/SAP-samples/data-attribute-recommendation-postman-tutorial-sample/master/Tutorial_Example_Dataset.csv"
```
Let's inspect the data:
```
# if you are experiencing an import error here, run the following in a new cell:
# ! pip install pandas
import pandas as pd
df = pd.read_csv("bestBuy.csv")
df.head(5)
print()
print(f"Data has {df.shape[0]} rows and {df.shape[1]} columns.")
```
The CSV contains several products. For each product, the description, the manufacturer and the price are given. Additionally, three levels of the product hierarchy are given.
The first product, a set of AAA batteries, is located in the following place in the product hierarchy:
```
level1_category: Connected Home & Housewares
    |
    level2_category: Housewares
        |
        level3_category: Household Batteries
```
We will use the Data Attribute Recommendation service to predict the categories for a given product based on its **description**, **manufacturer** and **price**.
### Creating the DatasetSchema
We first have to describe the shape of our data by creating a DatasetSchema. This schema informs the service about the individual column types found in the CSV. We also describe which columns are the targets used for training. These columns will later be predicted. In our case, these are the three category columns.
The service currently supports three column types: **text**, **category** and **number**. For prediction, only **category** is currently supported.
A DatasetSchema for the BestBuy dataset looks as follows:
```json
{
    "features": [
        {"label": "manufacturer", "type": "CATEGORY"},
        {"label": "description", "type": "TEXT"},
        {"label": "price", "type": "NUMBER"}
    ],
    "labels": [
        {"label": "level1_category", "type": "CATEGORY"},
        {"label": "level2_category", "type": "CATEGORY"},
        {"label": "level3_category", "type": "CATEGORY"}
    ],
    "name": "bestbuy-category-prediction"
}
```
We will now upload this DatasetSchema to the Data Attribute Recommendation service. The SDK provides the
[`DataManagerClient.create_dataset_schema()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.html#sap.aibus.dar.client.data_manager_client.DataManagerClient.create_dataset_schema) method for this purpose.
```
from sap.aibus.dar.client.data_manager_client import DataManagerClient
dataset_schema = {
"features": [
{"label": "manufacturer", "type": "CATEGORY"},
{"label": "description", "type": "TEXT"},
{"label": "price", "type": "NUMBER"}
],
"labels": [
{"label": "level1_category", "type": "CATEGORY"},
{"label": "level2_category", "type": "CATEGORY"},
{"label": "level3_category", "type": "CATEGORY"}
],
"name": "bestbuy-category-prediction",
}
data_manager = DataManagerClient.construct_from_service_key(SERVICE_KEY)
response = data_manager.create_dataset_schema(dataset_schema)
dataset_schema_id = response["id"]
print()
print("DatasetSchema created:")
pprint(response)
print()
print(f"DatasetSchema ID: {dataset_schema_id}")
```
The API responds with the newly created DatasetSchema resource. The service assigned an ID to the schema. We save this ID in a variable, as we will need it when we upload the data.
### Uploading the Data to the service
The [`DataManagerClient`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.html#sap.aibus.dar.client.data_manager_client.DataManagerClient) class is also responsible for uploading data to the service. This data must conform to an existing DatasetSchema. After uploading the data, the service will validate the Dataset against the DatasetSchema in a background process. The data must be a CSV file, which can optionally be `gzip` compressed.
We will now upload our `bestBuy.csv` file, using the DatasetSchema which we created earlier.
Data upload is a two-step process. We first create the Dataset using [`DataManagerClient.create_dataset()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.html#sap.aibus.dar.client.data_manager_client.DataManagerClient.create_dataset). Then we can upload data to the Dataset using the [`DataManagerClient.upload_data_to_dataset()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.html#sap.aibus.dar.client.data_manager_client.DataManagerClient.upload_data_to_dataset) method.
```
dataset_resource = data_manager.create_dataset("my-bestbuy-dataset", dataset_schema_id)
dataset_id = dataset_resource["id"]
print()
print("Dataset created:")
pprint(dataset_resource)
print()
print(f"Dataset ID: {dataset_id}")
# Compress file first for a faster upload
! gzip -9 -c bestBuy.csv > bestBuy.csv.gz
```
Note that the data upload can take a few minutes. Please do not restart the process while the cell is still running.
```
# Open in binary mode.
with open('bestBuy.csv.gz', 'rb') as file_handle:
dataset_resource = data_manager.upload_data_to_dataset(dataset_id, file_handle)
print()
print("Dataset after data upload:")
print()
pprint(dataset_resource)
```
Note that the Dataset status changed from `NO_DATA` to `VALIDATING`.
Dataset validation is a background process. The status will eventually change from `VALIDATING` to `SUCCEEDED`.
The SDK provides the [`DataManagerClient.wait_for_dataset_validation()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.html#sap.aibus.dar.client.data_manager_client.DataManagerClient.wait_for_dataset_validation) method to poll for the Dataset validation.
```
dataset_resource = data_manager.wait_for_dataset_validation(dataset_id)
print()
print("Dataset after validation has finished:")
print()
pprint(dataset_resource)
```
If the status is `FAILED` instead of `SUCCEEDED`, then the `validationMessage` will contain details about the validation failure.
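A minimal sketch of how you might surface the failure reason programmatically; it assumes the resource dictionary exposes the `status` and `validationMessage` keys referred to above:
```
if dataset_resource["status"] != "SUCCEEDED":
    print("Dataset validation did not succeed:", dataset_resource.get("validationMessage"))
else:
    print("Dataset validation succeeded.")
```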
To better understand the Dataset lifecycle, refer to the [corresponding document on help.sap.com](https://help.sap.com/viewer/105bcfd88921418e8c29b24a7a402ec3/SHIP/en-US/a9b7429687a04e769dbc7955c6c44265.html).
## Summary Exercise 01.2
In exercise 01.2, we have covered the following topics:
* How to create a DatasetSchema
* How to upload a Dataset to the service
You can find optional exercises related to exercise 01.2 [below](#Optional-Exercises-for-01.2).
# Exercise 01.3
*Back to [table of contents](#Table-of-Contents)*
*To perform this exercise, you need to execute the code in all previous exercises.*
In exercise 01.3, we will train the model.
## Training the Model
The Dataset is now uploaded and has been validated successfully by the service.
To train a machine learning model, we first need to select the correct model template.
### Selecting the right ModelTemplate
The Data Attribute Recommendation service currently supports two different ModelTemplates:
| ID | Name | Description |
|--------------------------------------|---------------------------|---------------------------------------------------------------------------|
| d7810207-ca31-4d4d-9b5a-841a644fd81f | **Hierarchical template** | Recommended for the prediction of multiple classes that form a hierarchy. |
| 223abe0f-3b52-446f-9273-f3ca39619d2c | **Generic template** | Generic neural network for multi-label, multi-class classification. |
| 188df8b2-795a-48c1-8297-37f37b25ea00 | **AutoML template** | Finds the [best traditional machine learning model out of several traditional algorithms](https://blogs.sap.com/2021/04/28/how-does-automl-works-in-data-attribute-recommendation/). Single label only. |
We are building a model to predict product hierarchies. The **Hierarchical Template** is correct for this scenario. In this template, the first label in the DatasetSchema is considered the top-level category. Each subsequent label is considered to be further down in the hierarchy.
Coming back to our example DatasetSchema:
```json
{
    "labels": [
        {"label": "level1_category", "type": "CATEGORY"},
        {"label": "level2_category", "type": "CATEGORY"},
        {"label": "level3_category", "type": "CATEGORY"}
    ]
}
```
The first defined label is `level1_category`, which is given more weight during training than `level3_category`.
Refer to the [official documentation on ModelTemplates](https://help.sap.com/viewer/105bcfd88921418e8c29b24a7a402ec3/SHIP/en-US/1e76e8c636974a06967552c05d40e066.html) to learn more. Additional model templates may be added over time, so check back regularly.
## Starting the training
When working with models, we use the [`ModelManagerClient`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.html#sap.aibus.dar.client.model_manager_client.ModelManagerClient) class.
To start the training, we need the IDs of the dataset and the desired model template. We also have to provide a name for the model.
The [`ModelManagerClient.create_job()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.html#sap.aibus.dar.client.model_manager_client.ModelManagerClient.create_job) method launches the training Job.
*Only one model of a given name can exist. If you receive a message stating 'The model name specified is already in use', you either have to remove the job and its associated model first or you have to change the `model_name` variable name below. You can also [clean up the entire service instance](#Cleaning-up-a-service-instance).*
```
from sap.aibus.dar.client.model_manager_client import ModelManagerClient
from sap.aibus.dar.client.exceptions import DARHTTPException
model_manager = ModelManagerClient.construct_from_service_key(SERVICE_KEY)
model_template_id = "d7810207-ca31-4d4d-9b5a-841a644fd81f" # hierarchical template
model_name = "bestbuy-hierarchy-model"
job_resource = model_manager.create_job(model_name, dataset_id, model_template_id)
job_id = job_resource['id']
print()
print("Job resource:")
print()
pprint(job_resource)
print()
print(f"ID of submitted Job: {job_id}")
```
The job is now running in the background. Similar to the DatasetValidation, we have to poll the job until it succeeds.
The SDK provides the [`ModelManagerClient.wait_for_job()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.html#sap.aibus.dar.client.model_manager_client.ModelManagerClient.wait_for_job) method:
```
job_resource = model_manager.wait_for_job(job_id)
print()
print("Job resource after training is finished:")
pprint(job_resource)
```
To better understand the Training Job lifecycle, see the [corresponding document on help.sap.com](https://help.sap.com/viewer/105bcfd88921418e8c29b24a7a402ec3/SHIP/en-US/0fc40aa077ce4c708c1e5bfc875aa3be.html).
## Intermission
The model training will take between 5 and 10 minutes.
In the meantime, we can explore the available [resources](#Resources) for both the service and the SDK.
## Inspecting the Model
Once the training job is finished successfully, we can inspect the model using [`ModelManagerClient.read_model_by_name()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.html#sap.aibus.dar.client.model_manager_client.ModelManagerClient.read_model_by_name).
```
model_resource = model_manager.read_model_by_name(model_name)
print()
pprint(model_resource)
```
In the model resource, the `validationResult` key provides information about model performance. You can also use these metrics to compare performance of different [ModelTemplates](#Selecting-the-right-ModelTemplate) or different datasets.
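To look at just these metrics, you can print the corresponding entry of the model resource (a small sketch; the key name is taken from the description above):
```
pprint(model_resource.get("validationResult"))
```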
## Summary Exercise 01.3
In exercise 01.3, we have covered the following topics:
* How to select the appropriate ModelTemplate
* How to train a Model from a previously uploaded Dataset
You can find optional exercises related to exercise 01.3 [below](#Optional-Exercises-for-01.3).
# Exercise 01.4
*Back to [table of contents](#Table-of-Contents)*
*To perform this exercise, you need to execute the code in all previous exercises.*
In exercise 01.4, we will deploy the model and predict labels for some unlabeled data.
## Deploying the Model
The training job has finished and the model is ready to be deployed. By deploying the model, we create a server process in the background on the Data Attribute Recommendation service which will serve inference requests.
In the SDK, the [`ModelManagerClient.create_deployment()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.html#module-sap.aibus.dar.client.model_manager_client) method lets us create a Deployment.
```
deployment_resource = model_manager.create_deployment(model_name)
deployment_id = deployment_resource["id"]
print()
print("Deployment resource:")
print()
pprint(deployment_resource)
print(f"Deployment ID: {deployment_id}")
```
*Note: if you are using a trial account and you see errors such as 'The resource can no longer be used. Usage limit has been reached', consider [cleaning up the service instance](#Cleaning-up-a-service-instance) to free up limited trial resources.*
Similar to the data upload and the training job, model deployment is an asynchronous process. We have to poll the API until the Deployment is in status `SUCCEEDED`. The SDK provides the [`ModelManagerClient.wait_for_deployment()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.html#sap.aibus.dar.client.model_manager_client.ModelManagerClient.wait_for_deployment) method for this purpose.
```
deployment_resource = model_manager.wait_for_deployment(deployment_id)
print()
print("Finished deployment resource:")
print()
pprint(deployment_resource)
```
Once the Deployment is in status `SUCCEEDED`, we can run inference requests.
To better understand the Deployment lifecycle, see the [corresponding document on help.sap.com](https://help.sap.com/viewer/105bcfd88921418e8c29b24a7a402ec3/SHIP/en-US/f473b5b19a3b469e94c40eb27623b4f0.html).
*For trial users: the deployment will be stopped after 8 hours. You can restart it by deleting the deployment and creating a new one for your model. The [`ModelManagerClient.ensure_deployment_exists()`](https://help.sap.com/viewer/105bcfd88921418e8c29b24a7a402ec3/SHIP/en-US/c03b561eea1744c9b9892b416037b99a.html) method will delete and re-create automatically. Then, you need to poll until the deployment is succeeded using [`ModelManagerClient.wait_for_deployment()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.html#sap.aibus.dar.client.model_manager_client.ModelManagerClient.wait_for_deployment) as above.*
## Executing Inference requests
With a single inference request, we can send up to 50 objects to the service to predict the labels. The data sent to the service must match the `features` section of the DatasetSchema created earlier. The `labels` defined inside the DatasetSchema will be predicted for each object and returned as a response to the request.
In the SDK, the [`InferenceClient.create_inference_request()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.html#sap.aibus.dar.client.inference_client.InferenceClient.create_inference_request) method handles submission of inference requests.
```
from sap.aibus.dar.client.inference_client import InferenceClient
inference = InferenceClient.construct_from_service_key(SERVICE_KEY)
objects_to_be_classified = [
{
"features": [
{"name": "manufacturer", "value": "Energizer"},
{"name": "description", "value": "Alkaline batteries; 1.5V"},
{"name": "price", "value": "5.99"},
],
},
]
inference_response = inference.create_inference_request(model_name, objects_to_be_classified)
print()
print("Inference request processed. Response:")
print()
pprint(inference_response)
```
*Note: For trial accounts, you only have a limited number of objects which you can classify.*
You can also try to come up with your own example:
```
my_own_items = [
{
"features": [
{"name": "manufacturer", "value": "EDIT THIS"},
{"name": "description", "value": "EDIT THIS"},
{"name": "price", "value": "0.00"},
],
},
]
inference_response = inference.create_inference_request(model_name, my_own_items)
print()
print("Inference request processed. Response:")
print()
pprint(inference_response)
```
You can also classify multiple objects at once. For each object, the `top_n` parameter determines how many predictions are returned.
```
objects_to_be_classified = [
{
"objectId": "optional-identifier-1",
"features": [
{"name": "manufacturer", "value": "Energizer"},
{"name": "description", "value": "Alkaline batteries; 1.5V"},
{"name": "price", "value": "5.99"},
],
},
{
"objectId": "optional-identifier-2",
"features": [
{"name": "manufacturer", "value": "Eidos"},
{"name": "description", "value": "Unravel a grim conspiracy at the brink of Revolution"},
{"name": "price", "value": "19.99"},
],
},
{
"objectId": "optional-identifier-3",
"features": [
{"name": "manufacturer", "value": "Cadac"},
{"name": "description", "value": "CADAC Grill Plate for Safari Chef Grills: 12\""
+ "cooking surface; designed for use with Safari Chef grills;"
+ "105 sq. in. cooking surface; PTFE nonstick coating;"
+ " 2 grill surfaces"
},
{"name": "price", "value": "39.99"},
],
}
]
inference_response = inference.create_inference_request(model_name, objects_to_be_classified, top_n=3)
print()
print("Inference request processed. Response:")
print()
pprint(inference_response)
```
We can see that the service now returns the `n-best` predictions for each label as indicated by the `top_n` parameter.
In some cases, the predicted category has the special value `nan`. In the `bestBuy.csv` data set, not all records have the full set of three categories. Some records only have a top-level category. The model learns this fact from the data and will occasionally suggest that a record should not have a category.
```
# Inspect all video games with just a top-level category entry
video_games = df[df['level1_category'] == 'Video Games']
video_games.loc[df['level2_category'].isna() & df['level3_category'].isna()].head(5)
```
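As a quick check (a sketch, not part of the original workshop), we can count how many records in the training CSV are missing each category level:
```
print(df[['level1_category', 'level2_category', 'level3_category']].isna().sum())
```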
To learn how to execute inference calls without the SDK just using the underlying RESTful API, see [Inference without the SDK](#Inference-without-the-SDK).
## Summary Exercise 01.4
In exercise 01.4, we have covered the following topics:
* How to deploy a previously trained model
* How to execute inference requests against a deployed model
You can find optional exercises related to exercise 01.4 [below](#Optional-Exercises-for-01.4).
# Wrapping up
In this workshop, we looked into the following topics:
* Installation of the Python SDK for Data Attribute Recommendation
* Modelling data with a DatasetSchema
* Uploading data into a Dataset
* Training a model
* Predicting labels for unlabelled data
Using these tools, we are able to solve the problem of missing Master Data attributes starting from just a CSV file containing training data.
Feel free to revisit the workshop materials at any time. The [resources](#Resources) section below contains additional reading.
If you would like to explore the additional capabilities of the SDK, visit the [optional exercises](#Optional-Exercises) below.
## Cleanup
During the course of the workshop, we have created several resources on the Data Attribute Recommendation Service:
* DatasetSchema
* Dataset
* Job
* Model
* Deployment
The SDK provides several methods to delete these resources. Note that there are dependencies between objects: you cannot delete a Dataset without deleting the Model beforehand.
You will need to set `CLEANUP_SESSION = True` below to execute the cleanup.
```
# Clean up all resources created earlier
CLEANUP_SESSION = False
def cleanup_session():
model_manager.delete_deployment_by_id(deployment_id) # this can take a few seconds
model_manager.delete_model_by_name(model_name)
model_manager.delete_job_by_id(job_id)
data_manager.delete_dataset_by_id(dataset_id)
data_manager.delete_dataset_schema_by_id(dataset_schema_id)
print("DONE cleaning up!")
if CLEANUP_SESSION:
print("Cleaning up resources generated in this session.")
cleanup_session()
else:
print("Not cleaning up. Set 'CLEANUP_SESSION = True' above and run again!")
```
## Resources
*Back to [table of contents](#Table-of-Contents)*
### SDK Resources
* [SDK source code on Github](https://github.com/SAP/data-attribute-recommendation-python-sdk)
* [SDK documentation](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/)
* [How to obtain support](https://github.com/SAP/data-attribute-recommendation-python-sdk/blob/master/README.md#how-to-obtain-support)
* [Tutorials: Classify Data Records with the SDK for Data Attribute Recommendation](https://developers.sap.com/group.cp-aibus-data-attribute-sdk.html)
### Data Attribute Recommendation
* [SAP Help Portal](https://help.sap.com/viewer/product/Data_Attribute_Recommendation/SHIP/en-US)
* [API Reference](https://help.sap.com/viewer/105bcfd88921418e8c29b24a7a402ec3/SHIP/en-US/b45cf9b24fd042d082c16191aa938c8d.html)
* [Tutorials using Postman - interact with the service RESTful API directly](https://developers.sap.com/mission.cp-aibus-data-attribute.html)
* [Trial Account Limits](https://help.sap.com/viewer/105bcfd88921418e8c29b24a7a402ec3/SHIP/en-US/c03b561eea1744c9b9892b416037b99a.html)
* [Metering and Pricing](https://help.sap.com/viewer/105bcfd88921418e8c29b24a7a402ec3/SHIP/en-US/1e093326a2764c298759fcb92c5b0500.html)
## Addendum
### Inference without the SDK
*Back to [table of contents](#Table-of-Contents)*
The Data Attribute Service exposes a RESTful API. The SDK we use in this workshop uses this API to interact with the DAR service.
For custom integration, you can implement your own client for the API. The tutorial "[Use Machine Learning to Classify Data Records]" is a great way to explore the Data Attribute Recommendation API with the Postman REST client. Beyond the tutorial, the [API Reference] is a comprehensive documentation of the RESTful interface.
[Use Machine Learning to Classify Data Records]: https://developers.sap.com/mission.cp-aibus-data-attribute.html
[API Reference]: https://help.sap.com/viewer/105bcfd88921418e8c29b24a7a402ec3/SHIP/en-US/b45cf9b24fd042d082c16191aa938c8d.html
To demonstrate the underlying API, the next example uses the `curl` command line tool to perform an inference request against the Inference API.
The example uses the `jq` command to extract the credentials from the service. The authentication token is retrieved from the `uaa_url` and then used for the inference request.
```
# If the following example gives you errors that the jq or curl commands cannot be found,
# you may be able to install them from conda by uncommenting one of the lines below:
#%conda install -q jq
#%conda install -q curl
%%bash -s "$model_name" # Pass the python model_name variable as the first argument to shell script
model_name=$1
echo "Model: $model_name"
key=$(cat key.json)
url=$(echo $key | jq -r .url)
uaa_url=$(echo $key | jq -r .uaa.url)
clientid=$(echo $key | jq -r .uaa.clientid)
clientsecret=$(echo $key | jq -r .uaa.clientsecret)
echo "Service URL: $url"
token_url=${uaa_url}/oauth/token?grant_type=client_credentials
echo "Obtaining token with clientid $clientid from $token_url"
bearer_token=$(curl \
--silent --show-error \
--user $clientid:$clientsecret \
$token_url \
| jq -r .access_token
)
inference_url=${url}/inference/api/v3/models/${model_name}/versions/1
echo "Running inference request against endpoint $inference_url"
echo ""
# We pass the token in the Authorization header.
# The payload for the inference request is passed as
# the body of the POST request below.
# The output of the curl command is piped through `jq`
# for pretty-printing
curl \
--silent --show-error \
--header "Authorization: Bearer ${bearer_token}" \
--header "Content-Type: application/json" \
-XPOST \
${inference_url} \
-d '{
"objects": [
{
"features": [
{
"name": "manufacturer",
"value": "Energizer"
},
{
"name": "description",
"value": "Alkaline batteries; 1.5V"
},
{
"name": "price",
"value": "5.99"
}
]
}
]
}' | jq
```
### Cleaning up a service instance
*Back to [table of contents](#Table-of-Contents)*
To clean all data on the service instance, you can run the following snippet. The code is self-contained and does not require you to execute any of the cells above. However, you will need to have the `key.json` containing a service key in place.
You will need to set `CLEANUP_EVERYTHING = True` below to execute the cleanup.
**NOTE: This will delete all data on the service instance!**
```
CLEANUP_EVERYTHING = False
def cleanup_everything():
import logging
import sys
logging.basicConfig(level=logging.INFO, stream=sys.stdout)
import json
import os
if not os.path.exists("key.json"):
msg = "key.json is not found. Please follow instructions above to create a service key of"
msg += " Data Attribute Recommendation. Then, upload it into the same directory where"
msg += " this notebook is saved."
print(msg)
raise ValueError(msg)
with open("key.json") as file_handle:
key = file_handle.read()
SERVICE_KEY = json.loads(key)
from sap.aibus.dar.client.model_manager_client import ModelManagerClient
model_manager = ModelManagerClient.construct_from_service_key(SERVICE_KEY)
for deployment in model_manager.read_deployment_collection()["deployments"]:
model_manager.delete_deployment_by_id(deployment["id"])
for model in model_manager.read_model_collection()["models"]:
model_manager.delete_model_by_name(model["name"])
for job in model_manager.read_job_collection()["jobs"]:
model_manager.delete_job_by_id(job["id"])
from sap.aibus.dar.client.data_manager_client import DataManagerClient
data_manager = DataManagerClient.construct_from_service_key(SERVICE_KEY)
for dataset in data_manager.read_dataset_collection()["datasets"]:
data_manager.delete_dataset_by_id(dataset["id"])
for dataset_schema in data_manager.read_dataset_schema_collection()["datasetSchemas"]:
data_manager.delete_dataset_schema_by_id(dataset_schema["id"])
print("Cleanup done!")
if CLEANUP_EVERYTHING:
print("Cleaning up all resources in this service instance.")
cleanup_everything()
else:
print("Not cleaning up. Set 'CLEANUP_EVERYTHING = True' above and run again.")
```
### Optional Exercises
*Back to [table of contents](#Table-of-Contents)*
To work with the optional exercises, create a new cell in the Jupyter notebook by clicking the `+` button in the menu above or by using the `b` shortcut on your keyboard. You can then enter your code in the new cell and execute it.
#### Optional Exercises for 01.2
##### DatasetSchemas
Use the [`DataManagerClient.read_dataset_schema_by_id()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.html#sap.aibus.dar.client.data_manager_client.DataManagerClient.read_dataset_schema_by_id) and the [`DataManagerClient.read_dataset_schema_collection()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.html#sap.aibus.dar.client.data_manager_client.DataManagerClient.read_dataset_schema_collection) methods to list the newly created and all DatasetSchemas, respectively.
##### Datasets
Use the [`DataManagerClient.read_dataset_by_id()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.html#sap.aibus.dar.client.data_manager_client.DataManagerClient.read_dataset_by_id) and the [`DataManagerClient.read_dataset_collection()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.html#sap.aibus.dar.client.data_manager_client.DataManagerClient.read_dataset_collection) methods to inspect the newly created dataset.
Instead of using two separate methods to upload data and wait for validation to finish, you can also use [`DataManagerClient.upload_data_and_validate()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.html#sap.aibus.dar.client.data_manager_client.DataManagerClient.upload_data_and_validate).
#### Optional Exercises for 01.3
##### ModelTemplates
Use the [`ModelManagerClient.read_model_template_collection()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.html#sap.aibus.dar.client.model_manager_client.ModelManagerClient.read_model_template_collection) to list all existing model templates.
##### Jobs
Use [`ModelManagerClient.read_job_by_id()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.html#sap.aibus.dar.client.model_manager_client.ModelManagerClient.read_job_by_id) and [`ModelManagerClient.read_job_collection()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.html#sap.aibus.dar.client.model_manager_client.ModelManagerClient.read_job_collection) to inspect the job we just created.
The entire process of uploading the data and starting the training is also available as a single method call in [`ModelCreator.create()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.html#sap.aibus.dar.client.workflow.model.ModelCreator.create).
#### Optional Exercises for 01.4
##### Deployments
Use [`ModelManagerClient.read_deployment_by_id()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.html#sap.aibus.dar.client.model_manager_client.ModelManagerClient.read_deployment_by_id) and [`ModelManagerClient.read_deployment_collection()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.html#sap.aibus.dar.client.model_manager_client.ModelManagerClient.read_deployment_collection) to inspect the Deployment.
Use the [`ModelManagerClient.lookup_deployment_id_by_model_name()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.html#sap.aibus.dar.client.model_manager_client.ModelManagerClient.lookup_deployment_id_by_model_name) method to find the deployment ID for a given model name.
##### Inference
Use the [`InferenceClient.do_bulk_inference()`](https://data-attribute-recommendation-python-sdk.readthedocs.io/en/latest/api.html#sap.aibus.dar.client.inference_client.InferenceClient.do_bulk_inference) method to process more than fifty objects at a time. Note how the data format returned changes.
| true | code | 0.38122 | null | null | null | null |
<a href="https://colab.research.google.com/github/MattFinney/practical_data_science_in_python/blob/main/Session_2_Practical_Data_Science.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Practical Data Science in Python
## Unsupervised Learning: Classifying Spotify Tracks by Genre with $k$-Means Clustering
Authors: Matthew Finney, Paulina Toro Isaza
#### Run this First! (Function Definitions)
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_palette('Set1')
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from IPython.display import Audio, Image, clear_output
rs = 123
np.random.seed(rs)
def pca_plot(df, classes=None):
# Scale data for PCA
scaled_df = StandardScaler().fit_transform(df)
# Fit the PCA and extract the first two components
pca_results = PCA().fit_transform(scaled_df)
pca1_scores = pca_results[:,0]
pca2_scores = pca_results[:,1]
# Sort the legend labels
if classes is None:
hue_order = None
n_classes = 0
elif str(classes[0]).isnumeric():
classes = ['Cluster {}'.format(x) for x in classes]
hue_order = sorted(np.unique(classes))
n_classes = np.max(np.unique(classes).shape)
else:
hue_order = sorted(np.unique(classes))
n_classes = np.max(np.unique(classes).shape)
# Plot the first two principal components
plt.figure(figsize=(8.5,8.5))
plt.grid()
sns.scatterplot(pca1_scores, pca2_scores, s=50, hue=classes,
hue_order=hue_order, palette='Set1')
plt.xlabel("Principal Component {}".format(1))
plt.ylabel("Principal Component {}".format(2))
plt.title('Principal Component Plot')
plt.show()
def tracklist_player(track_list, df, header="Track Player"):
action = ''
for track in track_list:
print('{}\nTrack Name: {}\nArtist Name(s): {}'.format(header, df.loc[track,'name'],df.loc[track,'artist']))
try:
display(Image(df.loc[track,'cover_url'], format='jpeg', height=150))
except:
print('No cover art available')
try:
display(Audio(df.loc[track,'preview_url']+'.mp3', autoplay=True))
except:
print('No audio preview available')
print('Press <Enter> for the next track or q then <Enter> to quit: ')
action = input()
clear_output()
if action=='q':
break
print('No more clusters. Goodbye!')
def play_cluster_tracks(track_df, cluster_column="best_cluster"):
for cluster in sorted(track_df[cluster_column].unique()):
# Get the tracks in the cluster, and shuffle them for variety
tracks_list = track_df[track_df[cluster_column] == cluster].index.values
np.random.shuffle(tracks_list)
# Instantiate a tracklist player
tracklist_player(tracks_list, df=track_df, header='{}'.format(cluster))
# Load Track DataFrame
path = 'https://raw.githubusercontent.com/MattFinney/practical_data_science_in_python/main/spotify_track_data.csv'
tracks_df = pd.read_csv(path)
# Columns from the track dataframe which are relevant for our analysis
audio_feature_cols = ['danceability', 'energy', 'key', 'loudness', 'mode',
'speechiness', 'acousticness', 'instrumentalness',
'liveness', 'valence', 'tempo', 'duration_ms',
'time_signature']
# Show the first five rows of our dataframe
tracks_df.head()
```
## Recap from Session 1
In our earlier session, we started working with a dataset of Spotify tracks. We explored the variables in the dataset, and determined that audio features - like danceability, acousticness, and tempo - vary across the songs in our dataset and might help us to thoughtfully group the tracks into different playlists. We then used Principal Component Analysis (PCA), a dimensionality reduction technique, to visualize the variation in songs.
We'll pick up where we left off, with the PCA plot from last time. If you're just joining us for Session 2, don't fret! Attending Session 1 is NOT a prerequisite to learn and have fun in Session 2 today!
```
# Plot the principal component analysis results
pca_plot(tracks_df[audio_feature_cols])
```
## Today: Classification using $k$-Means Clustering
Our Principal Component Analysis in the first session helped us to visualize the variation of track audio features in just two dimensions. Looking at the scatterplot of the first two principal components above, we can see that there are a few different groups of tracks. But how do we mathematically separate the tracks into these meaningful groups?
One way to separate the tracks into meaningful groups based on similar audio features is to use clustering. Clustering is a machine learning technique that is very powerful for identifying patterns in unlabeled data where the ground truth is not known.
### What is $k$-Means Clustering?
$k$-Means Clustering is one of the most popular clustering algorithms. The algorithm assigns each data point to a cluster using four main steps.
**Step 1: Initialize the Clusters**\
Based on the user's desired number of clusters $k$, the algorithm randomly chooses a centroid for each cluster. In this example, we choose $k=3$, so the algorithm randomly picks 3 centroids.

**Step 2: Assign Each Data Point**\
The algorithm assigns each point to the closest centroid to get $k$ initial clusters.

**Step 3: Recompute the Cluster Centers**\
For every cluster, the algorithm recomputes the centroid by taking the average of all points in the cluster. The changes in centroids are shown below by arrows.

**Step 4: Reassign the Points**\
Since the centroids change, the algorithm then re-assigns the points to the closest centroid. The image below shows the new clusters after re-assignment.

The algorithm repeats the calculation of centroids and assignment of points until points stop changing clusters. When clustering large datasets, you typically stop the algorithm before it fully converges, using other stopping criteria (such as a maximum number of iterations) instead.
*Note: Some content in this section was [adapted](https://creativecommons.org/licenses/by/4.0/) from Google's free [Clustering in Machine Learning](https://developers.google.com/machine-learning/clustering) course. The course is a great resource if you want to explore clustering in more detail!*
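To make the four steps concrete, here is a minimal, illustrative NumPy version of the $k$-means loop. It is only a sketch for building intuition - below we use scikit-learn's `KMeans`, which handles initialization and convergence far more robustly:
```
# Minimal k-means sketch (illustration only); `data` is an (n_samples, n_features) array
def simple_kmeans(data, k, n_iter=100, seed=123):
    rng = np.random.default_rng(seed)
    # Step 1: initialize centroids by picking k random data points
    centroids = data[rng.choice(len(data), size=k, replace=False)]
    for _ in range(n_iter):
        # Step 2: assign each point to its closest centroid
        distances = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
        labels = distances.argmin(axis=1)
        # Step 3: recompute each centroid as the mean of its assigned points
        new_centroids = np.array([data[labels == i].mean(axis=0) if np.any(labels == i)
                                  else centroids[i] for i in range(k)])
        # Step 4: reassign points; stop once the centroids no longer move
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids
```
For example, `simple_kmeans(StandardScaler().fit_transform(tracks_df[audio_feature_cols]), k=3)` would return a cluster label for every track.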
### Cluster the Spotify Tracks using their Audio Features
Now, we will use the `sklearn.cluster.KMeans` class to apply the $k$-means algorithm to our `tracks_df` data. Based on our visual inspection of the PCA plot, let's start with a guess of $k=3$ clusters.
```
initial_k = ____
# Scale the data, so that the units of features don't impact feature importance
scaled_df = StandardScaler().fit_transform(tracks_df[audio_feature_cols])
# Cluster the data using the k means algorithm
initial_cluster_results = ______(n_clusters=initial_k, n_init=25, random_state=rs).fit(scaled_df)
```
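If you get stuck, one possible way to fill in the blanks is shown below (spoiler alert - try it yourself first):
```
# One possible completion of the exercise above
initial_k = 3

# Scale the data, so that the units of features don't impact feature importance
scaled_df = StandardScaler().fit_transform(tracks_df[audio_feature_cols])

# Cluster the data using the k-means algorithm
initial_cluster_results = KMeans(n_clusters=initial_k, n_init=25, random_state=rs).fit(scaled_df)
```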
Now, let's print the cluster results. Notice that we're given a number (0, 1, or 2) for each observation in our data set. This number is the id of the cluster assigned to each track.
```
# Print the cluster results
print(initial_cluster_results._______)
```
And let's save the cluster results in our `tracks_df` dataframe as a column named `initial_cluster` so we can access them later.
```
# Save the cluster labels in our dataframe
tracks_df[______________] = ['Cluster ' + str(i) for i in __________.______]
```
Let's plot the PCA plot and color each observation based on the assigned cluster to visualize our $k$-means results.
```
# Show a PCA plot of the clusters
pca_plot(tracks_df[audio_feature_cols], classes=tracks_df['initial_cluster'])
```
Does it look like our $k$-means algorithm correctly separated the tracks into clusters? Does each color map to a distinct group of points?
### How do our clusters of songs differ?
One way we can evaluate our clusters is by looking at how the distribution of each data feature varies by cluster. In our case, let's check whether tracks in the different clusters tend to have different values of energy, loudness, or speechiness.
```
# Plot the distribution of audio features by cluster
g = sns.pairplot(tracks_df, hue="initial_cluster",
vars=['danceability', 'energy', 'loudness', 'speechiness', 'tempo'],
hue_order=sorted(tracks_df.initial_cluster.unique()), palette='Set1')
g.fig.suptitle('Distribution of Audio Features by Cluster', y=1.05)
plt.show()
```
### Experiment with different values of $k$
Use the slider to select different values of $k$, then run the cell below to see how the choice of the number of clusters affects our results.
```
trial_k = 10 #@param {type:"slider", min:1, max:10, step:1}
# Cluster the data using the k means algorithm
trial_cluster_results = KMeans(n_clusters=trial_k, n_init=25, random_state=rs).fit(scaled_df)
# Save the cluster labels in our dataframe
tracks_df['trial_cluster'] = ['Cluster ' + str(i) for i in trial_cluster_results.labels_]
# Show a PCA plot of the clusters
pca_plot(tracks_df[audio_feature_cols], classes=tracks_df['trial_cluster'])
# Plot the distribution of audio features by cluster
g = sns.pairplot(tracks_df, hue="trial_cluster",
vars=['danceability', 'energy', 'loudness', 'speechiness', 'tempo'],
hue_order=sorted(tracks_df.trial_cluster.unique()), palette='Set1')
g.fig.suptitle('Distribution of Audio Features by Cluster', y=1.05)
plt.show()
```
### Which value of $k$ works best for our data?
You may have noticed that the $k$-means algorithm requires you to choose $k$ and decide the number of clusters before you run the algorithm. But how do we know which value of $k$ is the best fit for our data?
One approach is to track the total distance from points to their cluster centroid as we increase the number of clusters, $k$. The total distance keeps decreasing as we increase $k$, but at some value of $k$ adding more clusters only marginally decreases it. An elbow plot helps us find that value of $k$: it's the point where the curve bends and the marginal benefit of another cluster drops off sharply. When you plot distance vs $k$, this point often looks like an "elbow".
Let's build an elbow plot to select the value of $k$ that will give us the highest quality clusters that best explain the variation in our data.
```
# Calculate the Total Distance for each value of k between 1 and 10
scores = []
k_list = np.arange(____,____)
for i in k_list:
fit_k = _____(n_clusters=i, n_init=5, random_state=rs).fit(scaled_df)
scores.append(fit_k.inertia_)
# Plot this in an elbow plot
plt.figure(figsize=(11,8.5))
sns.lineplot(______, ______)
plt.xlabel('Number of clusters $k$')
plt.ylabel('Total Point to Centroid Distance')
plt.grid()
plt.title('The Elbow Method showing the optimal $k$')
plt.show()
```
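Again, if you need a hint, here is one possible completion of the elbow-plot cell:
```
# One possible completion of the elbow-plot exercise above
scores = []
k_list = np.arange(1, 11)
for i in k_list:
    fit_k = KMeans(n_clusters=i, n_init=5, random_state=rs).fit(scaled_df)
    scores.append(fit_k.inertia_)

plt.figure(figsize=(11, 8.5))
sns.lineplot(x=k_list, y=scores)
plt.xlabel('Number of clusters $k$')
plt.ylabel('Total Point to Centroid Distance')
plt.grid()
plt.title('The Elbow Method showing the optimal $k$')
plt.show()
```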
Do you see the "elbow"? At what value of $k$ does it occur?
### Evaluate the results of our clustering algorithm for the best $k$
Use the slider below to choose the "best" $k$ that you determined from looking at the elbow plot. Evaluate the results in the PCA plot. Does this look like a good value of $k$ to separate the data into meaningful clusters?
```
best_k = 1 #@param {type:"slider", min:1, max:10, step:1}
# Cluster the data using the k means algorithm
best_cluster_results = KMeans(n_clusters=best_k, n_init=25, random_state=rs).fit(scaled_df)
# Save the cluster labels in our dataframe
tracks_df['best_cluster'] = ['Cluster ' + str(i) for i in best_cluster_results.labels_]
# Show a PCA plot of the clusters
pca_plot(tracks_df[audio_feature_cols], classes=tracks_df['best_cluster'])
```
## How did we do?
In addition to the mathematical ways to validate the selection of the best $k$ parameter for our model and the quality of our resulting clusters, there's another very important way to evaluate our results: listening to the tracks!
Let's listen to the tracks in each cluster! What do you notice about the attributes that tracks in each cluster have in common? What do you notice about how the clusters are different? What makes each cluster unique?
```
play_cluster_tracks(tracks_df, cluster_column='best_cluster')
```
## Wrap Up and Next Session
That's a wrap! Now that you've learned some practical skills in data science, please join us tomorrow afternoon for the third and final session in our series, where we'll talk about how to continue your studies and/or pursue a career in Data Science!
**Making Your Next Professional Play in Data Science**\
Friday, October 2 | 11:30am - 12:45pm PT\
[https://sched.co/dtqZ](https://sched.co/dtqZ)
| true | code | 0.553385 | null | null | null | null |
## These notebooks can be found at https://github.com/jaspajjr/pydata-visualisation if you want to follow along
https://matplotlib.org/users/intro.html
Matplotlib is a library for making 2D plots of arrays in Python.
* Having its origins in emulating MATLAB, it can also be used in a Pythonic, object-oriented way.
* Easy stuff should be easy, difficult stuff should be possible
```
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
%matplotlib inline
```
Everything in matplotlib is organized in a hierarchy. At the top of the hierarchy is the matplotlib “state-machine environment” which is provided by the matplotlib.pyplot module. At this level, simple functions are used to add plot elements (lines, images, text, etc.) to the current axes in the current figure.
Pyplot’s state-machine environment behaves similarly to MATLAB and should be most familiar to users with MATLAB experience.
The next level down in the hierarchy is the first level of the object-oriented interface, in which pyplot is used only for a few functions such as figure creation, and the user explicitly creates and keeps track of the figure and axes objects. At this level, the user uses pyplot to create figures, and through those figures, one or more axes objects can be created. These axes objects are then used for most plotting actions.
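As a quick, minimal illustration of the difference (both snippets draw the same line):
```
# Pyplot (state-machine) style: the "current" figure and axes are implicit
plt.plot([1, 2, 3], [1, 4, 9])
plt.title('pyplot interface')
plt.show()

# Object-oriented style: keep explicit references to the Figure and Axes
fig, ax = plt.subplots()
ax.plot([1, 2, 3], [1, 4, 9])
ax.set_title('object-oriented interface')
plt.show()
```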
## Scatter Plot
To start with let's do a really basic scatter plot:
```
plt.plot([0, 1, 2, 3, 4, 5], [0, 2, 4, 6, 8, 10])
x = [0, 1, 2, 3, 4, 5]
y = [0, 2, 4, 6, 8, 10]
plt.plot(x, y)
```
What if we don't want a line?
```
plt.plot([0, 1, 2, 3, 4, 5],
[0, 2, 5, 7, 8, 10],
marker='o',
linestyle='')
plt.xlabel('The X Axis')
plt.ylabel('The Y Axis')
plt.show();
```
#### Simple example from matplotlib
https://matplotlib.org/tutorials/intermediate/tight_layout_guide.html#sphx-glr-tutorials-intermediate-tight-layout-guide-py
```
def example_plot(ax, fontsize=12):
ax.plot([1, 2])
ax.locator_params(nbins=5)
ax.set_xlabel('x-label', fontsize=fontsize)
ax.set_ylabel('y-label', fontsize=fontsize)
ax.set_title('Title', fontsize=fontsize)
fig, ax = plt.subplots()
example_plot(ax, fontsize=24)
fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(nrows=2, ncols=2)
# fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(nrows=2, ncols=2, sharex=True, sharey=True)
ax1.plot([0, 1, 2, 3, 4, 5],
[0, 2, 5, 7, 8, 10])
ax2.plot([0, 1, 2, 3, 4, 5],
[0, 2, 4, 9, 16, 25])
ax3.plot([0, 1, 2, 3, 4, 5],
[0, 13, 18, 21, 23, 25])
ax4.plot([0, 1, 2, 3, 4, 5],
[0, 1, 2, 3, 4, 5])
plt.tight_layout()
```
## Date Plotting
```
import pandas_datareader as pdr
df = pdr.get_data_fred('GS10')
df = df.reset_index()
print(df.info())
df.head()
fig = plt.figure(figsize=(12, 8))
ax = fig.add_subplot(111)
ax.plot_date(df['DATE'], df['GS10'])
```
## Bar Plot
```
fig = plt.figure(figsize=(12, 8))
ax = fig.add_subplot(111)
x_data = [0, 1, 2, 3, 4]
values = [20, 35, 30, 35, 27]
ax.bar(x_data, values)
ax.set_xticks(x_data)
ax.set_xticklabels(('A', 'B', 'C', 'D', 'E'));  # trailing semicolon suppresses the text output in the notebook
```
## Matplotlib basics
http://pbpython.com/effective-matplotlib.html
### Behind the scenes
* matplotlib.backend_bases.FigureCanvas is the area onto which the figure is drawn
* matplotlib.backend_bases.Renderer is the object which knows how to draw on the FigureCanvas
* matplotlib.artist.Artist is the object that knows how to use a renderer to paint onto the canvas
The typical user will spend 95% of their time working with the Artists.
https://matplotlib.org/tutorials/intermediate/artists.html#sphx-glr-tutorials-intermediate-artists-py
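For example, every piece of a simple figure is an Artist that you can inspect and modify:
```
fig, ax = plt.subplots()
line, = ax.plot([0, 1, 2], [0, 1, 4])   # ax.plot returns a list of Line2D artists
print(type(fig), type(ax), type(line))  # Figure, Axes and Line2D are all Artists
print(ax.get_children()[:5])            # spines, ticks, lines, ... are Artists too
line.set_linewidth(3)                   # modify an Artist after it has been created
```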
```
fig, (ax1, ax2) = plt.subplots(
nrows=1,
ncols=2,
sharey=True,
figsize=(12, 8))
fig.suptitle("Main Title", fontsize=14, fontweight='bold');
x_data = [0, 1, 2, 3, 4]
values = [20, 35, 30, 35, 27]
ax1.barh(x_data, values);
ax1.set_xlim([0, 55])
#ax1.set(xlabel='Unit of measurement', ylabel='Groups')
ax1.set(title='Foo', xlabel='Unit of measurement')
ax1.grid()
ax2.barh(x_data, [y / np.sum(values) for y in values], color='r');
ax2.set_title('Transformed', fontweight='light')
ax2.axvline(x=.1, color='k', linestyle='--')
ax2.set(xlabel='Unit of measurement') # Worth noticing this
ax2.set_axis_off();
fig.savefig('example_plot.png', dpi=80, bbox_inches="tight")
```
| true | code | 0.476641 | null | null | null | null |
<a href="https://colab.research.google.com/github/dauparas/tensorflow_examples/blob/master/VAE_cell_cycle.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
https://github.com/PMBio/scLVM/blob/master/tutorials/tcell_demo.ipynb
Variational Autoencoder Model (VAE) with latent subspaces based on:
https://arxiv.org/pdf/1812.06190.pdf
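For orientation, the loss assembled in the code below is (up to additive constants) a $\beta$-weighted negative ELBO with a mean-squared-error reconstruction term,

$$\mathcal{L}(x) = \underbrace{\lVert x - \hat{x}(z)\rVert^2}_{\text{reconstruction}} + \beta \, D_{\mathrm{KL}}\big(q(z\mid x)\,\Vert\,p(z)\big), \qquad z \sim q(z\mid x),$$

where $q(z\mid x)$ is the encoder, $p(z)$ the prior, and $\hat{x}(z)$ the decoder mean; the code uses a very small fixed $\beta$ and the analytical KL divergence between diagonal Gaussians.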
```
#Step 1: import dependencies
from tensorflow.keras import layers
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import tensorflow as tf
from keras import regularizers
import time
from __future__ import division
import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions
%matplotlib inline
plt.style.use('dark_background')
import pandas as pd
import os
from matplotlib import cm
import h5py
import scipy as SP
import pylab as PL
data = os.path.join('data_Tcells_normCounts.h5f')
f = h5py.File(data,'r')
Y = f['LogNcountsMmus'][:] # gene expression matrix
tech_noise = f['LogVar_techMmus'][:] # technical noise
genes_het_bool=f['genes_heterogen'][:] # index of heterogeneous genes
geneID = f['gene_names'][:] # gene names
cellcyclegenes_filter = SP.unique(f['cellcyclegenes_filter'][:].ravel() -1) # idx of cell cycle genes from GO
cellcyclegenes_filterCB = f['ccCBall_gene_indices'][:].ravel() -1 # idx of cell cycle genes from cycle base ...
# filter cell cycle genes
idx_cell_cycle = SP.union1d(cellcyclegenes_filter,cellcyclegenes_filterCB)
# determine non-zero counts
idx_nonzero = SP.nonzero((Y.mean(0)**2)>0)[0]
idx_cell_cycle_noise_filtered = SP.intersect1d(idx_cell_cycle,idx_nonzero)
# subset gene expression matrix
Ycc = Y[:,idx_cell_cycle_noise_filtered]
ax = PL.subplot(1,1,1);
PL.imshow(Ycc,cmap=cm.RdBu,vmin=-3,vmax=+3,interpolation='None');
#PL.colorbar();
ax.set_xticks([]);
ax.set_yticks([]);
PL.xlabel('genes');
PL.ylabel('cells');
X = np.delete(Y, idx_cell_cycle_noise_filtered, axis=1)
X = Y #base case
U = Y[:,idx_cell_cycle_noise_filtered]
mean = np.mean(X, axis=0)
variance = np.var(X, axis=0)
indx_small_mean = np.argwhere(mean < 0.00001)
X = np.delete(X, indx_small_mean, axis=1)
mean = np.mean(X, axis=0)
variance = np.var(X, axis=0)
fano = variance/mean
print(fano.shape)
indx_small_fano = np.argwhere(fano < 1.0)
X = np.delete(X, indx_small_fano, axis=1)
mean = np.mean(X, axis=0)
variance = np.var(X, axis=0)
fano = variance/mean
print(fano.shape)
#Reconstruction loss
def x_given_z(z, output_size):
with tf.variable_scope('M/x_given_w_z'):
act = tf.nn.leaky_relu
h = z
h = tf.layers.dense(h, 8, act)
h = tf.layers.dense(h, 16, act)
h = tf.layers.dense(h, 32, act)
h = tf.layers.dense(h, 64, act)
h = tf.layers.dense(h, 128, act)
h = tf.layers.dense(h, 256, act)
loc = tf.layers.dense(h, output_size)
#log_variance = tf.layers.dense(x, latent_size)
#scale = tf.nn.softplus(log_variance)
scale = 0.01*tf.ones(tf.shape(loc))
return tfd.MultivariateNormalDiag(loc, scale)
#KL term for z
def z_given_x(x, latent_size): #+
with tf.variable_scope('M/z_given_x'):
act = tf.nn.leaky_relu
h = x
h = tf.layers.dense(h, 256, act)
h = tf.layers.dense(h, 128, act)
h = tf.layers.dense(h, 64, act)
h = tf.layers.dense(h, 32, act)
h = tf.layers.dense(h, 16, act)
h = tf.layers.dense(h, 8, act)
loc = tf.layers.dense(h,latent_size)
log_variance = tf.layers.dense(h, latent_size)
scale = tf.nn.softplus(log_variance)
# scale = 0.01*tf.ones(tf.shape(loc))
return tfd.MultivariateNormalDiag(loc, scale)
def z_given(latent_size):
with tf.variable_scope('M/z_given'):
loc = tf.zeros(latent_size)
scale = 0.01*tf.ones(tf.shape(loc))
return tfd.MultivariateNormalDiag(loc, scale)
#Connect encoder and decoder and define the loss function
tf.reset_default_graph()
x_in = tf.placeholder(tf.float32, shape=[None, X.shape[1]], name='x_in')
x_out = tf.placeholder(tf.float32, shape=[None, X.shape[1]], name='x_out')
z_latent_size = 2
beta = 0.000001
#KL_z
zI = z_given(z_latent_size)
zIx = z_given_x(x_in, z_latent_size)
zIx_sample = zIx.sample()
zIx_mean = zIx.mean()
#kl_z = tf.reduce_mean(zIx.log_prob(zIx_sample)- zI.log_prob(zIx_sample))
kl_z = tf.reduce_mean(tfd.kl_divergence(zIx, zI)) #analytical
#Reconstruction
xIz = x_given_z(zIx_sample, X.shape[1])
rec_out = xIz.mean()
rec_loss = tf.losses.mean_squared_error(x_out, rec_out)
loss = rec_loss + beta*kl_z
optimizer = tf.train.AdamOptimizer(0.001).minimize(loss)
#Helper function
def batch_generator(features, x, u, batch_size):
"""Function to create python generator to shuffle and split features into batches along the first dimension."""
idx = np.arange(features.shape[0])
np.random.shuffle(idx)
for start_idx in range(0, features.shape[0], batch_size):
end_idx = min(start_idx + batch_size, features.shape[0])
part = idx[start_idx:end_idx]
yield features[part,:], x[part,:] , u[part, :]
n_epochs = 5000
batch_size = X.shape[0]
start = time.time()
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for i in range(n_epochs):
gen = batch_generator(X, X, U, batch_size) #create batch generator
rec_loss_ = 0
kl_z_ = 0
for j in range(np.int(X.shape[0]/batch_size)):
x_in_batch, x_out_batch, u_batch = gen.__next__()
_, rec_loss__, kl_z__= sess.run([optimizer, rec_loss, kl_z], feed_dict={x_in: x_in_batch, x_out: x_out_batch})
rec_loss_ += rec_loss__
kl_z_ += kl_z__
if (i+1)% 50 == 0 or i == 0:
zIx_mean_, rec_out_= sess.run([zIx_mean, rec_out], feed_dict ={x_in:X, x_out:X})
end = time.time()
print('epoch: {0}, rec_loss: {1:.3f}, kl_z: {2:.2f}'.format((i+1), rec_loss_/(1+np.int(X.shape[0]/batch_size)), kl_z_/(1+np.int(X.shape[0]/batch_size))))
start = time.time()
from sklearn.decomposition import TruncatedSVD
svd = TruncatedSVD(n_components=2, n_iter=7, random_state=42)
svd.fit(U.T)
print(svd.explained_variance_ratio_)
print(svd.explained_variance_ratio_.sum())
print(svd.singular_values_)
U_ = svd.components_
U_ = U_.T
import matplotlib.pyplot as plt
fig, axs = plt.subplots(1, 2, figsize=(14,5))
axs[0].scatter(zIx_mean_[:,0],zIx_mean_[:,1], c=U_[:,0], cmap='viridis', s=5.0);
axs[0].set_xlabel('z1')
axs[0].set_ylabel('z2')
fig.suptitle('X1')
plt.show()
# This model defines only the z latent (there is no separate w subspace), so
# color the z embedding by the second SVD component of the cell-cycle genes
fig, axs = plt.subplots(1, 2, figsize=(14,5))
axs[0].scatter(zIx_mean_[:,0],zIx_mean_[:,1], c=U_[:,1], cmap='viridis', s=5.0);
axs[0].set_xlabel('z1')
axs[0].set_ylabel('z2')
fig.suptitle('X1')
plt.show()
error = np.abs(X-rec_out_)
plt.plot(np.reshape(error, -1), '*', markersize=0.1);
plt.hist(np.reshape(error, -1), bins=50);
```
| true | code | 0.767646 | null | null | null | null |
### Cell Painting morphological (CP) and L1000 gene expression (GE) profiles for the following datasets:
- **CDRP**-BBBC047-Bray-CP-GE (Cell line: U2OS) :
* $\bf{CP}$ There are 30,430 unique compounds for CP dataset, median number of replicates --> 4
* $\bf{GE}$ There are 21,782 unique compounds for GE dataset, median number of replicates --> 3
* 20,131 compounds are present in both datasets.
- **CDRP-bio**-BBBC036-Bray-CP-GE (Cell line: U2OS) :
* $\bf{CP}$ There are 2,242 unique compounds for CP dataset, median number of replicates --> 8
* $\bf{GE}$ There are 1,917 unique compounds for GE dataset, median number of replicates --> 2
* 1916 compounds are present in both datasets.
- **LUAD**-BBBC041-Caicedo-CP-GE (Cell line: A549) :
* $\bf{CP}$ There are 593 unique alleles for CP dataset, median number of replicates --> 8
* $\bf{GE}$ There are 529 unique alleles for GE dataset, median number of replicates --> 8
* 525 alleles are present in both datasets.
- **TA-ORF**-BBBC037-Rohban-CP-GE (Cell line: U2OS) :
* $\bf{CP}$ There are 323 unique alleles for CP dataset, median number of replicates --> 5
* $\bf{GE}$ There are 327 unique alleles for GE dataset, median number of replicates --> 2
* 150 alleles are present in both datasets.
- **LINCS**-Pilot1-CP-GE (Cell line: A549) :
* $\bf{CP}$ There are 1570 unique compounds across 7 doses for CP dataset, median number of replicates --> 5
* $\bf{GE}$ There are 1402 unique compounds for GE dataset, median number of replicates --> 3
* $N_{p/d}$: 6984 compounds are present in both datasets.
--------------------------------------------
#### Link to the processed profiles:
https://cellpainting-datasets.s3.us-east-1.amazonaws.com/Rosetta-GE-CP
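For example, a profile file can be read directly into pandas from that bucket. The key below is a hypothetical example that follows the `preprocessed_data/` layout produced later in this notebook - adjust it to the dataset and file you need:
```
import pandas as pd

base_url = "https://cellpainting-datasets.s3.us-east-1.amazonaws.com/Rosetta-GE-CP"
example_key = "preprocessed_data/LUAD-BBBC041-Caicedo/L1000/replicate_level_l1k.csv.gz"  # assumed example path
df = pd.read_csv(base_url + "/" + example_key)
print(df.shape)
```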
```
%matplotlib notebook
%load_ext autoreload
%autoreload 2
import numpy as np
import scipy.spatial
import pandas as pd
import sklearn.decomposition
import matplotlib.pyplot as plt
import seaborn as sns
import os
from cmapPy.pandasGEXpress.parse import parse
from utils.replicateCorrs import replicateCorrs
from utils.saveAsNewSheetToExistingFile import saveAsNewSheetToExistingFile,saveDF_to_CSV_GZ_no_timestamp
from importlib import reload
from utils.normalize_funcs import standardize_per_catX
# sns.set_style("whitegrid")
# np.__version__
pd.__version__
```
### Input / output files:
- **CDRPBIO**-BBBC047-Bray-CP-GE (Cell line: U2OS) :
* $\bf{CP}$
* Input:
* Output:
* $\bf{GE}$
* Input: .mat files that are generated using https://github.com/broadinstitute/2014_wawer_pnas
* Output:
- **LUAD**-BBBC041-Caicedo-CP-GE (Cell line: A549) :
* $\bf{CP}$
* Input:
* Output:
* $\bf{GE}$
* Input:
* Output:
- **TA-ORF**-BBBC037-Rohban-CP-GE (Cell line: U2OS) :
* $\bf{CP}$
* Input:
* Output:
* $\bf{GE}$
* Input: https://data.broadinstitute.org/icmap/custom/TA/brew/pc/TA.OE005_U2OS_72H/
* Output:
### Reformat Cell-Painting Data Sets
- CDRP and TA-ORF are in /storage/data/marziehhaghighi/Rosetta/raw-profiles/
- LUAD is already processed by Juan; the source files are at /storage/luad/profiles_cp in case you want to reformat them
```
fileName='RepCorrDF'
### dirs on gpu cluster
# rawProf_dir='/storage/data/marziehhaghighi/Rosetta/raw-profiles/'
# procProf_dir='/home/marziehhaghighi/workspace_rosetta/workspace/'
### dirs on ec2
rawProf_dir='/home/ubuntu/bucket/projects/2018_04_20_Rosetta/workspace/raw-profiles/'
# procProf_dir='./'
procProf_dir='/home/ubuntu/bucket/projects/2018_04_20_Rosetta/workspace/'
# s3://imaging-platform/projects/2018_04_20_Rosetta/workspace/preprocessed_data
# aws s3 sync preprocessed_data s3://cellpainting-datasets/Rosetta-GE-CP/preprocessed_data --profile jumpcpuser
filename='../../results/RepCor/'+fileName+'.xlsx'
# ls ../../
# https://cellpainting-datasets.s3.us-east-1.amazonaws.com/
```
# CDRP-BBBC047-Bray
### GE - L1000 - CDRP
```
os.listdir(rawProf_dir+'/l1000_CDRP/')
cdrp_dataDir=rawProf_dir+'/l1000_CDRP/'
cpd_info = pd.read_csv(cdrp_dataDir+"/compounds.txt", sep="\t", dtype=str)
cpd_info.columns
from scipy.io import loadmat
x = loadmat(cdrp_dataDir+'cdrp.all.prof.mat')
k1=x['metaWell']['pert_id'][0][0]
k2=x['metaGen']['AFFX_PROBE_ID'][0][0]
k3=x['metaWell']['pert_dose'][0][0]
k4=x['metaWell']['det_plate'][0][0]
# pert_dose
# x['metaWell']['pert_id'][0][0][0][0][0]
pertID = []
probID=[]
for r in range(len(k1)):
v = k1[r][0][0]
pertID.append(v)
# probID.append(k2[r][0][0])
for r in range(len(k2)):
probID.append(k2[r][0][0])
pert_dose=[]
det_plate=[]
for r in range(len(k3)):
pert_dose.append(k3[r][0])
det_plate.append(k4[r][0][0])
dataArray=x['pclfc'];
cdrp_l1k_rep = pd.DataFrame(data=dataArray,columns=probID)
cdrp_l1k_rep['pert_id']=pertID
cdrp_l1k_rep['pert_dose']=pert_dose
cdrp_l1k_rep['det_plate']=det_plate
cdrp_l1k_rep['BROAD_CPD_ID']=cdrp_l1k_rep['pert_id'].str[:13]
cdrp_l1k_rep2=pd.merge(cdrp_l1k_rep, cpd_info, how='left',on=['BROAD_CPD_ID'])
l1k_features_cdrp=cdrp_l1k_rep2.columns[cdrp_l1k_rep2.columns.str.contains("_at")]
cdrp_l1k_rep2['pert_id_dose']=cdrp_l1k_rep2['BROAD_CPD_ID']+'_'+cdrp_l1k_rep2['pert_dose'].round(2).astype(str)
cdrp_l1k_rep2['pert_sample_dose']=cdrp_l1k_rep2['pert_id']+'_'+cdrp_l1k_rep2['pert_dose'].round(2).astype(str)
# cdrp_l1k_df.head()
print(cpd_info.shape,cdrp_l1k_rep.shape,cdrp_l1k_rep2.shape)
cdrp_l1k_rep2['pert_id_dose']=cdrp_l1k_rep2['pert_id_dose'].replace('DMSO_-666.0', 'DMSO')
cdrp_l1k_rep2['pert_sample_dose']=cdrp_l1k_rep2['pert_sample_dose'].replace('DMSO_-666.0', 'DMSO')
saveDF_to_CSV_GZ_no_timestamp(cdrp_l1k_rep2,procProf_dir+'preprocessed_data/CDRP-BBBC047-Bray/L1000/replicate_level_l1k.csv.gz');
# cdrp_l1k_rep2.head()
# cpd_info
```
### CP - CDRP
```
profileType=['_augmented','_normalized']
bioactiveFlag="";# either "-bioactive" or ""
plates=os.listdir(rawProf_dir+'/CDRP'+bioactiveFlag+'/')
for pt in profileType[1:2]:
repLevelCDRP0=[]
for p in plates:
# repLevelCDRP0.append(pd.read_csv(rawProf_dir+'/CDRP/'+p+'/'+p+pt+'.csv'))
repLevelCDRP0.append(pd.read_csv(rawProf_dir+'/CDRP'+bioactiveFlag+'/'+p+'/'+p+pt+'.csv')) #if bioactive
repLevelCDRP = pd.concat(repLevelCDRP0)
metaCDRP1=pd.read_csv(rawProf_dir+'/CP_CDRP/metadata/metadata_CDRP.csv')
# metaCDRP1=metaCDRP1.rename(columns={"PlateName":"Metadata_Plate_Map_Name",'Well':'Metadata_Well'})
# metaCDRP1['Metadata_Well']=metaCDRP1['Metadata_Well'].str.lower()
repLevelCDRP2=pd.merge(repLevelCDRP, metaCDRP1, how='left',on=['Metadata_broad_sample'])
# repLevelCDRP2['Metadata_Sample_Dose']=repLevelCDRP2['Metadata_broad_sample']+'_'+repLevelCDRP2['Metadata_mmoles_per_liter'].round(0).astype(int).astype(str)
# repLevelCDRP2['Metadata_Sample_Dose']=repLevelCDRP2['Metadata_pert_id']+'_'+(repLevelCDRP2['Metadata_mmoles_per_liter']*2).round(0).astype(int).astype(str)
repLevelCDRP2["Metadata_mmoles_per_liter2"]=(repLevelCDRP2["Metadata_mmoles_per_liter"]*2).round(2)
repLevelCDRP2['Metadata_Sample_Dose']=repLevelCDRP2['Metadata_broad_sample']+'_'+repLevelCDRP2['Metadata_mmoles_per_liter2'].astype(str)
repLevelCDRP2['Metadata_Sample_Dose']=repLevelCDRP2['Metadata_Sample_Dose'].replace('DMSO_0.0', 'DMSO')
repLevelCDRP2['Metadata_pert_id']=repLevelCDRP2['Metadata_pert_id'].replace(np.nan, 'DMSO')
# repLevelCDRP2.to_csv(procProf_dir+'preprocessed_data/CDRPBIO-BBBC036-Bray/CellPainting/replicate_level_cp'+pt+'.csv.gz',index=False,compression='gzip')
# ,
if bioactiveFlag:
dataFolderName='CDRPBIO-BBBC036-Bray'
saveDF_to_CSV_GZ_no_timestamp(repLevelCDRP2,procProf_dir+'preprocessed_data/'+dataFolderName+\
'/CellPainting/replicate_level_cp'+pt+'.csv.gz')
else:
# sgfsgf
dataFolderName='CDRP-BBBC047-Bray'
saveDF_to_CSV_GZ_no_timestamp(repLevelCDRP2,procProf_dir+'preprocessed_data/'+dataFolderName+\
'/CellPainting/replicate_level_cp'+pt+'.csv.gz')
print(metaCDRP1.shape,repLevelCDRP.shape,repLevelCDRP2.shape)
dataFolderName='CDRP-BBBC047-Bray'
cp_feats=repLevelCDRP.columns[repLevelCDRP.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")].tolist()
features_to_remove =find_correlation(repLevelCDRP2[cp_feats], threshold=0.9, remove_negative=False)
repLevelCDRP2_var_sel=repLevelCDRP2.drop(columns=features_to_remove)
saveDF_to_CSV_GZ_no_timestamp(repLevelCDRP2_var_sel,procProf_dir+'preprocessed_data/'+dataFolderName+\
'/CellPainting/replicate_level_cp'+'_normalized_variable_selected'+'.csv.gz')
# features_to_remove
# features_to_remove
# features_to_remove
repLevelCDRP2['Nuclei_Texture_Variance_RNA_3_0']
# repLevelCDRP2.shape
# cp_scaled.columns[cp_scaled.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")].tolist()
```
# CDRP-bio-BBBC036-Bray
### GE - L1000 - CDRPBIO
```
bioactiveFlag="-bioactive";# either "-bioactive" or ""
plates=os.listdir(rawProf_dir+'/CDRP'+bioactiveFlag+'/')
# plates
cdrp_l1k_rep2_bioactive=cdrp_l1k_rep2[cdrp_l1k_rep2["pert_sample_dose"].isin(repLevelCDRP2.Metadata_Sample_Dose.unique().tolist())]
cdrp_l1k_rep.det_plate
```
### CP - CDRPBIO
```
profileType=['_augmented','_normalized','_normalized_variable_selected']
bioactiveFlag="-bioactive";# either "-bioactive" or ""
plates=os.listdir(rawProf_dir+'/CDRP'+bioactiveFlag+'/')
for pt in profileType:
repLevelCDRP0=[]
for p in plates:
# repLevelCDRP0.append(pd.read_csv(rawProf_dir+'/CDRP/'+p+'/'+p+pt+'.csv'))
repLevelCDRP0.append(pd.read_csv(rawProf_dir+'/CDRP'+bioactiveFlag+'/'+p+'/'+p+pt+'.csv')) #if bioactive
repLevelCDRP = pd.concat(repLevelCDRP0)
metaCDRP1=pd.read_csv(rawProf_dir+'/CP_CDRP/metadata/metadata_CDRP.csv')
# metaCDRP1=metaCDRP1.rename(columns={"PlateName":"Metadata_Plate_Map_Name",'Well':'Metadata_Well'})
# metaCDRP1['Metadata_Well']=metaCDRP1['Metadata_Well'].str.lower()
repLevelCDRP2=pd.merge(repLevelCDRP, metaCDRP1, how='left',on=['Metadata_broad_sample'])
# repLevelCDRP2['Metadata_Sample_Dose']=repLevelCDRP2['Metadata_broad_sample']+'_'+repLevelCDRP2['Metadata_mmoles_per_liter'].round(0).astype(int).astype(str)
# repLevelCDRP2['Metadata_Sample_Dose']=repLevelCDRP2['Metadata_pert_id']+'_'+(repLevelCDRP2['Metadata_mmoles_per_liter']*2).round(0).astype(int).astype(str)
repLevelCDRP2["Metadata_mmoles_per_liter2"]=(repLevelCDRP2["Metadata_mmoles_per_liter"]*2).round(2)
repLevelCDRP2['Metadata_Sample_Dose']=repLevelCDRP2['Metadata_broad_sample']+'_'+repLevelCDRP2['Metadata_mmoles_per_liter2'].astype(str)
repLevelCDRP2['Metadata_Sample_Dose']=repLevelCDRP2['Metadata_Sample_Dose'].replace('DMSO_0.0', 'DMSO')
repLevelCDRP2['Metadata_pert_id']=repLevelCDRP2['Metadata_pert_id'].replace(np.nan, 'DMSO')
# repLevelCDRP2.to_csv(procProf_dir+'preprocessed_data/CDRPBIO-BBBC036-Bray/CellPainting/replicate_level_cp'+pt+'.csv.gz',index=False,compression='gzip')
# ,
if bioactiveFlag:
dataFolderName='CDRPBIO-BBBC036-Bray'
saveDF_to_CSV_GZ_no_timestamp(repLevelCDRP2,procProf_dir+'preprocessed_data/'+dataFolderName+\
'/CellPainting/replicate_level_cp'+pt+'.csv.gz')
else:
dataFolderName='CDRP-BBBC047-Bray'
saveDF_to_CSV_GZ_no_timestamp(repLevelCDRP2,procProf_dir+'preprocessed_data/'+dataFolderName+\
'/CellPainting/replicate_level_cp'+pt+'.csv.gz')
print(metaCDRP1.shape,repLevelCDRP.shape,repLevelCDRP2.shape)
```
# LUAD-BBBC041-Caicedo
### GE - L1000 - LUAD
```
os.listdir(rawProf_dir+'/l1000_LUAD/input/')
os.listdir(rawProf_dir+'/l1000_LUAD/output/')
luad_dataDir=rawProf_dir+'/l1000_LUAD/'
luad_info1 = pd.read_csv(luad_dataDir+"/input/TA.OE014_A549_96H.map", sep="\t", dtype=str)
luad_info2 = pd.read_csv(luad_dataDir+"/input/TA.OE015_A549_96H.map", sep="\t", dtype=str)
luad_info=pd.concat([luad_info1, luad_info2], ignore_index=True)
luad_info.head()
luad_l1k_df = parse(luad_dataDir+"/output/high_rep_A549_8reps_141230_ZSPCINF_n4232x978.gctx").data_df.T.reset_index()
luad_l1k_df=luad_l1k_df.rename(columns={"cid":"id"})
# cdrp_l1k_df['XX']=cdrp_l1k_df['cid'].str[0]
# cdrp_l1k_df['BROAD_CPD_ID']=cdrp_l1k_df['cid'].str[2:15]
luad_l1k_df2=pd.merge(luad_l1k_df, luad_info, how='inner',on=['id'])
luad_l1k_df2=luad_l1k_df2.rename(columns={"x_mutation_status":"allele"})
l1k_features=luad_l1k_df2.columns[luad_l1k_df2.columns.str.contains("_at")]
luad_l1k_df2['allele']=luad_l1k_df2['allele'].replace('UnTrt', 'DMSO')
print(luad_info.shape,luad_l1k_df.shape,luad_l1k_df2.shape)
saveDF_to_CSV_GZ_no_timestamp(luad_l1k_df2,procProf_dir+'/preprocessed_data/LUAD-BBBC041-Caicedo/L1000/replicate_level_l1k.csv.gz')
luad_l1k_df_scaled = standardize_per_catX(luad_l1k_df2,'det_plate',l1k_features.tolist());
x_l1k_luad=replicateCorrs(luad_l1k_df_scaled.reset_index(drop=True),'allele',l1k_features,1)
# x_l1k_luad=replicateCorrs(luad_l1k_df2[luad_l1k_df2['allele']!='DMSO'].reset_index(drop=True),'allele',l1k_features,1)
# saveAsNewSheetToExistingFile(filename,x_l1k_luad[2],'l1k-luad')
```
### CP - LUAD
```
profileType=['_augmented','_normalized','_normalized_variable_selected']
plates=os.listdir('/storage/luad/profiles_cp/LUAD-BBBC043-Caicedo/')
for pt in profileType[1:2]:
repLevelLuad0=[]
for p in plates:
repLevelLuad0.append(pd.read_csv('/storage/luad/profiles_cp/LUAD-BBBC043-Caicedo/'+p+'/'+p+pt+'.csv'))
repLevelLuad = pd.concat(repLevelLuad0)
metaLuad1=pd.read_csv(rawProf_dir+'/CP_LUAD/metadata/combined_platemaps_AHB_20150506_ssedits.csv')
metaLuad1=metaLuad1.rename(columns={"PlateName":"Metadata_Plate_Map_Name",'Well':'Metadata_Well'})
metaLuad1['Metadata_Well']=metaLuad1['Metadata_Well'].str.lower()
# metaLuad2=pd.read_csv('~/workspace_rosetta/workspace/raw_profiles/CP_LUAD/metadata/barcode_platemap.csv')
# Y[Y['Metadata_Well']=='g05']['Nuclei_Texture_Variance_Mito_5_0']
repLevelLuad2=pd.merge(repLevelLuad, metaLuad1, how='inner',on=['Metadata_Plate_Map_Name','Metadata_Well'])
repLevelLuad2['x_mutation_status']=repLevelLuad2['x_mutation_status'].replace(np.nan, 'DMSO')
cp_features=repLevelLuad2.columns[repLevelLuad2.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")]
# repLevelLuad2.to_csv(procProf_dir+'preprocessed_data/LUAD-BBBC041-Caicedo/CellPainting/replicate_level_cp'+pt+'.csv.gz',index=False,compression='gzip')
saveDF_to_CSV_GZ_no_timestamp(repLevelLuad2,procProf_dir+'preprocessed_data/LUAD-BBBC041-Caicedo/CellPainting/replicate_level_cp'+pt+'.csv.gz')
print(metaLuad1.shape,repLevelLuad.shape,repLevelLuad2.shape)
pt=['_normalized']
# Read save data
repLevelLuad2=pd.read_csv('./preprocessed_data/LUAD-BBBC041-Caicedo/CellPainting/replicate_level_cp'+pt[0]+'.csv.gz')
# repLevelTA.head()
cp_features=repLevelLuad2.columns[repLevelLuad2.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")]
cols2remove0=[i for i in cp_features if ((repLevelLuad2[i].isnull()).sum(axis=0)/repLevelLuad2.shape[0])>0.05]
print(cols2remove0)
repLevelLuad2=repLevelLuad2.drop(cols2remove0, axis=1);
cp_features=repLevelLuad2.columns[repLevelLuad2.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")]
repLevelLuad2 = repLevelLuad2.interpolate()
repLevelLuad2 = standardize_per_catX(repLevelLuad2,'Metadata_Plate',cp_features.tolist());
df1=repLevelLuad2[~repLevelLuad2['x_mutation_status'].isnull()].reset_index(drop=True)
x_cp_luad=replicateCorrs(df1,'x_mutation_status',cp_features,1)
saveAsNewSheetToExistingFile(filename,x_cp_luad[2],'cp-luad')
```
# TA-ORF-BBBC037-Rohban
### GE - L1000
```
taorf_datadir=rawProf_dir+'/l1000_TA_ORF/'
gene_info = pd.read_csv(taorf_datadir+"TA.OE005_U2OS_72H.map.txt", sep="\t", dtype=str)
# gene_info.columns
# TA.OE005_U2OS_72H_INF_n729x22268.gctx
# TA.OE005_U2OS_72H_QNORM_n729x978.gctx
# TA.OE005_U2OS_72H_ZSPCINF_n729x22268.gctx
# TA.OE005_U2OS_72H_ZSPCQNORM_n729x978.gctx
taorf_l1k0 = parse(taorf_datadir+"TA.OE005_U2OS_72H_ZSPCQNORM_n729x978.gctx")
# taorf_l1k0 = parse(taorf_datadir+"TA.OE005_U2OS_72H_QNORM_n729x978.gctx")
taorf_l1k_df0=taorf_l1k0.data_df
taorf_l1k_df=taorf_l1k_df0.T.reset_index()
l1k_features=taorf_l1k_df.columns[taorf_l1k_df.columns.str.contains("_at")]
taorf_l1k_df=taorf_l1k_df.rename(columns={"cid":"id"})
taorf_l1k_df2=pd.merge(taorf_l1k_df, gene_info, how='inner',on=['id'])
# print(taorf_l1k_df.shape,gene_info.shape,taorf_l1k_df2.shape)
taorf_l1k_df2.head()
# x_genesymbol_mutation
taorf_l1k_df2['pert_id']=taorf_l1k_df2['pert_id'].replace('CMAP-000', 'DMSO')
# compression_opts = dict(method='zip',archive_name='out.csv')
# taorf_l1k_df2.to_csv(procProf_dir+'preprocessed_data/TA-ORF-BBBC037-Rohban/L1000/replicate_level_l1k.csv.gz',index=False,compression=compression_opts)
saveDF_to_CSV_GZ_no_timestamp(taorf_l1k_df2,procProf_dir+'preprocessed_data/TA-ORF-BBBC037-Rohban/L1000/replicate_level_l1k.csv.gz')
print(gene_info.shape,taorf_l1k_df.shape,taorf_l1k_df2.shape)
# gene_info.head()
taorf_l1k_df2.groupby(['x_genesymbol_mutation']).size().describe()
taorf_l1k_df2.groupby(['pert_id']).size().describe()
```
#### Check Replicate Correlation
```
# df1=taorf_l1k_df2[taorf_l1k_df2['pert_id']!='CMAP-000']
df1_scaled = standardize_per_catX(taorf_l1k_df2,'det_plate',l1k_features.tolist());
df1_scaled2=df1_scaled[df1_scaled['pert_id']!='DMSO']
x=replicateCorrs(df1_scaled2,'pert_id',l1k_features,1)
```
### CP - TAORF
```
profileType=['_augmented','_normalized','_normalized_variable_selected']
plates=os.listdir(rawProf_dir+'TA-ORF-BBBC037-Rohban/')
for pt in profileType[0:1]:
repLevelTA0=[]
for p in plates:
repLevelTA0.append(pd.read_csv(rawProf_dir+'TA-ORF-BBBC037-Rohban/'+p+'/'+p+pt+'.csv'))
repLevelTA = pd.concat(repLevelTA0)
metaTA1=pd.read_csv(rawProf_dir+'/CP_TA_ORF/metadata/metadata_TA.csv')
metaTA2=pd.read_csv(rawProf_dir+'/CP_TA_ORF/metadata/metadata_TA_2.csv')
# metaTA2=metaTA2.rename(columns={"Metadata_broad_sample":"Metadata_broad_sample_2",'Metadata_Treatment':'Gene Allele Name'})
metaTA=pd.merge(metaTA2, metaTA1, how='left',on=['Metadata_broad_sample'])
# metaTA2=metaTA2.rename(columns={"Metadata_Treatment":"Metadata_pert_name"})
# repLevelTA2=pd.merge(repLevelTA, metaTA2, how='left',on=['Metadata_pert_name'])
repLevelTA2=pd.merge(repLevelTA, metaTA, how='left',on=['Metadata_broad_sample'])
# repLevelTA2=repLevelTA2.rename(columns={"Gene Allele Name":"Allele"})
repLevelTA2['Metadata_broad_sample']=repLevelTA2['Metadata_broad_sample'].replace(np.nan, 'DMSO')
saveDF_to_CSV_GZ_no_timestamp(repLevelTA2,procProf_dir+'/preprocessed_data/TA-ORF-BBBC037-Rohban/CellPainting/replicate_level_cp'+pt+'.csv.gz')
print(metaTA.shape,repLevelTA.shape,repLevelTA2.shape)
# repLevelTA.head()
cp_features=repLevelTA2.columns[repLevelTA2.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")]
cols2remove0=[i for i in cp_features if ((repLevelTA2[i].isnull()).sum(axis=0)/repLevelTA2.shape[0])>0.05]
print(cols2remove0)
repLevelTA2=repLevelTA2.drop(cols2remove0, axis=1);
# cp_features=list(set(cp_features)-set(cols2remove0))
# repLevelTA2=repLevelTA2.replace('nan', np.nan)
repLevelTA2 = repLevelTA2.interpolate()
cp_features=repLevelTA2.columns[repLevelTA2.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")]
repLevelTA2 = standardize_per_catX(repLevelTA2,'Metadata_Plate',cp_features.tolist());
df1=repLevelTA2[~repLevelTA2['Metadata_broad_sample'].isnull()].reset_index(drop=True)
x_taorf_cp=replicateCorrs(df1,'Metadata_broad_sample',cp_features,1)
# saveAsNewSheetToExistingFile(filename,x_taorf_cp[2],'cp-taorf')
# plates
```
# LINCS-Pilot1
### GE - L1000 - LINCS
```
os.listdir(rawProf_dir+'/l1000_LINCS/2016_04_01_a549_48hr_batch1_L1000/')
os.listdir(rawProf_dir+'/l1000_LINCS/metadata/')
data_meta_match_ls=[['level_3','level_3_q2norm_n27837x978.gctx','col_meta_level_3_REP.A_A549_only_n27837.txt'],
['level_4W','level_4W_zspc_n27837x978.gctx','col_meta_level_3_REP.A_A549_only_n27837.txt'],
['level_4','level_4_zspc_n27837x978.gctx','col_meta_level_3_REP.A_A549_only_n27837.txt'],
['level_5_modz','level_5_modz_n9482x978.gctx','col_meta_level_5_REP.A_A549_only_n9482.txt'],
['level_5_rank','level_5_rank_n9482x978.gctx','col_meta_level_5_REP.A_A549_only_n9482.txt']]
lincs_dataDir=rawProf_dir+'/l1000_LINCS/'
lincs_pert_info = pd.read_csv(lincs_dataDir+"/metadata/REP.A_A549_pert_info.txt", sep="\t", dtype=str)
lincs_meta_level3 = pd.read_csv(lincs_dataDir+"/metadata/col_meta_level_3_REP.A_A549_only_n27837.txt", sep="\t", dtype=str)
# lincs_info1 = pd.read_csv(lincs_dataDir+"/metadata/REP.A_A549_pert_info.txt", sep="\t", dtype=str)
print(lincs_meta_level3.shape)
lincs_meta_level3.head()
# lincs_info2 = pd.read_csv(lincs_dataDir+"/input/TA.OE015_A549_96H.map", sep="\t", dtype=str)
# lincs_info=pd.concat([lincs_info1, lincs_info2], ignore_index=True)
# lincs_info.head()
# lincs_meta_level3.groupby('distil_id').size()
lincs_meta_level3['distil_id'].unique().shape
# lincs_meta_level3.columns.tolist()
# lincs_meta_level3.pert_id
ls /home/ubuntu/workspace_rosetta/workspace/software/2018_04_20_Rosetta/preprocessed_data/LINCS-Pilot1/CellPainting
# procProf_dir+'preprocessed_data/LINCS-Pilot1/'
procProf_dir
for el in data_meta_match_ls:
lincs_l1k_df=parse(lincs_dataDir+"/2016_04_01_a549_48hr_batch1_L1000/"+el[1]).data_df.T.reset_index()
lincs_meta0 = pd.read_csv(lincs_dataDir+"/metadata/"+el[2], sep="\t", dtype=str)
lincs_meta=pd.merge(lincs_meta0, lincs_pert_info, how='left',on=['pert_id'])
lincs_meta=lincs_meta.rename(columns={"distil_id":"cid"})
lincs_l1k_df2=pd.merge(lincs_l1k_df, lincs_meta, how='inner',on=['cid'])
lincs_l1k_df2['pert_id_dose']=lincs_l1k_df2['pert_id']+'_'+lincs_l1k_df2['nearest_dose'].astype(str)
lincs_l1k_df2['pert_id_dose']=lincs_l1k_df2['pert_id_dose'].replace('DMSO_-666', 'DMSO')
# lincs_l1k_df2.to_csv(procProf_dir+'preprocessed_data/LINCS-Pilot1/L1000/'+el[0]+'.csv.gz',index=False,compression='gzip')
saveDF_to_CSV_GZ_no_timestamp(lincs_l1k_df2,procProf_dir+'preprocessed_data/LINCS-Pilot1/L1000/'+el[0]+'.csv.gz')
# lincs_l1k_df2
lincs_l1k_rep = pd.read_csv(procProf_dir+'preprocessed_data/LINCS-Pilot1/L1000/'+data_meta_match_ls[1][0]+'.csv.gz')
lincs_l1k_rep['pert_id_dose'].unique()
# l1k_features=lincs_l1k_rep.columns[lincs_l1k_rep.columns.str.contains("_at")]
# x=replicateCorrs(lincs_l1k_rep[lincs_l1k_rep['pert_iname_x']!='DMSO'].reset_index(drop=True),'pert_id',l1k_features,1)
# # saveAsNewSheetToExistingFile(filename,x[2],'l1k-lincs')
# # lincs_l1k_rep.head()
lincs_l1k_rep.pert_id.unique().shape
lincs_l1k_rep = pd.read_csv(procProf_dir+'preprocessed_data/LINCS-Pilot1/L1000/'+data_meta_match_ls[2][0]+'.csv.gz')
lincs_l1k_rep.columns[lincs_l1k_rep.columns.str.contains('dose')]
lincs_l1k_rep[['pert_dose', 'pert_dose_unit', 'pert_idose', 'nearest_dose']]
lincs_l1k_rep['nearest_dose'].unique()
# lincs_l1k_rep.rna_plate.unique()
lincs_l1k_rep = pd.read_csv(procProf_dir+'preprocessed_data/LINCS-Pilot1/L1000/'+data_meta_match_ls[2][0]+'.csv.gz')
l1k_features=lincs_l1k_rep.columns[lincs_l1k_rep.columns.str.contains("_at")]
lincs_l1k_rep = standardize_per_catX(lincs_l1k_rep,'det_plate',l1k_features.tolist());
x=replicateCorrs(lincs_l1k_rep[lincs_l1k_rep['pert_iname_x']!='DMSO'].reset_index(drop=True),'pert_id',l1k_features,1)
lincs_l1k_rep = pd.read_csv(procProf_dir+'preprocessed_data/LINCS-Pilot1/L1000/'+data_meta_match_ls[2][0]+'.csv.gz')
l1k_features=lincs_l1k_rep.columns[lincs_l1k_rep.columns.str.contains("_at")]
lincs_l1k_rep = standardize_per_catX(lincs_l1k_rep,'det_plate',l1k_features.tolist());
x_l1k_lincs=replicateCorrs(lincs_l1k_rep[lincs_l1k_rep['pert_iname_x']!='DMSO'].reset_index(drop=True),'pert_id_dose',l1k_features,1)
saveAsNewSheetToExistingFile(filename,x_l1k_lincs[2],'l1k-lincs')
saveAsNewSheetToExistingFile(filename,x[2],'l1k-lincs')
```
raw data
```
# set(repLevelLuad2)-set(Y1.columns)
# Y1[['Allele', 'Category', 'Clone ID', 'Gene Symbol']].head()
# repLevelLuad2[repLevelLuad2['PublicID']=='BRDN0000553807'][['Col','InsertLength','NCBIGeneID','Name','OtherDescriptions','PublicID','Row','Symbol','Transcript','Vector','pert_type','x_mutation_status']].head()
```
#### Check Replicate Correlation
### CP - LINCS
```
# Ran the following on:
# https://ec2-54-242-99-61.compute-1.amazonaws.com:5006/notebooks/workspace_nucleolar/2020_07_20_Nucleolar_Calico/1-NucleolarSizeMetrics.ipynb
# Metadata
def recode_dose(x, doses, return_level=False):
closest_index = np.argmin([np.abs(dose - x) for dose in doses])
if np.isnan(x):
return 0
if return_level:
return closest_index + 1
else:
return doses[closest_index]
primary_dose_mapping = [0.04, 0.12, 0.37, 1.11, 3.33, 10, 20]
metadata=pd.read_csv("/home/ubuntu/bucket/projects/2018_04_20_Rosetta/workspace/raw-profiles/CP_LINCS/metadata/matadata_lincs_2.csv")
metadata['Metadata_mmoles_per_liter']=metadata.mmoles_per_liter.values.round(2)
metadata=metadata.rename(columns={"Assay_Plate_Barcode": "Metadata_Plate",'broad_sample':'Metadata_broad_sample','well_position':'Metadata_Well'})
lincs_submod_root_dir="/home/ubuntu/datasetsbucket/lincs-cell-painting/"
profileType=['_augmented','_normalized','_normalized_dmso',\
'_normalized_feature_select','_normalized_feature_select_dmso']
# profileType=['_normalized']
# plates=metadata.Assay_Plate_Barcode.unique().tolist()
plates=metadata.Metadata_Plate.unique().tolist()
for pt in profileType[4:5]:
repLevelLINCS0=[]
for p in plates:
profile_add=lincs_submod_root_dir+"/profiles/2016_04_01_a549_48hr_batch1/"+p+"/"+p+pt+".csv.gz"
if os.path.exists(profile_add):
repLevelLINCS0.append(pd.read_csv(profile_add))
repLevelLINCS = pd.concat(repLevelLINCS0)
meta_lincs1=metadata.rename(columns={"broad_sample": "Metadata_broad_sample"})
# metaCDRP1=metaCDRP1.rename(columns={"PlateName":"Metadata_Plate_Map_Name",'Well':'Metadata_Well'})
# metaCDRP1['Metadata_Well']=metaCDRP1['Metadata_Well'].str.lower()
repLevelLINCS2=pd.merge(repLevelLINCS,meta_lincs1,how='left', on=["Metadata_broad_sample","Metadata_Well","Metadata_Plate",'Metadata_mmoles_per_liter'])
repLevelLINCS2 = repLevelLINCS2.assign(Metadata_dose_recode=(repLevelLINCS2.Metadata_mmoles_per_liter.apply(
lambda x: recode_dose(x, primary_dose_mapping, return_level=False))))
repLevelLINCS2['Metadata_pert_id_dose']=repLevelLINCS2['Metadata_pert_id']+'_'+repLevelLINCS2['Metadata_dose_recode'].astype(str)
# repLevelLINCS2['Metadata_Sample_Dose']=repLevelLINCS2['Metadata_broad_sample']+'_'+repLevelLINCS2['Metadata_dose_recode'].astype(str)
repLevelLINCS2['Metadata_pert_id_dose']=repLevelLINCS2['Metadata_pert_id_dose'].replace(np.nan, 'DMSO')
# saveDF_to_CSV_GZ_no_timestamp(repLevelLINCS2,procProf_dir+'/preprocessed_data/LINCS-Pilot1/CellPainting/replicate_level_cp'+pt+'.csv.gz')
print(meta_lincs1.shape,repLevelLINCS.shape,repLevelLINCS2.shape)
# (8120, 15) (52223, 1810) (688699, 1825)
# repLevelLINCS
# pd.merge(repLevelLINCS,meta_lincs1,how='left', on=["Metadata_broad_sample"]).shape
repLevelLINCS.shape,meta_lincs1.shape
# (8120, 15) (52223, 1238) (52223, 1253)
csv_l1k_lincs=pd.read_csv('./preprocessed_data/LINCS-Pilot1/L1000/replicate_level_l1k'+'.csv.gz')
csv_pddf=pd.read_csv('./preprocessed_data/LINCS-Pilot1/CellPainting/replicate_level_cp'+pt[0]+'.csv.gz')
csv_l1k_lincs.head()
csv_l1k_lincs.pert_id_dose.unique()
csv_pddf.Metadata_pert_id_dose.unique()
```
#### Read saved data
```
repLevelLINCS2.groupby(['Metadata_pert_id']).size()
repLevelLINCS2.groupby(['Metadata_pert_id_dose']).size().describe()
repLevelLINCS2.Metadata_Plate.unique().shape
repLevelLINCS2['Metadata_pert_id_dose'].unique().shape
# csv_pddf['Metadata_mmoles_per_liter'].round(0).unique()
# np.sort(csv_pddf['Metadata_mmoles_per_liter'].unique())
csv_pddf.groupby(['Metadata_dose_recode']).size()#.median()
repLevelLincs2=csv_pddf.copy()  # needed below: the following cells operate on repLevelLincs2
import gc
cp_features=repLevelLincs2.columns[repLevelLincs2.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")]
cols2remove0=[i for i in cp_features if ((repLevelLincs2[i].isnull()).sum(axis=0)/repLevelLincs2.shape[0])>0.05]
print(cols2remove0)
repLevelLincs3=repLevelLincs2.drop(cols2remove0, axis=1);
print('here0')
# cp_features=list(set(cp_features)-set(cols2remove0))
# repLevelTA2=repLevelTA2.replace('nan', np.nan)
del repLevelLincs2
gc.collect()
print('here0')
cp_features=repLevelLincs3.columns[repLevelLincs3.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")]
repLevelLincs3[cp_features] = repLevelLincs3[cp_features].interpolate()
print('here1')
repLevelLincs3 = standardize_per_catX(repLevelLincs3,'Metadata_Plate',cp_features.tolist());
print('here1')
# df0=repLevelCDRP3[repLevelCDRP3['Metadata_broad_sample']!='DMSO'].reset_index(drop=True)
# repSizeDF=repLevelLincs3.groupby(['Metadata_broad_sample']).size().reset_index()
repSizeDF=repLevelLincs3.groupby(['Metadata_pert_id_dose']).size().reset_index()
highRepComp=repSizeDF[repSizeDF[0]>1].Metadata_pert_id_dose.tolist()
highRepComp.remove('DMSO')
# df0=repLevelLincs3[(repLevelLincs3['Metadata_broad_sample'].isin(highRepComp)) &\
# (repLevelLincs3['Metadata_dose_recode']==1.11)]
df0=repLevelLincs3[(repLevelLincs3['Metadata_pert_id_dose'].isin(highRepComp))]
x_lincs_cp=replicateCorrs(df0,'Metadata_pert_id_dose',cp_features,1)
# saveAsNewSheetToExistingFile(filename,x_lincs_cp[2],'cp-lincs')
repSizeDF
# repLevelLincs2=csv_pddf.copy()
# cp_features=repLevelLincs2.columns[repLevelLincs2.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")]
# cols2remove0=[i for i in cp_features if ((repLevelLincs2[i].isnull()).sum(axis=0)/repLevelLincs2.shape[0])>0.05]
# print(cols2remove0)
# repLevelLincs3=repLevelLincs2.drop(cols2remove0, axis=1);
# # cp_features=list(set(cp_features)-set(cols2remove0))
# # repLevelTA2=repLevelTA2.replace('nan', np.nan)
# repLevelLincs3 = repLevelLincs3.interpolate()
# repLevelLincs3 = standardize_per_catX(repLevelLincs3,'Metadata_Plate',cp_features.tolist());
# cp_features=repLevelLincs3.columns[repLevelLincs3.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")]
# # df0=repLevelCDRP3[repLevelCDRP3['Metadata_broad_sample']!='DMSO'].reset_index(drop=True)
# # repSizeDF=repLevelLincs3.groupby(['Metadata_broad_sample']).size().reset_index()
repSizeDF=repLevelLincs3.groupby(['Metadata_pert_id']).size().reset_index()
highRepComp=repSizeDF[repSizeDF[0]>1].Metadata_pert_id.tolist()
# highRepComp.remove('DMSO')
# df0=repLevelLincs3[(repLevelLincs3['Metadata_broad_sample'].isin(highRepComp)) &\
# (repLevelLincs3['Metadata_dose_recode']==1.11)]
df0=repLevelLincs3[(repLevelLincs3['Metadata_pert_id'].isin(highRepComp))]
x_lincs_cp=replicateCorrs(df0,'Metadata_pert_id',cp_features,1)
# saveAsNewSheetToExistingFile(filename,x_lincs_cp[2],'cp-lincs')
# x=replicateCorrs(df0,'Metadata_broad_sample',cp_features,1)
# highRepComp[-1]
saveAsNewSheetToExistingFile(filename,x[2],'cp-lincs')
# repLevelLincs3.Metadata_Plate
repLevelLincs3.head()
# csv_pddf[(csv_pddf['Metadata_dose_recode']==0.04) & (csv_pddf['Metadata_pert_id']=="BRD-A00147595")][['Metadata_Plate','Metadata_Well']].drop_duplicates()
# csv_pddf[(csv_pddf['Metadata_dose_recode']==0.04) & (csv_pddf['Metadata_pert_id']=="BRD-A00147595") &
# (csv_pddf['Metadata_Plate']=='SQ00015196') & (csv_pddf['Metadata_Well']=="B12")][csv_pddf.columns[1820:]].drop_duplicates()
# def standardize_per_catX(df,column_name):
column_name='Metadata_Plate'
repLevelLincs_scaled_perPlate=repLevelLincs3.copy()
repLevelLincs_scaled_perPlate[cp_features.tolist()]=repLevelLincs3[cp_features.tolist()+[column_name]].groupby(column_name).transform(lambda x: (x - x.mean()) / x.std()).values
# def standardize_per_catX(df,column_name):
# # column_name='Metadata_Plate'
# cp_features=df.columns[df.columns.str.contains("Cells_|Cytoplasm_|Nuclei_")]
# df_scaled_perPlate=df.copy()
# df_scaled_perPlate[cp_features.tolist()]=\
# df[cp_features.tolist()+[column_name]].groupby(column_name)\
# .transform(lambda x: (x - x.mean()) / x.std()).values
# return df_scaled_perPlate
df0=repLevelLincs_scaled_perPlate[(repLevelLincs_scaled_perPlate['Metadata_Sample_Dose'].isin(highRepComp))]
x=replicateCorrs(df0,'Metadata_broad_sample',cp_features,1)
```
| true | code | 0.294196 | null | null | null | null |
# NOAA Wave Watch 3 and NDBC Buoy Data Comparison
*Note: this notebook requires python3.*
This notebook demonstrates how to compare [WaveWatch III Global Ocean Wave Model](http://data.planetos.com/datasets/noaa_ww3_global_1.25x1d:noaa-wave-watch-iii-nww3-ocean-wave-model?utm_source=github&utm_medium=notebook&utm_campaign=ndbc-wavewatch-iii-notebook) and [NOAA NDBC buoy data](http://data.planetos.com/datasets/noaa_ndbc_stdmet_stations?utm_source=github&utm_medium=notebook&utm_campaign=ndbc-wavewatch-iii-notebook) using the Planet OS API.
API documentation is available at http://docs.planetos.com. If you have questions or comments, join the [Planet OS Slack community](http://slack.planetos.com/) to chat with our development team.
For general information on the usage of IPython/Jupyter and Matplotlib, please refer to their corresponding documentation: https://ipython.org/ and http://matplotlib.org/. This notebook also makes use of the [matplotlib basemap toolkit](http://matplotlib.org/basemap/index.html).
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import dateutil.parser
import datetime
from urllib.request import urlopen, Request
import simplejson as json
from datetime import date, timedelta, datetime
import matplotlib.dates as mdates
from mpl_toolkits.basemap import Basemap
```
**Important!** You'll need to replace `apikey` below with your actual Planet OS API key, which you'll find [on the Planet OS account settings page](http://data.planetos.com/account/settings/?utm_source=github&utm_medium=notebook&utm_campaign=ww3-api-notebook), and set the NDBC buoy station name you are interested in.
```
dataset_id = 'noaa_ndbc_stdmet_stations'
## stations with wave height available: '46006', '46013', '46029'
## stations without wave height: icac1', '41047', 'bepb6', '32st0', '51004'
## stations too close to coastline (no point to compare to ww3)'sacv4', 'gelo1', 'hcef1'
station = '46029'
apikey = open('APIKEY').readlines()[0].strip() #'<YOUR API KEY HERE>'
```
Let's first query the API to see what stations are available for the [NDBC Standard Meteorological Data dataset.](http://data.planetos.com/datasets/noaa_ndbc_stdmet_stations?utm_source=github&utm_medium=notebook&utm_campaign=ndbc-wavewatch-iii-notebook)
```
API_url = 'http://api.planetos.com/v1/datasets/%s/stations?apikey=%s' % (dataset_id, apikey)
request = Request(API_url)
response = urlopen(request)
API_data_locations = json.loads(response.read())
# print(API_data_locations)
```
Now we'll use matplotlib to visualize the stations on a simple basemap.
```
m = Basemap(projection='merc',llcrnrlat=-80,urcrnrlat=80,\
llcrnrlon=-180,urcrnrlon=180,lat_ts=20,resolution='c')
fig=plt.figure(figsize=(15,10))
m.drawcoastlines()
##m.fillcontinents()
for i in API_data_locations['station']:
x,y=m(API_data_locations['station'][i]['SpatialExtent']['coordinates'][0],
API_data_locations['station'][i]['SpatialExtent']['coordinates'][1])
plt.scatter(x,y,color='r')
x,y=m(API_data_locations['station'][station]['SpatialExtent']['coordinates'][0],
API_data_locations['station'][station]['SpatialExtent']['coordinates'][1])
plt.scatter(x,y,s=100,color='b')
```
Let's examine the last five days of data. For the WaveWatch III forecast, we'll use the reference time parameter to pull forecast data from the 18:00 model run from five days ago.
```
## Find suitable reference time values
atthemoment = datetime.utcnow()
atthemoment = atthemoment.strftime('%Y-%m-%dT%H:%M:%S')
before5days = datetime.utcnow() - timedelta(days=5)
before5days_long = before5days.strftime('%Y-%m-%dT%H:%M:%S')
before5days_short = before5days.strftime('%Y-%m-%d')
start = before5days_long
end = atthemoment
reftime_start = str(before5days_short) + 'T18:00:00'
reftime_end = reftime_start
```
API request for NOAA NDBC buoy station data
```
API_url = "http://api.planetos.com/v1/datasets/{0}/point?station={1}&apikey={2}&start={3}&end={4}&count=1000".format(dataset_id,station,apikey,start,end)
print(API_url)
request = Request(API_url)
response = urlopen(request)
API_data_buoy = json.loads(response.read())
buoy_variables = []
for k,v in set([(j,i['context']) for i in API_data_buoy['entries'] for j in i['data'].keys()]):
buoy_variables.append(k)
```
Find buoy station coordinates to use them later for finding NOAA Wave Watch III data
```
for i in API_data_buoy['entries']:
#print(i['axes']['time'])
if i['context'] == 'time_latitude_longitude':
longitude = (i['axes']['longitude'])
latitude = (i['axes']['latitude'])
print ('Latitude: '+ str(latitude))
print ('Longitude: '+ str(longitude))
```
API request for NOAA WaveWatch III (NWW3) Ocean Wave Model near the point of selected station. Note that data may not be available at the requested reference time. If the response is empty, try removing the reference time parameters `reftime_start` and `reftime_end` from the query.
```
API_url = 'http://api.planetos.com/v1/datasets/noaa_ww3_global_1.25x1d/point?lat={0}&lon={1}&verbose=true&apikey={2}&count=100&end={3}&reftime_start={4}&reftime_end={5}'.format(latitude,longitude,apikey,end,reftime_start,reftime_end)
request = Request(API_url)
response = urlopen(request)
API_data_ww3 = json.loads(response.read())
print(API_url)
ww3_variables = []
for k,v in set([(j,i['context']) for i in API_data_ww3['entries'] for j in i['data'].keys()]):
ww3_variables.append(k)
```
Manually review the list of WaveWatch and NDBC data variables to determine which parameters are equivalent for comparison.
```
print(ww3_variables)
print(buoy_variables)
```
Next we'll build a dictionary of corresponding variables that we want to compare.
```
buoy_model = {'wave_height':'Significant_height_of_combined_wind_waves_and_swell_surface',
'mean_wave_dir':'Primary_wave_direction_surface',
'average_wpd':'Primary_wave_mean_period_surface',
'wind_spd':'Wind_speed_surface'}
```
Read data from the JSON responses and convert the values to floats for plotting. Note that depending on the dataset, some variables have different timesteps than others, so a separate time array for each variable is recommended.
```
def append_data(in_string):
if in_string == None:
return np.nan
elif in_string == 'None':
return np.nan
else:
return float(in_string)
ww3_data = {}
ww3_times = {}
buoy_data = {}
buoy_times = {}
for k,v in buoy_model.items():
ww3_data[v] = []
ww3_times[v] = []
buoy_data[k] = []
buoy_times[k] = []
for i in API_data_ww3['entries']:
for j in i['data']:
if j in buoy_model.values():
ww3_data[j].append(append_data(i['data'][j]))
ww3_times[j].append(dateutil.parser.parse(i['axes']['time']))
for i in API_data_buoy['entries']:
for j in i['data']:
if j in buoy_model.keys():
buoy_data[j].append(append_data(i['data'][j]))
buoy_times[j].append(dateutil.parser.parse(i['axes']['time']))
for i in ww3_data:
ww3_data[i] = np.array(ww3_data[i])
ww3_times[i] = np.array(ww3_times[i])
```
Finally, let's plot the data using matplotlib.
```
buoy_label = "NDBC Station %s" % station
ww3_label = "WW3 at %s" % reftime_start
for k,v in buoy_model.items():
if np.abs(np.nansum(buoy_data[k]))>0:
fig=plt.figure(figsize=(10,5))
plt.title(k+' '+v)
plt.plot(ww3_times[v],ww3_data[v], label=ww3_label)
plt.plot(buoy_times[k],buoy_data[k],'*',label=buoy_label)
plt.legend(bbox_to_anchor=(1.5, 0.22), loc=1, borderaxespad=0.)
plt.xlabel('Time')
plt.ylabel(k)
fig.autofmt_xdate()
plt.grid()
```
|
```
import json
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import quad
from scipy.special import comb
from tabulate import tabulate
%matplotlib inline
```
## Expected numbers on Table 3.
```
rows = []
datasets = {
'Binary': 2,
'AG news': 4,
'CIFAR10': 10,
'CIFAR100': 100,
'Wiki3029': 3029,
}
def expectations(C: int) -> float:
"""
C is the number of latent classes.
"""
e = 0.
for k in range(1, C + 1):
e += C / k
return e
for dataset_name, C in datasets.items():
e = expectations(C)
rows.append((dataset_name, C, np.ceil(e)))
# ImageNet has a non-uniform label distribution on the training dataset
data = json.load(open("./imagenet_count.json"))
counts = np.array(list(data.values()))
total_num = np.sum(counts)
prob = counts / total_num
def integrand(t: float, prob: np.ndarray) -> float:
return 1. - np.prod(1 - np.exp(-prob * t))
rows.append(("ImageNet", len(prob), np.ceil(quad(integrand, 0, np.inf, args=(prob))[0])))
print(tabulate(rows, headers=["Dataset", "\# classes", "\mathbb{E}[K+1]"]))
```
## Probability $\upsilon$
```
def prob(C, N):
"""
C: the number of latent class
N: the number of samples to draw
"""
theoretical = []
for n in range(C, N + 1):
p = 0.
for m in range(C - 1):
p += comb(C - 1, m) * ((-1) ** m) * np.exp((n - 1) * np.log(1. - (m + 1) / C))
theoretical.append((n, max(p, 0.)))
return np.array(theoretical)
# example of CIFAR-10
C = 10
for N in [32, 63, 128, 256, 512]:
p = np.sum(prob(C, N).T[1])
print("{:3d} {:.7f}".format(N, p))
# example of CIFAR-100
C = 100
ps = []
ns = []
for N in 128 * np.arange(1, 9):
p = np.sum(prob(C, N).T[1])
print("{:4d} {}".format(N, p))
ps.append(p)
ns.append(N)
```
## Simulation
```
n_loop = 10
rnd = np.random.RandomState(7)
labels = np.arange(C).repeat(100)
results = {}
for N in ns:
num_iters = int(len(labels) / N)
total_samples_for_bounds = float(num_iters * N * (n_loop))
for _ in range(n_loop):
rnd.shuffle(labels)
for batch_id in range(len(labels) // N):
if len(set(labels[N * batch_id:N * (batch_id + 1)])) == C:
results[N] = results.get(N, 0.) + N / total_samples_for_bounds
else:
results[N] = results.get(N, 0.) + 0.
xs = []
ys = []
for k, v in results.items():
print(k, v)
ys.append(v)
xs.append(k)
plt.plot(ns, ps, label="Theoretical")
plt.plot(xs, ys, label="Empirical")
plt.ylabel("probability")
plt.xlabel("$K+1$")
plt.title("CIFAR-100 simulation")
plt.legend()
```
|
# PageRank Performance Benchmarking
# Skip notebook test
This notebook benchmarks the performance of running PageRank within cuGraph against NetworkX. NetworkX contains several implementations of PageRank. This benchmark will compare cuGraph against the default NetworkX implementation as well as the SciPy-based version.
Notebook Credits
Original Authors: Bradley Rees
Last Edit: 08/16/2020
RAPIDS Versions: 0.15
Test Hardware
GV100 32G, CUDA 10.0
Intel(R) Core(TM) CPU i7-7800X @ 3.50GHz
32GB system memory
### Test Data
| File Name | Num of Vertices | Num of Edges |
|:---------------------- | --------------: | -----------: |
| preferentialAttachment | 100,000 | 999,970 |
| caidaRouterLevel | 192,244 | 1,218,132 |
| coAuthorsDBLP | 299,067 | 1,955,352 |
| dblp-2010 | 326,186 | 1,615,400 |
| citationCiteseer | 268,495 | 2,313,294 |
| coPapersDBLP | 540,486 | 30,491,458 |
| coPapersCiteseer | 434,102 | 32,073,440 |
| as-Skitter | 1,696,415 | 22,190,596 |
### Timing
What is not timed: Reading the data
What is timed: (1) creating a Graph, (2) running PageRank
The data file is read in once for all flavors of PageRank. Each timed block will create a Graph and then execute the algorithm. The results of the algorithm are not compared. If you are interested in seeing the comparison of results, then please see PageRank in the __notebooks__ repo.
## NOTICE
_You must have run the __dataPrep__ script prior to running this notebook so that the data is downloaded_
See the README file in this folder for a description of how to get the data
## Now load the required libraries
```
# Import needed libraries
import gc
import os
import time
import rmm
import cugraph
import cudf
# NetworkX libraries
import networkx as nx
from scipy.io import mmread
try:
import matplotlib
except ModuleNotFoundError:
os.system('pip install matplotlib')
import matplotlib.pyplot as plt; plt.rcdefaults()
import numpy as np
```
### Define the test data
```
# Test File
data = {
'preferentialAttachment' : './data/preferentialAttachment.mtx',
'caidaRouterLevel' : './data/caidaRouterLevel.mtx',
'coAuthorsDBLP' : './data/coAuthorsDBLP.mtx',
'dblp' : './data/dblp-2010.mtx',
'citationCiteseer' : './data/citationCiteseer.mtx',
'coPapersDBLP' : './data/coPapersDBLP.mtx',
'coPapersCiteseer' : './data/coPapersCiteseer.mtx',
'as-Skitter' : './data/as-Skitter.mtx'
}
```
### Define the testing functions
```
# Data reader - the file format is MTX, so we will use the reader from SciPy
def read_mtx_file(mm_file):
print('Reading ' + str(mm_file) + '...')
M = mmread(mm_file).asfptype()
return M
# CuGraph PageRank
def cugraph_call(M, max_iter, tol, alpha):
gdf = cudf.DataFrame()
gdf['src'] = M.row
gdf['dst'] = M.col
print('\tcuGraph Solving... ')
t1 = time.time()
# cugraph Pagerank Call
G = cugraph.DiGraph()
G.from_cudf_edgelist(gdf, source='src', destination='dst', renumber=False)
df = cugraph.pagerank(G, alpha=alpha, max_iter=max_iter, tol=tol)
t2 = time.time() - t1
return t2
# Basic NetworkX PageRank
def networkx_call(M, max_iter, tol, alpha):
nnz_per_row = {r: 0 for r in range(M.get_shape()[0])}
for nnz in range(M.getnnz()):
nnz_per_row[M.row[nnz]] = 1 + nnz_per_row[M.row[nnz]]
for nnz in range(M.getnnz()):
M.data[nnz] = 1.0/float(nnz_per_row[M.row[nnz]])
M = M.tocsr()
if M is None:
raise TypeError('Could not read the input graph')
if M.shape[0] != M.shape[1]:
raise TypeError('Shape is not square')
# should be autosorted, but check just to make sure
if not M.has_sorted_indices:
print('sort_indices ... ')
M.sort_indices()
z = {k: 1.0/M.shape[0] for k in range(M.shape[0])}
print('\tNetworkX Solving... ')
# start timer
t1 = time.time()
Gnx = nx.DiGraph(M)
pr = nx.pagerank(Gnx, alpha, z, max_iter, tol)
t2 = time.time() - t1
return t2
# SciPy PageRank
def networkx_scipy_call(M, max_iter, tol, alpha):
nnz_per_row = {r: 0 for r in range(M.get_shape()[0])}
for nnz in range(M.getnnz()):
nnz_per_row[M.row[nnz]] = 1 + nnz_per_row[M.row[nnz]]
for nnz in range(M.getnnz()):
M.data[nnz] = 1.0/float(nnz_per_row[M.row[nnz]])
M = M.tocsr()
if M is None:
raise TypeError('Could not read the input graph')
if M.shape[0] != M.shape[1]:
raise TypeError('Shape is not square')
# should be autosorted, but check just to make sure
if not M.has_sorted_indices:
print('sort_indices ... ')
M.sort_indices()
z = {k: 1.0/M.shape[0] for k in range(M.shape[0])}
# SciPy Pagerank Call
print('\tSciPy Solving... ')
t1 = time.time()
Gnx = nx.DiGraph(M)
pr = nx.pagerank_scipy(Gnx, alpha, z, max_iter, tol)
t2 = time.time() - t1
return t2
```
### Run the benchmarks
```
# arrays to capture performance gains
time_cu = []
time_nx = []
time_sp = []
perf_nx = []
perf_sp = []
names = []
# init libraries by doing a simple task
v = './data/preferentialAttachment.mtx'
M = read_mtx_file(v)
trapids = cugraph_call(M, 100, 0.00001, 0.85)
del M
for k,v in data.items():
gc.collect()
# Saved the file Name
names.append(k)
# read the data
M = read_mtx_file(v)
# call cuGraph - this will be the baseline
trapids = cugraph_call(M, 100, 0.00001, 0.85)
time_cu.append(trapids)
# Now call NetworkX
tn = networkx_call(M, 100, 0.00001, 0.85)
speedUp = (tn / trapids)
perf_nx.append(speedUp)
time_nx.append(tn)
# Now call SciPy
tsp = networkx_scipy_call(M, 100, 0.00001, 0.85)
speedUp = (tsp / trapids)
perf_sp.append(speedUp)
time_sp.append(tsp)
print("cuGraph (" + str(trapids) + ") Nx (" + str(tn) + ") SciPy (" + str(tsp) + ")" )
del M
```
### plot the output
```
%matplotlib inline
plt.figure(figsize=(10,8))
bar_width = 0.35
index = np.arange(len(names))
_ = plt.bar(index, perf_nx, bar_width, color='g', label='vs Nx')
_ = plt.bar(index + bar_width, perf_sp, bar_width, color='b', label='vs SciPy')
plt.xlabel('Datasets')
plt.ylabel('Speedup')
plt.title('PageRank Performance Speedup')
plt.xticks(index + (bar_width / 2), names)
plt.xticks(rotation=90)
# Text on the top of each barplot
for i in range(len(perf_nx)):
plt.text(x = (i - 0.55) + bar_width, y = perf_nx[i] + 25, s = round(perf_nx[i], 1), size = 12)
for i in range(len(perf_sp)):
plt.text(x = (i - 0.1) + bar_width, y = perf_sp[i] + 25, s = round(perf_sp[i], 1), size = 12)
plt.legend()
plt.show()
```
# Dump the raw stats
```
perf_nx
perf_sp
time_cu
time_nx
time_sp
```
___
Copyright (c) 2020, NVIDIA CORPORATION.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
___
|
<a href="https://colab.research.google.com/github/mjvakili/MLcourse/blob/master/day2/nn_qso_finder.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Let's start by importing the libraries that we need for this exercise.
```
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
import matplotlib
from sklearn.model_selection import train_test_split
#matplotlib settings
matplotlib.rcParams['xtick.major.size'] = 7
matplotlib.rcParams['xtick.labelsize'] = 'x-large'
matplotlib.rcParams['ytick.major.size'] = 7
matplotlib.rcParams['ytick.labelsize'] = 'x-large'
matplotlib.rcParams['xtick.top'] = False
matplotlib.rcParams['ytick.right'] = False
matplotlib.rcParams['ytick.direction'] = 'in'
matplotlib.rcParams['xtick.direction'] = 'in'
matplotlib.rcParams['font.size'] = 15
matplotlib.rcParams['figure.figsize'] = [7,7]
#We need the astroml library to fetch the photometric datasets of sdss qsos and stars
!pip install astroml
from astroML.datasets import fetch_dr7_quasar
from astroML.datasets import fetch_sdss_sspp
quasars = fetch_dr7_quasar()
stars = fetch_sdss_sspp()
# Data procesing taken from
#https://www.astroml.org/book_figures/chapter9/fig_star_quasar_ROC.html by Jake Van der Plus
# stack colors into matrix X
Nqso = len(quasars)
Nstars = len(stars)
X = np.empty((Nqso + Nstars, 4), dtype=float)
X[:Nqso, 0] = quasars['mag_u'] - quasars['mag_g']
X[:Nqso, 1] = quasars['mag_g'] - quasars['mag_r']
X[:Nqso, 2] = quasars['mag_r'] - quasars['mag_i']
X[:Nqso, 3] = quasars['mag_i'] - quasars['mag_z']
X[Nqso:, 0] = stars['upsf'] - stars['gpsf']
X[Nqso:, 1] = stars['gpsf'] - stars['rpsf']
X[Nqso:, 2] = stars['rpsf'] - stars['ipsf']
X[Nqso:, 3] = stars['ipsf'] - stars['zpsf']
y = np.zeros(Nqso + Nstars, dtype=int)
y[:Nqso] = 1
X = X/np.max(X, axis=0)
# split into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size = 0.9)
#Now let's build a simple Sequential model in which fully connected layers come after one another
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(), #this flattens input
tf.keras.layers.Dense(128, activation = "relu"),
tf.keras.layers.Dense(64, activation = "relu"),
tf.keras.layers.Dense(32, activation = "relu"),
tf.keras.layers.Dense(32, activation = "relu"),
tf.keras.layers.Dense(1, activation="sigmoid")
])
model.compile(optimizer='adam', loss='binary_crossentropy')
history = model.fit(X_train, y_train, validation_data = (X_test, y_test), batch_size = 32, epochs=20, verbose = 1)
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(loss))
plt.plot(epochs, loss, lw = 5, label='Training loss')
plt.plot(epochs, val_loss, lw = 5, label='validation loss')
plt.title('Loss')
plt.legend(loc=0)
plt.show()
prob = model.predict_proba(X_test) #model probabilities
from sklearn.metrics import confusion_matrix
from sklearn.metrics import roc_curve
fpr, tpr, thresholds = roc_curve(y_test, prob)
plt.loglog(fpr, tpr, lw = 4)
plt.xlabel('false positive rate')
plt.ylabel('true positive rate')
plt.xlim(0.0, 0.15)
plt.ylim(0.6, 1.01)
plt.show()
plt.plot(thresholds, tpr, lw = 4)
plt.plot(thresholds, fpr, lw = 4)
plt.xlim(0,1)
plt.yscale("log")
plt.show()
#plt.xlabel('false positive rate')
#plt.ylabel('true positive rate')
##plt.xlim(0.0, 0.15)
#plt.ylim(0.6, 1.01)
#Now let's look at the confusion matrix
y_pred = model.predict(X_test)
z_pred = np.zeros(y_pred.shape[0], dtype = int)
mask = np.where(y_pred>.5)[0]
z_pred[mask] = 1
confusion_matrix(y_test, z_pred.astype(int))
import os, signal
os.kill(os.getpid(), signal.SIGKILL)
```
# Exercise 1:
Try changing the number of layers, the batch size, and the default learning rate, one at a time. See which one has the most significant impact on the performance of the model.
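As a starting point for the learning-rate part, one option (a sketch, with arbitrary example values rather than recommendations) is to compile the model with an explicitly configured optimizer instead of the `'adam'` string:

```
# Sketch: recompile the same architecture with a custom learning rate and batch size.
opt = tf.keras.optimizers.Adam(learning_rate=1e-4)   # the default is 1e-3; try a few values
model.compile(optimizer=opt, loss='binary_crossentropy')
history = model.fit(X_train, y_train,
                    validation_data=(X_test, y_test),
                    batch_size=64,                    # also try e.g. 16 or 128
                    epochs=20, verbose=1)
```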
# Exercise 2:
Write a simple function for visualizing the predicted decision boundaries in the feature space. Try to identify the regions of the parameter space which contribute significantly to the false positive rates.
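One possible approach (a sketch, assuming we freeze two of the four color indices at their median values and sweep the other two over a grid) is shown below:

```
def plot_decision_boundary(model, X, varied=(0, 1), fixed=(2, 3), n=200):
    # Sketch: visualize the model output over a 2-D slice of the 4-D color space.
    g0 = np.linspace(X[:, varied[0]].min(), X[:, varied[0]].max(), n)
    g1 = np.linspace(X[:, varied[1]].min(), X[:, varied[1]].max(), n)
    gx, gy = np.meshgrid(g0, g1)
    grid = np.zeros((n * n, X.shape[1]))
    grid[:, varied[0]] = gx.ravel()
    grid[:, varied[1]] = gy.ravel()
    for f in fixed:
        grid[:, f] = np.median(X[:, f])          # hold the remaining colors fixed
    p = model.predict(grid).reshape(n, n)        # predicted P(QSO) on the grid
    plt.contourf(gx, gy, p, levels=20, cmap='RdBu_r')
    plt.colorbar(label='P(QSO)')
    plt.xlabel('color %d' % varied[0])
    plt.ylabel('color %d' % varied[1])
    plt.show()

plot_decision_boundary(model, X_test)
```

Regions where the predicted probability hovers near 0.5 are good candidates for where the false positives come from.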
# Exercise 3:
This dataset is a bit imbalanced in that the QSOs are outnumbered by the stars. Can you think of a weighting scheme to pass to the loss function, such that the detection rate of QSOs increases?
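A minimal sketch of one such scheme, assuming we use Keras's `class_weight` argument to weight each class inversely to its frequency in the training set:

```
# Sketch: up-weight the minority class (QSOs) in the loss.
n_total = len(y_train)
n_pos = np.sum(y_train == 1)                     # QSOs
n_neg = n_total - n_pos                          # stars
class_weight = {0: n_total / (2.0 * n_neg), 1: n_total / (2.0 * n_pos)}
history = model.fit(X_train, y_train,
                    validation_data=(X_test, y_test),
                    batch_size=32, epochs=20, verbose=1,
                    class_weight=class_weight)
```

With these weights the loss treats each class as if it contributed equally, which typically raises the QSO detection rate at the cost of some extra false positives.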
|
# Exercise: Find correspondences between old and modern english
The purpose of this exercise is to use two vecsigrafos, one built on UMBC and WordNet and another one produced by directly running Swivel against a corpus of Shakespeare's complete works, to try to find correlations between old and modern English, e.g. "thou" -> "you", "dost" -> "do", "raiment" -> "clothing". For example, you can try to pick a set of 100 words in the "ye olde" English corpus and see how they correlate to UMBC over WordNet.

Next, we prepare the embeddings from the Shakespeare corpus and load a UMBC vecsigrafo, which will provide the two vector spaces to correlate.
## Download a small text corpus
First, we download the corpus into our environment. We will use Shakespeare's complete works corpus, published as part of Project Gutenberg and publicly available.
```
import os
%ls
#!rm -r tutorial
!git clone https://github.com/HybridNLP2018/tutorial
```
Let us see if the corpus is where we think it is:
```
%cd tutorial/lit
%ls
```
Downloading Swivel
```
!wget http://expertsystemlab.com/hybridNLP18/swivel.zip
!unzip swivel.zip
!rm swivel/*
!rm swivel.zip
```
## Learn the Swivel embeddings over the Old Shakespeare corpus
### Calculating the co-occurrence matrix
```
corpus_path = '/content/tutorial/lit/shakespeare_complete_works.txt'
coocs_path = '/content/tutorial/lit/coocs'
shard_size = 512
freq=3
!python /content/tutorial/scripts/swivel/prep.py --input={corpus_path} --output_dir={coocs_path} --shard_size={shard_size} --min_count={freq}
%ls {coocs_path} | head -n 10
```
### Learning the embeddings from the matrix
```
vec_path = '/content/tutorial/lit/vec/'
!python /content/tutorial/scripts/swivel/swivel.py --input_base_path={coocs_path} \
--output_base_path={vec_path} \
--num_epochs=20 --dim=300 \
--submatrix_rows={shard_size} --submatrix_cols={shard_size}
```
Checking the contents of the 'vec' directory. It should contain checkpoints of the model plus tsv files for column and row embeddings.
```
os.listdir(vec_path)
```
Converting tsv to bin:
```
!python /content/tutorial/scripts/swivel/text2bin.py --vocab={vec_path}vocab.txt --output={vec_path}vecs.bin \
{vec_path}row_embedding.tsv \
{vec_path}col_embedding.tsv
%ls {vec_path}
```
### Read stored binary embeddings and inspect them
```
import importlib.util
spec = importlib.util.spec_from_file_location("vecs", "/content/tutorial/scripts/swivel/vecs.py")
m = importlib.util.module_from_spec(spec)
spec.loader.exec_module(m)
shakespeare_vecs = m.Vecs(vec_path + 'vocab.txt', vec_path + 'vecs.bin')
```
## Basic method to print the k nearest neighbors for a given word
```
def k_neighbors(vec, word, k=10):
    res = vec.neighbors(word)
    if not res:
        print('%s is not in the vocabulary, try e.g. %s' % (word, vec.random_word_in_vocab()))
    else:
        for word, sim in res[:k]:
            print('%0.4f: %s' % (sim, word))
k_neighbors(shakespeare_vecs, 'strife')
k_neighbors(shakespeare_vecs,'youth')
```
## Load vecsigrafo from UMBC over WordNet
```
%ls
!wget https://zenodo.org/record/1446214/files/vecsigrafo_umbc_tlgs_ls_f_6e_160d_row_embedding.tar.gz
%ls
!tar -xvzf vecsigrafo_umbc_tlgs_ls_f_6e_160d_row_embedding.tar.gz
!rm vecsigrafo_umbc_tlgs_ls_f_6e_160d_row_embedding.tar.gz
umbc_wn_vec_path = '/content/tutorial/lit/vecsi_tlgs_wnscd_ls_f_6e_160d/'
```
Extracting the vocabulary from the .tsv file:
```
with open(umbc_wn_vec_path + 'vocab.txt', 'w', encoding='utf_8') as f:
with open(umbc_wn_vec_path + 'row_embedding.tsv', 'r', encoding='utf_8') as vec_lines:
vocab = [line.split('\t')[0].strip() for line in vec_lines]
for word in vocab:
print(word, file=f)
```
Converting tsv to bin:
```
!python /content/tutorial/scripts/swivel/text2bin.py --vocab={umbc_wn_vec_path}vocab.txt --output={umbc_wn_vec_path}vecs.bin \
{umbc_wn_vec_path}row_embedding.tsv
%ls
umbc_wn_vecs = m.Vecs(umbc_wn_vec_path + 'vocab.txt', umbc_wn_vec_path + 'vecs.bin')
k_neighbors(umbc_wn_vecs, 'lem_California')
```
# Add your solution to the proposed exercise here
Follow the instructions given in the previous lesson (*Vecsigrafos for curating and interlinking knowledge graphs*) to find correlations between terms in old English extracted from the Shakespeare corpus and terms in modern English extracted from UMBC. You will need to generate a dictionary relating pairs of lemmas between the two vocabularies and use it to produce a pair of translation matrices to transform vectors from one vector space to the other. Then apply the k_neighbors method to identify the correlations.
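A minimal sketch of the mapping step is shown below. Everything in it is an assumption to adapt: the seed `pairs` list is hypothetical and should be extended to around 100 entries, the `lem_` prefix follows the UMBC vocabulary convention seen above, and the sketch assumes the `Vecs` class exposes a `lookup(word)` method returning the embedding (adjust if the actual API differs).

```
import numpy as np

# Hypothetical seed dictionary: old-English token -> UMBC lemma entry.
pairs = [('thou', 'lem_you'), ('thy', 'lem_your'), ('hath', 'lem_have')]

S, T = [], []
for old_w, new_w in pairs:
    v_old = shakespeare_vecs.lookup(old_w)       # assumed API
    v_new = umbc_wn_vecs.lookup(new_w)           # assumed API
    if v_old is not None and v_new is not None:
        S.append(np.asarray(v_old).ravel())
        T.append(np.asarray(v_new).ravel())
S, T = np.array(S), np.array(T)

# Least-squares translation matrix W: Shakespeare space (300d) -> UMBC space (160d).
W, _, _, _ = np.linalg.lstsq(S, T, rcond=None)

# Map an old-English word into the UMBC space. To inspect its nearest modern lemmas,
# pass the vector to the UMBC neighbors method if it accepts raw vectors, otherwise
# compute cosine similarities against the UMBC row vectors yourself.
query = np.asarray(shakespeare_vecs.lookup('thou')).ravel() @ W
```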
# Conclusion
This notebook proposes the use of Shakespeare's complete works and UMBC to provide the student with embeddings that can be exploited for different operations between the two vector spaces. In particular, we propose identifying terms and their correlations across such spaces.
# Acknowledgements
In memory of Dr. Jack Brandabur, whose passion for Shakespeare and Cervantes inspired this notebook.
|
<a href="https://colab.research.google.com/github/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_04_3_regression.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# T81-558: Applications of Deep Neural Networks
**Module 4: Training for Tabular Data**
* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)
* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).
# Module 4 Material
* Part 4.1: Encoding a Feature Vector for Keras Deep Learning [[Video]](https://www.youtube.com/watch?v=Vxz-gfs9nMQ&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_04_1_feature_encode.ipynb)
* Part 4.2: Keras Multiclass Classification for Deep Neural Networks with ROC and AUC [[Video]](https://www.youtube.com/watch?v=-f3bg9dLMks&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_04_2_multi_class.ipynb)
* **Part 4.3: Keras Regression for Deep Neural Networks with RMSE** [[Video]](https://www.youtube.com/watch?v=wNhBUC6X5-E&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_04_3_regression.ipynb)
* Part 4.4: Backpropagation, Nesterov Momentum, and ADAM Neural Network Training [[Video]](https://www.youtube.com/watch?v=VbDg8aBgpck&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_04_4_backprop.ipynb)
* Part 4.5: Neural Network RMSE and Log Loss Error Calculation from Scratch [[Video]](https://www.youtube.com/watch?v=wmQX1t2PHJc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_04_5_rmse_logloss.ipynb)
# Google CoLab Instructions
The following code ensures that Google CoLab is running the correct version of TensorFlow.
```
try:
%tensorflow_version 2.x
COLAB = True
print("Note: using Google CoLab")
except:
print("Note: not using Google CoLab")
COLAB = False
```
# Part 4.3: Keras Regression for Deep Neural Networks with RMSE
Regression results are evaluated differently than classification. Consider the following code that trains a neural network for regression on the data set **jh-simple-dataset.csv**.
```
import pandas as pd
from scipy.stats import zscore
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
# Read the data set
df = pd.read_csv(
"https://data.heatonresearch.com/data/t81-558/jh-simple-dataset.csv",
na_values=['NA','?'])
# Generate dummies for job
df = pd.concat([df,pd.get_dummies(df['job'],prefix="job")],axis=1)
df.drop('job', axis=1, inplace=True)
# Generate dummies for area
df = pd.concat([df,pd.get_dummies(df['area'],prefix="area")],axis=1)
df.drop('area', axis=1, inplace=True)
# Generate dummies for product
df = pd.concat([df,pd.get_dummies(df['product'],prefix="product")],axis=1)
df.drop('product', axis=1, inplace=True)
# Missing values for income
med = df['income'].median()
df['income'] = df['income'].fillna(med)
# Standardize ranges
df['income'] = zscore(df['income'])
df['aspect'] = zscore(df['aspect'])
df['save_rate'] = zscore(df['save_rate'])
df['subscriptions'] = zscore(df['subscriptions'])
# Convert to numpy - Classification
x_columns = df.columns.drop('age').drop('id')
x = df[x_columns].values
y = df['age'].values
# Create train/test
x_train, x_test, y_train, y_test = train_test_split(
x, y, test_size=0.25, random_state=42)
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation
from tensorflow.keras.callbacks import EarlyStopping
# Build the neural network
model = Sequential()
model.add(Dense(25, input_dim=x.shape[1], activation='relu')) # Hidden 1
model.add(Dense(10, activation='relu')) # Hidden 2
model.add(Dense(1)) # Output
model.compile(loss='mean_squared_error', optimizer='adam')
monitor = EarlyStopping(monitor='val_loss', min_delta=1e-3,
patience=5, verbose=1, mode='auto', restore_best_weights=True)
model.fit(x_train,y_train,validation_data=(x_test,y_test),callbacks=[monitor],verbose=2,epochs=1000)
```
### Mean Square Error
The mean square error is the average of the squared differences between the prediction ($\hat{y}$) and the expected value ($y$). MSE values are not expressed in the units of the target variable. If a model's MSE has decreased, that is good; however, beyond this, there is not much more you can determine. Low MSE values are desired.
$ \mbox{MSE} = \frac{1}{n} \sum_{i=1}^n \left(\hat{y}_i - y_i\right)^2 $
```
from sklearn import metrics
# Predict
pred = model.predict(x_test)
# Measure MSE error.
score = metrics.mean_squared_error(pred,y_test)
print("Final score (MSE): {}".format(score))
```
### Root Mean Square Error
The root mean square error (RMSE) is essentially the square root of the MSE. Because of this, the RMSE error is in the same units as the training data outcome. Low RMSE values are desired.
$ \mbox{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^n \left(\hat{y}_i - y_i\right)^2} $
```
import numpy as np
# Measure RMSE error. RMSE is common for regression.
score = np.sqrt(metrics.mean_squared_error(pred,y_test))
print("Final score (RMSE): {}".format(score))
```
### Lift Chart
To generate a lift chart, perform the following activities:
* Sort the data by expected output and plot these sorted values (the 'expected' line in the chart below).
* For every point on the x-axis plot the predicted value for that same data point (the 'prediction' line in the chart below).
* The x-axis is just 0 to 100% of the dataset. The expected always starts low and ends high.
* The y-axis is ranged according to the values predicted.
Reading a lift chart:
* The expected and predicted lines should be close. Notice where one is above the other.
* The chart below is most accurate for lower ages.
```
# Regression chart.
def chart_regression(pred, y, sort=True):
t = pd.DataFrame({'pred': pred, 'y': y.flatten()})
if sort:
t.sort_values(by=['y'], inplace=True)
plt.plot(t['y'].tolist(), label='expected')
plt.plot(t['pred'].tolist(), label='prediction')
plt.ylabel('output')
plt.legend()
plt.show()
# Plot the chart
chart_regression(pred.flatten(),y_test)
```
|
# About this Notebook
In this notebook, we provide the tensor factorization implementation using an iterative Alternating Least Square (ALS), which is a good starting point for understanding tensor factorization.
```
import numpy as np
from numpy.linalg import inv as inv
```
# Part 1: Matrix Computation Concepts
## 1) Kronecker product
- **Definition**:
Given two matrices $A\in\mathbb{R}^{m_1\times n_1}$ and $B\in\mathbb{R}^{m_2\times n_2}$, then, the **Kronecker product** between these two matrices is defined as
$$A\otimes B=\left[ \begin{array}{cccc} a_{11}B & a_{12}B & \cdots & a_{1n_1}B \\ a_{21}B & a_{22}B & \cdots & a_{2n_1}B \\ \vdots & \vdots & \ddots & \vdots \\ a_{m_11}B & a_{m_12}B & \cdots & a_{m_1n_1}B \\ \end{array} \right]$$
where the symbol $\otimes$ denotes the Kronecker product, and the size of the resulting $A\otimes B$ is $(m_1m_2)\times (n_1n_2)$ (i.e., $m_1m_2$ rows and $n_1n_2$ columns).
- **Example**:
If $A=\left[ \begin{array}{cc} 1 & 2 \\ 3 & 4 \\ \end{array} \right]$ and $B=\left[ \begin{array}{ccc} 5 & 6 & 7\\ 8 & 9 & 10 \\ \end{array} \right]$, then, we have
$$A\otimes B=\left[ \begin{array}{cc} 1\times \left[ \begin{array}{ccc} 5 & 6 & 7\\ 8 & 9 & 10\\ \end{array} \right] & 2\times \left[ \begin{array}{ccc} 5 & 6 & 7\\ 8 & 9 & 10\\ \end{array} \right] \\ 3\times \left[ \begin{array}{ccc} 5 & 6 & 7\\ 8 & 9 & 10\\ \end{array} \right] & 4\times \left[ \begin{array}{ccc} 5 & 6 & 7\\ 8 & 9 & 10\\ \end{array} \right] \\ \end{array} \right]$$
$$=\left[ \begin{array}{cccccc} 5 & 6 & 7 & 10 & 12 & 14 \\ 8 & 9 & 10 & 16 & 18 & 20 \\ 15 & 18 & 21 & 20 & 24 & 28 \\ 24 & 27 & 30 & 32 & 36 & 40 \\ \end{array} \right]\in\mathbb{R}^{4\times 6}.$$
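The same example can be checked numerically with NumPy's built-in `np.kron`:

```
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6, 7], [8, 9, 10]])
print(np.kron(A, B))          # 4 x 6 matrix, matching the example above
print(np.kron(A, B).shape)
```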
## 2) Khatri-Rao product (`kr_prod`)
- **Definition**:
Given two matrices $A=\left( \boldsymbol{a}_1,\boldsymbol{a}_2,...,\boldsymbol{a}_r \right)\in\mathbb{R}^{m\times r}$ and $B=\left( \boldsymbol{b}_1,\boldsymbol{b}_2,...,\boldsymbol{b}_r \right)\in\mathbb{R}^{n\times r}$ with same number of columns, then, the **Khatri-Rao product** (or **column-wise Kronecker product**) between $A$ and $B$ is given as follows,
$$A\odot B=\left( \boldsymbol{a}_1\otimes \boldsymbol{b}_1,\boldsymbol{a}_2\otimes \boldsymbol{b}_2,...,\boldsymbol{a}_r\otimes \boldsymbol{b}_r \right)\in\mathbb{R}^{(mn)\times r},$$
where the symbol $\odot$ denotes Khatri-Rao product, and $\otimes$ denotes Kronecker product.
- **Example**:
If $A=\left[ \begin{array}{cc} 1 & 2 \\ 3 & 4 \\ \end{array} \right]=\left( \boldsymbol{a}_1,\boldsymbol{a}_2 \right) $ and $B=\left[ \begin{array}{cc} 5 & 6 \\ 7 & 8 \\ 9 & 10 \\ \end{array} \right]=\left( \boldsymbol{b}_1,\boldsymbol{b}_2 \right) $, then, we have
$$A\odot B=\left( \boldsymbol{a}_1\otimes \boldsymbol{b}_1,\boldsymbol{a}_2\otimes \boldsymbol{b}_2 \right) $$
$$=\left[ \begin{array}{cc} \left[ \begin{array}{c} 1 \\ 3 \\ \end{array} \right]\otimes \left[ \begin{array}{c} 5 \\ 7 \\ 9 \\ \end{array} \right] & \left[ \begin{array}{c} 2 \\ 4 \\ \end{array} \right]\otimes \left[ \begin{array}{c} 6 \\ 8 \\ 10 \\ \end{array} \right] \\ \end{array} \right]$$
$$=\left[ \begin{array}{cc} 5 & 12 \\ 7 & 16 \\ 9 & 20 \\ 15 & 24 \\ 21 & 32 \\ 27 & 40 \\ \end{array} \right]\in\mathbb{R}^{6\times 2}.$$
```
def kr_prod(a, b):
return np.einsum('ir, jr -> ijr', a, b).reshape(a.shape[0] * b.shape[0], -1)
A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8], [9, 10]])
print(kr_prod(A, B))
```
## 3) CP decomposition
### CP Combination (`cp_combination`)
- **Definition**:
The CP decomposition factorizes a tensor into a sum of outer products of vectors. For example, for a third-order tensor $\mathcal{Y}\in\mathbb{R}^{m\times n\times f}$, the CP decomposition can be written as
$$\hat{\mathcal{Y}}=\sum_{s=1}^{r}\boldsymbol{u}_{s}\circ\boldsymbol{v}_{s}\circ\boldsymbol{x}_{s},$$
or element-wise,
$$\hat{y}_{ijt}=\sum_{s=1}^{r}u_{is}v_{js}x_{ts},\forall (i,j,t),$$
where vectors $\boldsymbol{u}_{s}\in\mathbb{R}^{m},\boldsymbol{v}_{s}\in\mathbb{R}^{n},\boldsymbol{x}_{s}\in\mathbb{R}^{f}$ are columns of factor matrices $U\in\mathbb{R}^{m\times r},V\in\mathbb{R}^{n\times r},X\in\mathbb{R}^{f\times r}$, respectively. The symbol $\circ$ denotes vector outer product.
- **Example**:
Given matrices $U=\left[ \begin{array}{cc} 1 & 2 \\ 3 & 4 \\ \end{array} \right]\in\mathbb{R}^{2\times 2}$, $V=\left[ \begin{array}{cc} 1 & 3 \\ 2 & 4 \\ 5 & 6 \\ \end{array} \right]\in\mathbb{R}^{3\times 2}$ and $X=\left[ \begin{array}{cc} 1 & 5 \\ 2 & 6 \\ 3 & 7 \\ 4 & 8 \\ \end{array} \right]\in\mathbb{R}^{4\times 2}$, and let $\hat{\mathcal{Y}}=\sum_{s=1}^{r}\boldsymbol{u}_{s}\circ\boldsymbol{v}_{s}\circ\boldsymbol{x}_{s}$; then, we have
$$\hat{Y}_1=\hat{\mathcal{Y}}(:,:,1)=\left[ \begin{array}{ccc} 31 & 42 & 65 \\ 63 & 86 & 135 \\ \end{array} \right],$$
$$\hat{Y}_2=\hat{\mathcal{Y}}(:,:,2)=\left[ \begin{array}{ccc} 38 & 52 & 82 \\ 78 & 108 & 174 \\ \end{array} \right],$$
$$\hat{Y}_3=\hat{\mathcal{Y}}(:,:,3)=\left[ \begin{array}{ccc} 45 & 62 & 99 \\ 93 & 130 & 213 \\ \end{array} \right],$$
$$\hat{Y}_4=\hat{\mathcal{Y}}(:,:,4)=\left[ \begin{array}{ccc} 52 & 72 & 116 \\ 108 & 152 & 252 \\ \end{array} \right].$$
```
def cp_combine(U, V, X):
return np.einsum('is, js, ts -> ijt', U, V, X)
U = np.array([[1, 2], [3, 4]])
V = np.array([[1, 3], [2, 4], [5, 6]])
X = np.array([[1, 5], [2, 6], [3, 7], [4, 8]])
print(cp_combine(U, V, X))
print()
print('tensor size:')
print(cp_combine(U, V, X).shape)
```
## 4) Tensor Unfolding (`ten2mat`)
Using numpy reshape to perform 3rd rank tensor unfold operation. [[**link**](https://stackoverflow.com/questions/49970141/using-numpy-reshape-to-perform-3rd-rank-tensor-unfold-operation)]
```
def ten2mat(tensor, mode):
return np.reshape(np.moveaxis(tensor, mode, 0), (tensor.shape[mode], -1), order = 'F')
X = np.array([[[1, 2, 3, 4], [3, 4, 5, 6]],
[[5, 6, 7, 8], [7, 8, 9, 10]],
[[9, 10, 11, 12], [11, 12, 13, 14]]])
print('tensor size:')
print(X.shape)
print('original tensor:')
print(X)
print()
print('(1) mode-1 tensor unfolding:')
print(ten2mat(X, 0))
print()
print('(2) mode-2 tensor unfolding:')
print(ten2mat(X, 1))
print()
print('(3) mode-3 tensor unfolding:')
print(ten2mat(X, 2))
```
# Part 2: Tensor CP Factorization using ALS (TF-ALS)
Regarding CP factorization as a machine learning problem, we could perform a learning task by minimizing the loss function over factor matrices, that is,
$$\min _{U, V, X} \sum_{(i, j, t) \in \Omega}\left(y_{i j t}-\sum_{r=1}^{R}u_{ir}v_{jr}x_{tr}\right)^{2}.$$
Within this optimization problem, multiplication among three factor matrices (acted as parameters) makes this problem difficult. Alternatively, we apply the ALS algorithm for CP factorization.
In particular, the optimization problem for each row $\boldsymbol{u}_{i}\in\mathbb{R}^{R},\forall i\in\left\{1,2,...,M\right\}$ of factor matrix $U\in\mathbb{R}^{M\times R}$ is given by
$$\min _{\boldsymbol{u}_{i}} \sum_{j,t:(i, j, t) \in \Omega}\left[y_{i j t}-\boldsymbol{u}_{i}^\top\left(\boldsymbol{x}_{t}\odot\boldsymbol{v}_{j}\right)\right]\left[y_{i j t}-\boldsymbol{u}_{i}^\top\left(\boldsymbol{x}_{t}\odot\boldsymbol{v}_{j}\right)\right]^\top.$$
The least squares solution for this optimization is
$$\boldsymbol{u}_{i} \Leftarrow\left(\sum_{j, t:(i, j, t) \in \Omega} \left(\boldsymbol{x}_{t} \odot \boldsymbol{v}_{j}\right)\left(\boldsymbol{x}_{t} \odot \boldsymbol{v}_{j}\right)^{\top}\right)^{-1}\left(\sum_{j, t:(i, j, t) \in \Omega} y_{i j t} \left(\boldsymbol{x}_{t} \odot \boldsymbol{v}_{j}\right)\right), \forall i \in\{1,2, \ldots, M\}.$$
The alternating least squares for $V\in\mathbb{R}^{N\times R}$ and $X\in\mathbb{R}^{T\times R}$ are
$$\boldsymbol{v}_{j}\Leftarrow\left(\sum_{i,t:(i,j,t)\in\Omega}\left(\boldsymbol{x}_{t}\odot\boldsymbol{u}_{i}\right)\left(\boldsymbol{x}_{t}\odot\boldsymbol{u}_{i}\right)^\top\right)^{-1}\left(\sum_{i,t:(i,j,t)\in\Omega}y_{ijt}\left(\boldsymbol{x}_{t}\odot\boldsymbol{u}_{i}\right)\right),\forall j\in\left\{1,2,...,N\right\},$$
$$\boldsymbol{x}_{t}\Leftarrow\left(\sum_{i,j:(i,j,t)\in\Omega}\left(\boldsymbol{v}_{j}\odot\boldsymbol{u}_{i}\right)\left(\boldsymbol{v}_{j}\odot\boldsymbol{u}_{i}\right)^\top\right)^{-1}\left(\sum_{i,j:(i,j,t)\in\Omega}y_{ijt}\left(\boldsymbol{v}_{j}\odot\boldsymbol{u}_{i}\right)\right),\forall t\in\left\{1,2,...,T\right\}.$$
```
def CP_ALS(sparse_tensor, rank, maxiter):
dim1, dim2, dim3 = sparse_tensor.shape
dim = np.array([dim1, dim2, dim3])
U = 0.1 * np.random.rand(dim1, rank)
V = 0.1 * np.random.rand(dim2, rank)
X = 0.1 * np.random.rand(dim3, rank)
pos = np.where(sparse_tensor != 0)
binary_tensor = np.zeros((dim1, dim2, dim3))
binary_tensor[pos] = 1
tensor_hat = np.zeros((dim1, dim2, dim3))
for iters in range(maxiter):
for order in range(dim.shape[0]):
if order == 0:
var1 = kr_prod(X, V).T
elif order == 1:
var1 = kr_prod(X, U).T
else:
var1 = kr_prod(V, U).T
var2 = kr_prod(var1, var1)
var3 = np.matmul(var2, ten2mat(binary_tensor, order).T).reshape([rank, rank, dim[order]])
var4 = np.matmul(var1, ten2mat(sparse_tensor, order).T)
for i in range(dim[order]):
var_Lambda = var3[ :, :, i]
inv_var_Lambda = inv((var_Lambda + var_Lambda.T)/2 + 10e-12 * np.eye(rank))
vec = np.matmul(inv_var_Lambda, var4[:, i])
if order == 0:
U[i, :] = vec.copy()
elif order == 1:
V[i, :] = vec.copy()
else:
X[i, :] = vec.copy()
tensor_hat = cp_combine(U, V, X)
mape = np.sum(np.abs(sparse_tensor[pos] - tensor_hat[pos])/sparse_tensor[pos])/sparse_tensor[pos].shape[0]
rmse = np.sqrt(np.sum((sparse_tensor[pos] - tensor_hat[pos]) ** 2)/sparse_tensor[pos].shape[0])
if (iters + 1) % 100 == 0:
print('Iter: {}'.format(iters + 1))
print('Training MAPE: {:.6}'.format(mape))
print('Training RMSE: {:.6}'.format(rmse))
print()
return tensor_hat, U, V, X
```
# Part 3: Data Organization
## 1) Matrix Structure
We consider a dataset of $m$ discrete time series $\boldsymbol{y}_{i}\in\mathbb{R}^{f},i\in\left\{1,2,...,m\right\}$. The time series may have missing elements. We express the spatio-temporal dataset as a matrix $Y\in\mathbb{R}^{m\times f}$ with $m$ rows (e.g., locations) and $f$ columns (e.g., discrete time intervals),
$$Y=\left[ \begin{array}{cccc} y_{11} & y_{12} & \cdots & y_{1f} \\ y_{21} & y_{22} & \cdots & y_{2f} \\ \vdots & \vdots & \ddots & \vdots \\ y_{m1} & y_{m2} & \cdots & y_{mf} \\ \end{array} \right]\in\mathbb{R}^{m\times f}.$$
## 2) Tensor Structure
We consider a dataset of $m$ discrete time series $\boldsymbol{y}_{i}\in\mathbb{R}^{nf},i\in\left\{1,2,...,m\right\}$. The time series may have missing elements. We partition each time series into intervals of predefined length $f$. We express each partitioned time series as a matrix $Y_{i}$ with $n$ rows (e.g., days) and $f$ columns (e.g., discrete time intervals per day),
$$Y_{i}=\left[ \begin{array}{cccc} y_{11} & y_{12} & \cdots & y_{1f} \\ y_{21} & y_{22} & \cdots & y_{2f} \\ \vdots & \vdots & \ddots & \vdots \\ y_{n1} & y_{n2} & \cdots & y_{nf} \\ \end{array} \right]\in\mathbb{R}^{n\times f},i=1,2,...,m,$$
therefore, the resulting structure is a tensor $\mathcal{Y}\in\mathbb{R}^{m\times n\times f}$.
**How do we transform a data set into something we can use for time series imputation?**
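A minimal sketch of this transformation, assuming the raw data is a matrix with $m$ rows (locations) and $n\times f$ columns ($n$ days of $f$ intervals each):

```
import numpy as np

m, n, f = 3, 4, 5                                # e.g., 3 locations, 4 days, 5 intervals per day
mat = np.arange(m * n * f).reshape(m, n * f)     # matrix form: one long time series per row
tensor = mat.reshape(m, n, f)                    # tensor form: location x day x interval
print(mat.shape, '->', tensor.shape)             # (3, 20) -> (3, 4, 5)
```

This is exactly the reshape used for the Seattle data set below, where each of the 28 days has 288 five-minute intervals.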
# Part 4: Experiments on Guangzhou Data Set
```
import scipy.io
tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/tensor.mat')
dense_tensor = tensor['tensor']
random_matrix = scipy.io.loadmat('../datasets/Guangzhou-data-set/random_matrix.mat')
random_matrix = random_matrix['random_matrix']
random_tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/random_tensor.mat')
random_tensor = random_tensor['random_tensor']
missing_rate = 0.2
# =============================================================================
### Random missing (RM) scenario:
binary_tensor = np.round(random_tensor + 0.5 - missing_rate)
# =============================================================================
# =============================================================================
### Non-random missing (NM) scenario:
# binary_tensor = np.zeros(dense_tensor.shape)
# for i1 in range(dense_tensor.shape[0]):
# for i2 in range(dense_tensor.shape[1]):
# binary_tensor[i1, i2, :] = np.round(random_matrix[i1, i2] + 0.5 - missing_rate)
# =============================================================================
sparse_tensor = np.multiply(dense_tensor, binary_tensor)
```
**Question**: Given only the partially observed data $\mathcal{Y}\in\mathbb{R}^{m\times n\times f}$, how can we impute the unknown missing values?
The main influential factors for such imputation model are:
- `rank`.
- `maxiter`.
```
import time
start = time.time()
rank = 80
maxiter = 1000
tensor_hat, U, V, X = CP_ALS(sparse_tensor, rank, maxiter)
pos = np.where((dense_tensor != 0) & (sparse_tensor == 0))
final_mape = np.sum(np.abs(dense_tensor[pos] - tensor_hat[pos])/dense_tensor[pos])/dense_tensor[pos].shape[0]
final_rmse = np.sqrt(np.sum((dense_tensor[pos] - tensor_hat[pos]) ** 2)/dense_tensor[pos].shape[0])
print('Final Imputation MAPE: {:.6}'.format(final_mape))
print('Final Imputation RMSE: {:.6}'.format(final_rmse))
print()
end = time.time()
print('Running time: %d seconds'%(end - start))
```
**Experiment results** of missing data imputation using TF-ALS:
| scenario |`rank`| `maxiter`| mape | rmse |
|:----------|-----:|---------:|-----------:|----------:|
|**20%, RM**| 80 | 1000 | **0.0833** | **3.5928**|
|**40%, RM**| 80 | 1000 | **0.0837** | **3.6190**|
|**20%, NM**| 10 | 1000 | **0.1027** | **4.2960**|
|**40%, NM**| 10 | 1000 | **0.1028** | **4.3274**|
# Part 5: Experiments on Birmingham Data Set
```
import scipy.io
tensor = scipy.io.loadmat('../datasets/Birmingham-data-set/tensor.mat')
dense_tensor = tensor['tensor']
random_matrix = scipy.io.loadmat('../datasets/Birmingham-data-set/random_matrix.mat')
random_matrix = random_matrix['random_matrix']
random_tensor = scipy.io.loadmat('../datasets/Birmingham-data-set/random_tensor.mat')
random_tensor = random_tensor['random_tensor']
missing_rate = 0.3
# =============================================================================
### Random missing (RM) scenario:
binary_tensor = np.round(random_tensor + 0.5 - missing_rate)
# =============================================================================
# =============================================================================
### Non-random missing (NM) scenario:
# binary_tensor = np.zeros(dense_tensor.shape)
# for i1 in range(dense_tensor.shape[0]):
# for i2 in range(dense_tensor.shape[1]):
# binary_tensor[i1, i2, :] = np.round(random_matrix[i1,i2] + 0.5 - missing_rate)
# =============================================================================
sparse_tensor = np.multiply(dense_tensor, binary_tensor)
import time
start = time.time()
rank = 30
maxiter = 1000
tensor_hat, U, V, X = CP_ALS(sparse_tensor, rank, maxiter)
pos = np.where((dense_tensor != 0) & (sparse_tensor == 0))
final_mape = np.sum(np.abs(dense_tensor[pos] - tensor_hat[pos])/dense_tensor[pos])/dense_tensor[pos].shape[0]
final_rmse = np.sqrt(np.sum((dense_tensor[pos] - tensor_hat[pos]) ** 2)/dense_tensor[pos].shape[0])
print('Final Imputation MAPE: {:.6}'.format(final_mape))
print('Final Imputation RMSE: {:.6}'.format(final_rmse))
print()
end = time.time()
print('Running time: %d seconds'%(end - start))
```
**Experiment results** of missing data imputation using TF-ALS:
| scenario |`rank`| `maxiter`| mape | rmse |
|:----------|-----:|---------:|-----------:|-----------:|
|**10%, RM**| 30 | 1000 | **0.0615** | **18.5005**|
|**30%, RM**| 30 | 1000 | **0.0583** | **18.9148**|
|**10%, NM**| 10 | 1000 | **0.1447** | **41.6710**|
|**30%, NM**| 10 | 1000 | **0.1765** | **63.8465**|
# Part 6: Experiments on Hangzhou Data Set
```
import scipy.io
tensor = scipy.io.loadmat('../datasets/Hangzhou-data-set/tensor.mat')
dense_tensor = tensor['tensor']
random_matrix = scipy.io.loadmat('../datasets/Hangzhou-data-set/random_matrix.mat')
random_matrix = random_matrix['random_matrix']
random_tensor = scipy.io.loadmat('../datasets/Hangzhou-data-set/random_tensor.mat')
random_tensor = random_tensor['random_tensor']
missing_rate = 0.4
# =============================================================================
### Random missing (RM) scenario:
binary_tensor = np.round(random_tensor + 0.5 - missing_rate)
# =============================================================================
# =============================================================================
### Non-random missing (NM) scenario:
# binary_tensor = np.zeros(dense_tensor.shape)
# for i1 in range(dense_tensor.shape[0]):
# for i2 in range(dense_tensor.shape[1]):
# binary_tensor[i1, i2, :] = np.round(random_matrix[i1, i2] + 0.5 - missing_rate)
# =============================================================================
sparse_tensor = np.multiply(dense_tensor, binary_tensor)
import time
start = time.time()
rank = 50
maxiter = 1000
tensor_hat, U, V, X = CP_ALS(sparse_tensor, rank, maxiter)
pos = np.where((dense_tensor != 0) & (sparse_tensor == 0))
final_mape = np.sum(np.abs(dense_tensor[pos] - tensor_hat[pos])/dense_tensor[pos])/dense_tensor[pos].shape[0]
final_rmse = np.sqrt(np.sum((dense_tensor[pos] - tensor_hat[pos]) ** 2)/dense_tensor[pos].shape[0])
print('Final Imputation MAPE: {:.6}'.format(final_mape))
print('Final Imputation RMSE: {:.6}'.format(final_rmse))
print()
end = time.time()
print('Running time: %d seconds'%(end - start))
```
**Experiment results** of missing data imputation using TF-ALS:
| scenario |`rank`| `maxiter`| mape | rmse |
|:----------|-----:|---------:|-----------:|----------:|
|**20%, RM**| 50 | 1000 | **0.1991** |**111.303**|
|**40%, RM**| 50 | 1000 | **0.2098** |**100.315**|
|**20%, NM**| 5 | 1000 | **0.2837** |**42.6136**|
|**40%, NM**| 5 | 1000 | **0.2811** |**38.4201**|
# Part 7: Experiments on New York Data Set
```
import scipy.io
tensor = scipy.io.loadmat('../datasets/NYC-data-set/tensor.mat')
dense_tensor = tensor['tensor']
rm_tensor = scipy.io.loadmat('../datasets/NYC-data-set/rm_tensor.mat')
rm_tensor = rm_tensor['rm_tensor']
nm_tensor = scipy.io.loadmat('../datasets/NYC-data-set/nm_tensor.mat')
nm_tensor = nm_tensor['nm_tensor']
missing_rate = 0.1
# =============================================================================
### Random missing (RM) scenario
### Set the RM scenario by:
# binary_tensor = np.round(rm_tensor + 0.5 - missing_rate)
# =============================================================================
# =============================================================================
### Non-random missing (NM) scenario
### Set the NM scenario by:
binary_tensor = np.zeros(dense_tensor.shape)
for i1 in range(dense_tensor.shape[0]):
for i2 in range(dense_tensor.shape[1]):
for i3 in range(61):
binary_tensor[i1, i2, i3 * 24 : (i3 + 1) * 24] = np.round(nm_tensor[i1, i2, i3] + 0.5 - missing_rate)
# =============================================================================
sparse_tensor = np.multiply(dense_tensor, binary_tensor)
import time
start = time.time()
rank = 30
maxiter = 1000
tensor_hat, U, V, X = CP_ALS(sparse_tensor, rank, maxiter)
pos = np.where((dense_tensor != 0) & (sparse_tensor == 0))
final_mape = np.sum(np.abs(dense_tensor[pos] - tensor_hat[pos])/dense_tensor[pos])/dense_tensor[pos].shape[0]
final_rmse = np.sqrt(np.sum((dense_tensor[pos] - tensor_hat[pos]) ** 2)/dense_tensor[pos].shape[0])
print('Final Imputation MAPE: {:.6}'.format(final_mape))
print('Final Imputation RMSE: {:.6}'.format(final_rmse))
print()
end = time.time()
print('Running time: %d seconds'%(end - start))
```
**Experiment results** of missing data imputation using TF-ALS:
| scenario |`rank`| `maxiter`| mape | rmse |
|:----------|-----:|---------:|-----------:|----------:|
|**10%, RM**| 30 | 1000 | **0.5262** | **6.2444**|
|**30%, RM**| 30 | 1000 | **0.5488** | **6.8968**|
|**10%, NM**| 30 | 1000 | **0.5170** | **5.9863**|
|**30%, NM**| 30 | 100 | **-** | **-**|
# Part 8: Experiments on Seattle Data Set
```
import pandas as pd
dense_mat = pd.read_csv('../datasets/Seattle-data-set/mat.csv', index_col = 0)
RM_mat = pd.read_csv('../datasets/Seattle-data-set/RM_mat.csv', index_col = 0)
dense_mat = dense_mat.values
RM_mat = RM_mat.values
dense_tensor = dense_mat.reshape([dense_mat.shape[0], 28, 288])
RM_tensor = RM_mat.reshape([RM_mat.shape[0], 28, 288])
missing_rate = 0.2
# =============================================================================
### Random missing (RM) scenario
### Set the RM scenario by:
binary_tensor = np.round(RM_tensor + 0.5 - missing_rate)
# =============================================================================
sparse_tensor = np.multiply(dense_tensor, binary_tensor)
import time
start = time.time()
rank = 50
maxiter = 1000
tensor_hat, U, V, X = CP_ALS(sparse_tensor, rank, maxiter)
pos = np.where((dense_tensor != 0) & (sparse_tensor == 0))
final_mape = np.sum(np.abs(dense_tensor[pos] - tensor_hat[pos])/dense_tensor[pos])/dense_tensor[pos].shape[0]
final_rmse = np.sqrt(np.sum((dense_tensor[pos] - tensor_hat[pos]) ** 2)/dense_tensor[pos].shape[0])
print('Final Imputation MAPE: {:.6}'.format(final_mape))
print('Final Imputation RMSE: {:.6}'.format(final_rmse))
print()
end = time.time()
print('Running time: %d seconds'%(end - start))
import pandas as pd
dense_mat = pd.read_csv('../datasets/Seattle-data-set/mat.csv', index_col = 0)
RM_mat = pd.read_csv('../datasets/Seattle-data-set/RM_mat.csv', index_col = 0)
dense_mat = dense_mat.values
RM_mat = RM_mat.values
dense_tensor = dense_mat.reshape([dense_mat.shape[0], 28, 288])
RM_tensor = RM_mat.reshape([RM_mat.shape[0], 28, 288])
missing_rate = 0.4
# =============================================================================
### Random missing (RM) scenario
### Set the RM scenario by:
binary_tensor = np.round(RM_tensor + 0.5 - missing_rate)
# =============================================================================
sparse_tensor = np.multiply(dense_tensor, binary_tensor)
import time
start = time.time()
rank = 50
maxiter = 1000
tensor_hat, U, V, X = CP_ALS(sparse_tensor, rank, maxiter)
pos = np.where((dense_tensor != 0) & (sparse_tensor == 0))
final_mape = np.sum(np.abs(dense_tensor[pos] - tensor_hat[pos])/dense_tensor[pos])/dense_tensor[pos].shape[0]
final_rmse = np.sqrt(np.sum((dense_tensor[pos] - tensor_hat[pos]) ** 2)/dense_tensor[pos].shape[0])
print('Final Imputation MAPE: {:.6}'.format(final_mape))
print('Final Imputation RMSE: {:.6}'.format(final_rmse))
print()
end = time.time()
print('Running time: %d seconds'%(end - start))
import pandas as pd
dense_mat = pd.read_csv('../datasets/Seattle-data-set/mat.csv', index_col = 0)
NM_mat = pd.read_csv('../datasets/Seattle-data-set/NM_mat.csv', index_col = 0)
dense_mat = dense_mat.values
NM_mat = NM_mat.values
dense_tensor = dense_mat.reshape([dense_mat.shape[0], 28, 288])
missing_rate = 0.2
# =============================================================================
### Non-random missing (NM) scenario
### Set the NM scenario by:
binary_tensor = np.zeros((dense_mat.shape[0], 28, 288))
for i1 in range(binary_tensor.shape[0]):
for i2 in range(binary_tensor.shape[1]):
binary_tensor[i1, i2, :] = np.round(NM_mat[i1, i2] + 0.5 - missing_rate)
# =============================================================================
sparse_tensor = np.multiply(dense_tensor, binary_tensor)
import time
start = time.time()
rank = 10
maxiter = 1000
tensor_hat, U, V, X = CP_ALS(sparse_tensor, rank, maxiter)
pos = np.where((dense_tensor != 0) & (sparse_tensor == 0))
final_mape = np.sum(np.abs(dense_tensor[pos] - tensor_hat[pos])/dense_tensor[pos])/dense_tensor[pos].shape[0]
final_rmse = np.sqrt(np.sum((dense_tensor[pos] - tensor_hat[pos]) ** 2)/dense_tensor[pos].shape[0])
print('Final Imputation MAPE: {:.6}'.format(final_mape))
print('Final Imputation RMSE: {:.6}'.format(final_rmse))
print()
end = time.time()
print('Running time: %d seconds'%(end - start))
import pandas as pd
dense_mat = pd.read_csv('../datasets/Seattle-data-set/mat.csv', index_col = 0)
NM_mat = pd.read_csv('../datasets/Seattle-data-set/NM_mat.csv', index_col = 0)
dense_mat = dense_mat.values
NM_mat = NM_mat.values
dense_tensor = dense_mat.reshape([dense_mat.shape[0], 28, 288])
missing_rate = 0.4
# =============================================================================
### Non-random missing (NM) scenario
### Set the NM scenario by:
binary_tensor = np.zeros((dense_mat.shape[0], 28, 288))
for i1 in range(binary_tensor.shape[0]):
for i2 in range(binary_tensor.shape[1]):
binary_tensor[i1, i2, :] = np.round(NM_mat[i1, i2] + 0.5 - missing_rate)
# =============================================================================
sparse_tensor = np.multiply(dense_tensor, binary_tensor)
import time
start = time.time()
rank = 10
maxiter = 1000
tensor_hat, U, V, X = CP_ALS(sparse_tensor, rank, maxiter)
pos = np.where((dense_tensor != 0) & (sparse_tensor == 0))
final_mape = np.sum(np.abs(dense_tensor[pos] - tensor_hat[pos])/dense_tensor[pos])/dense_tensor[pos].shape[0]
final_rmse = np.sqrt(np.sum((dense_tensor[pos] - tensor_hat[pos]) ** 2)/dense_tensor[pos].shape[0])
print('Final Imputation MAPE: {:.6}'.format(final_mape))
print('Final Imputation RMSE: {:.6}'.format(final_rmse))
print()
end = time.time()
print('Running time: %d seconds'%(end - start))
```
**Experiment results** of missing data imputation using TF-ALS:
| scenario |`rank`| `maxiter`| mape | rmse |
|:----------|-----:|---------:|-----------:|----------:|
|**20%, RM**| 50 | 1000 | **0.0742** |**4.4929**|
|**40%, RM**| 50 | 1000 | **0.0758** |**4.5574**|
|**20%, NM**| 10 | 1000 | **0.0995** |**5.6331**|
|**40%, NM**| 10 | 1000 | **0.1004** |**5.7034**|
|
# Communication in Crisis
## Acquire
Data: [Los Angeles Parking Citations](https://www.kaggle.com/cityofLA/los-angeles-parking-citations)<br>
Load the dataset and filter for:
- Citations issued from 2017-01-01 to 2021-04-12.
- Street Sweeping violations, i.e. `Violation Description` == __"NO PARK/STREET CLEAN"__ (a rough sketch of this filter is shown below).
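The raw-file path and exact column names in this sketch are assumptions; the actual logic lives in `src.get_sweep_data`:

```
import pandas as pd

# Hypothetical sketch of the acquire-time filter; adapt the path and column names to the raw file.
citations = pd.read_csv('parking-citations.csv', parse_dates=['Issue Date'])
in_window = citations['Issue Date'].between('2017-01-01', '2021-04-12')
is_sweep = citations['Violation Description'] == 'NO PARK/STREET CLEAN'
citations = citations[in_window & is_sweep]
```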
Let's acquire the parking citations data from our file.
1. Import libraries.
1. Load the dataset.
1. Display the shape and first/last 2 rows.
1. Display general infomation about the dataset - w/ the # of unique values in each column.
1. Display the number of missing values in each column.
1. Descriptive statistics for all numeric features.
```
# Import libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from scipy import stats
import sys
import time
import folium.plugins as plugins
from IPython.display import HTML
import json
import datetime
import calplot
import folium
import math
sns.set()
from tqdm.notebook import tqdm
import src
# Filter warnings
from warnings import filterwarnings
filterwarnings('ignore')
# Load the data
df = src.get_sweep_data(prepared=False)
# Display the shape and dtypes of each column
print(df.shape)
df.info()
# Display the first two citations
df.head(2)
# Display the last two citations
df.tail(2)
# Display descriptive statistics of numeric columns
df.describe()
df.hist(figsize=(16, 8), bins=15)
plt.tight_layout();
```
__Initial findings__
- `Issue Time` and `Marked Time` are quasi-normally distributed. Note: Poisson distribution.
- It's interesting to see that the distribution of this everyday activity follows an approximately normal distribution.
- Agencies 50+ write the most parking citations.
- Most fine amounts are less than $100.00
- There are a few null or invalid license plates.
# Prepare
- Remove spaces + capitalization from each column name.
- Cast `Plate Expiry Date` to datetime data type.
- Cast `Issue Date` and `Issue Time` to datetime data types.
- Drop columns missing >=74.42\% of their values.
- Drop missing values.
- Transform the Latitude and Longitude columns from the NAD1983 State Plane California V FIPS 0405 (feet) projection to EPSG:4326 (World Geodetic System 1984, the standard coordinate system used by GPS).
- Filter data for street sweeping citations only.
```
# Prepare the data using a function stored in prepare.py
df_citations = src.get_sweep_data(prepared=True)
# Display the first two rows
df_citations.head(2)
# Check the column data types and non-null counts.
df_citations.info()
```
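The coordinate conversion listed in the Prepare steps happens inside `src`/`prepare.py`, which isn't shown here. A minimal sketch of that conversion with `pyproj` could look like the following; the source projection EPSG:2229 (NAD83 / California zone 5, US survey feet) and the function name are assumptions, since the actual code lives in the project's prepare module.
```
# Hypothetical sketch of the State Plane -> WGS84 conversion described above.
# Assumes the raw coordinates are in EPSG:2229 (NAD83 / California zone 5, US survey feet);
# the project's actual source CRS may differ.
from pyproj import Transformer

state_plane_to_wgs84 = Transformer.from_crs("EPSG:2229", "EPSG:4326", always_xy=True)

def to_wgs84(x_feet, y_feet):
    """Convert State Plane (x, y) in feet to (latitude, longitude) in WGS84 degrees."""
    lon, lat = state_plane_to_wgs84.transform(x_feet, y_feet)
    return lat, lon
```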
# Exploration
## How much daily revenue is generated from street sweeper citations?
### Daily Revenue from Street Sweeper Citations
Daily street sweeper citations increased in 2020.
```
# Daily street sweeping citation revenue
daily_revenue = df_citations.groupby('issue_date').fine_amount.sum()
daily_revenue.index = pd.to_datetime(daily_revenue.index)
df_sweep = src.street_sweep(data=df_citations)
df_d = src.resample_period(data=df_sweep)
df_m = src.resample_period(data=df_sweep, period='M')
df_d.head()
sns.set_context('talk')
# Plot daily revenue from street sweeping citations
df_d.revenue.plot(figsize=(14, 7), label='Revenue', color='DodgerBlue')
plt.axhline(df_d.revenue.mean(skipna=True), color='black', label='Average Revenue')
plt.title("Daily Revenue from Street Sweeping Citations")
plt.xlabel('')
plt.ylabel("Revenue (in thousand's)")
plt.xticks(rotation=0, horizontalalignment='center', fontsize=13)
plt.yticks(range(0, 1_000_000, 200_000), ['$0', '$200', '$400', '$600', '$800',])
plt.ylim(0, 1_000_000)
plt.legend(loc=2, framealpha=.8);
```
> __Anomaly__: Between March 2020 and October 2020 a Local Emergency was Declared by the Mayor of Los Angeles in response to COVID-19. Street Sweeping was halted to help Angelenos Shelter in Place. _Street Sweeping resumed on 10/15/2020_.
### Anomaly: Declaration of Local Emergency
```
sns.set_context('talk')
# Plot daily revenue from street sweeping citations
df_d.revenue.plot(figsize=(14, 7), label='Revenue', color='DodgerBlue')
plt.axvspan('2020-03-16', '2020-10-14', color='grey', alpha=.25)
plt.text('2020-03-29', 890_000, 'Declaration of\nLocal Emergency', fontsize=11)
plt.title("Daily Revenue from Street Sweeping Citations")
plt.xlabel('')
plt.ylabel("Revenue (in thousand's)")
plt.xticks(rotation=0, horizontalalignment='center', fontsize=13)
plt.yticks(range(0, 1_000_000, 200_000), ['$0', '$200', '$400', '$600', '$800',])
plt.ylim(0, 1_000_000)
plt.legend(loc=2, framealpha=.8);
sns.set_context('talk')
# Plot daily revenue from street sweeping citations
df_d.revenue.plot(figsize=(14, 7), label='Revenue', color='DodgerBlue')
plt.axhline(df_d.revenue.mean(skipna=True), color='black', label='Average Revenue')
plt.axvline(datetime.datetime(2020, 10, 15), color='red', linestyle="--", label='October 15, 2020')
plt.title("Daily Revenue from Street Sweeping Citations")
plt.xlabel('')
plt.ylabel("Revenue (in thousand's)")
plt.xticks(rotation=0, horizontalalignment='center', fontsize=13)
plt.yticks(range(0, 1_000_000, 200_000), ['$0', '$200K', '$400K', '$600K', '$800K',])
plt.ylim(0, 1_000_000)
plt.legend(loc=2, framealpha=.8);
```
## Hypothesis Test
### General Inquiry
Is the daily citation revenue after 10/15/2020 significantly greater than average?
### Z-Score
$H_0$: The daily citation revenue after 10/15/2020 is less than or equal to the average daily revenue.
$H_a$: The daily citation revenue after 10/15/2020 is significantly greater than average.
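Each day's revenue is standardized as $z = \frac{x - \mu}{\sigma}$, where $\mu$ and $\sigma$ are the mean and standard deviation of daily revenue (computed both over the full series and over the pre-COVID period before 2020-03-16). With the directional $\alpha = (1 - 0.997)/2 = 0.0015$, days with $z > 3$ are flagged as significant.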
```
confidence_interval = .997
# Directional Test
alpha = (1 - confidence_interval)/2
# Data to calculate z-scores using precovid values to calculate the mean and std
daily_revenue_precovid = df_d.loc[df_d.index < '2020-03-16']['revenue']
mean_precovid, std_precovid = daily_revenue_precovid.agg(['mean', 'std']).values
mean, std = df_d.agg(['mean', 'std']).values
# Calculating Z-Scores using precovid mean and std
z_scores_precovid = (df_d.revenue - mean_precovid)/std_precovid
z_scores_precovid.index = pd.to_datetime(z_scores_precovid.index)
sig_zscores_pre_covid = z_scores_precovid[z_scores_precovid>3]
# Calculating Z-Scores using entire data
z_scores = (df_d.revenue - mean)/std
z_scores.index = pd.to_datetime(z_scores.index)
sig_zscores = z_scores[z_scores>3]
sns.set_context('talk')
plt.figure(figsize=(12, 6))
sns.histplot(data=z_scores_precovid,
bins=50,
label='preCOVID z-scores')
sns.histplot(data=z_scores,
bins=50,
color='orange',
label='z-scores')
plt.title('Daily citation revenue after 10/15/2020 is significantly greater than average', fontsize=16)
plt.xlabel('Standard Deviations')
plt.ylabel('# of Days')
plt.axvline(3, color='Black', linestyle="--", label='3 Standard Deviations')
plt.xticks(np.linspace(-1, 9, 11))
plt.legend(fontsize=13);
a = stats.zscore(daily_revenue)
fig, ax = plt.subplots(figsize=(8, 8))
stats.probplot(a, plot=ax)
plt.xlabel("Quantile of Normal Distribution")
plt.ylabel("z-score");
```
### p-values
```
p_values_precovid = z_scores_precovid.apply(stats.norm.cdf)
p_values = z_scores.apply(stats.norm.cdf)  # use the full-series z-scores here; precovid p-values are computed above
significant_dates_precovid = p_values_precovid[(1-p_values_precovid) < alpha]
significant_dates = p_values[(1-p_values) < alpha]
# The probability of an outcome occurring by random chance
print(f'{alpha:0.3%}')
```
### Cohen's D
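The effect size computed below is the standardized difference between a resampled mean and the reference mean,

$$d = \frac{\bar{x}_{\text{sample}} - \mu}{\sigma / \sqrt{n}},$$

averaged over 10,000 random subsamples at several sampling fractions. Note that dividing by the standard error $\sigma/\sqrt{n}$ makes this closer to a one-sample $z$-statistic than the classical Cohen's $d$, which divides by the standard deviation alone.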
```
fractions = [.1, .2, .5, .7, .9]
cohen_d = []
for percentage in fractions:
cohen_d_trial = []
for i in range(10000):
sim = daily_revenue.sample(frac=percentage)
sim_mean = sim.mean()
d = (sim_mean - mean) / (std/math.sqrt(int(len(daily_revenue)*percentage)))
cohen_d_trial.append(d)
cohen_d.append(np.mean(cohen_d_trial))
cohen_d
fractions = [.1, .2, .5, .7, .9]
cohen_d_precovid = []
for percentage in fractions:
cohen_d_trial = []
for i in range(10000):
sim = daily_revenue_precovid.sample(frac=percentage)
sim_mean = sim.mean()
d = (sim_mean - mean_precovid) / (std_precovid/math.sqrt(int(len(daily_revenue_precovid)*percentage)))
cohen_d_trial.append(d)
cohen_d_precovid.append(np.mean(cohen_d_trial))
cohen_d_precovid
```
### Significant Dates with less than a 0.15% chance of occurring
- All dates that are considered significant occur after 10/15/2020
- In the two weeks following 10/15/2020, significant events occurred on __Tuesdays and Wednesdays__.
```
dates_precovid = set(list(sig_zscores_pre_covid.index))
dates = set(list(sig_zscores.index))
common_dates = list(dates.intersection(dates_precovid))
common_dates = pd.to_datetime(common_dates).sort_values()
sig_zscores
pd.Series(common_dates.day_name(),
common_dates)
np.random.seed(sum(map(ord, 'calplot')))
all_days = pd.date_range('1/1/2020', '12/22/2020', freq='D')
significant_events = pd.Series(np.ones(len(common_dates)), index=common_dates)
calplot.calplot(significant_events, figsize=(18, 12), cmap='coolwarm_r');
```
## Which parts of the city were impacted the most?
```
df_outliers = df_citations.loc[df_citations.issue_date.isin(list(common_dates.astype('str')))]
df_outliers.reset_index(drop=True, inplace=True)
print(df_outliers.shape)
df_outliers.head()
m = folium.Map(location=[34.0522, -118.2437],
min_zoom=8,
max_bounds=True)
mc = plugins.MarkerCluster()
for index, row in df_outliers.iterrows():
mc.add_child(
folium.Marker(location=[str(row['latitude']), str(row['longitude'])],
popup='Cited {} {} at {}'.format(row['day_of_week'],
row['issue_date'],
row['issue_time'][:-3]),
control_scale=True,
clustered_marker=True
)
)
m.add_child(mc)
```
Transferring the map to Tableau
# Conclusions
# Appendix
## What time(s) are Street Sweeping citations issued?
Most citations are issued during the hours of 8am, 10am, and 12pm.
### Citation Times
```
# Filter street sweeping data for citations issued between
# 8 am and 2 pm, 8 and 14 respectively.
df_citation_times = df_citations.loc[(df_citations.issue_hour >= 8)&(df_citations.issue_hour < 14)]
sns.set_context('talk')
# Issue Hour Plot
df_citation_times.issue_hour.value_counts().sort_index().plot.bar(figsize=(8, 6))
# Axis labels
plt.title('Most Street Sweeper Citations are Issued at 8am')
plt.xlabel('Issue Hour (24HR)')
plt.ylabel('# of Citations (in thousands)')
# Chart Formatting
plt.xticks(rotation=0)
plt.yticks(range(100_000, 400_001,100_000), ['100', '200', '300', '400'])
plt.show()
sns.set_context('talk')
# Issue Minute Plot
df_citation_times.issue_minute.value_counts().sort_index().plot.bar(figsize=(20, 9))
# Axis labels
plt.title('Most Street Sweeper Citations are Issued in the First 30 Minutes')
plt.xlabel('Issue Minute')
plt.ylabel('# of Citations (in thousands)')
# plt.axvspan(0, 30, facecolor='grey', alpha=0.1)
# Chart Formatting
plt.xticks(rotation=0)
plt.yticks(range(5_000, 40_001, 5_000), ['5', '10', '15', '20', '25', '30', '35', '40'])
plt.tight_layout()
plt.show()
```
## Which state has the most Street Sweeping violators?
### License Plate
Over 90% of all street sweeping citations are issued to California Residents.
```
sns.set_context('talk')
fig = df_citations.rp_state_plate.value_counts(normalize=True).nlargest(3).plot.bar(figsize=(12, 6))
# Chart labels
plt.title('California residents receive the most street sweeping citations', fontsize=16)
plt.xlabel('State')
plt.ylabel('% of all Citations')
# Tick Formatting
plt.xticks(rotation=0)
plt.yticks(np.linspace(0, 1, 11), labels=[f'{i:0.0%}' for i in np.linspace(0, 1, 11)])
plt.grid(axis='x', alpha=.5)
plt.tight_layout();
```
## Which street has the most Street Sweeping citations?
The characteristics of the top 3 streets:
1. Vehicles are parked bumper to bumper leaving few parking spaces available
2. Parking spaces have a set time limit
```
# Remove the street number and leading whitespace from the address
df_citations['street_name'] = df_citations.location.str.replace('^[\d+]{2,}', '').str.strip()
sns.set_context('talk')
df_citations.street_name.value_counts().nlargest(3).plot.barh(figsize=(16, 6))
# Chart formatting
plt.title('Streets with the Most Street Sweeping Citations', fontsize=24)
plt.xlabel('# of Citations');
```
### __Abbot Kinney Blvd: "Small Boutiques, No Parking"__
> [Abbot Kinney Blvd on Google Maps](https://www.google.com/maps/@33.9923689,-118.4731719,3a,75y,112.99h,91.67t/data=!3m6!1e1!3m4!1sKD3cG40eGmdWxhwqLD1BvA!2e0!7i16384!8i8192)
<img src="./visuals/abbot.png" alt="Abbot" style="width: 450px;" align="left"/>
- Near Venice Beach
- Small businesses and name brand stores line both sides of the street
- Little to no parking in this area
- Residential area inland
- Multiplex style dwellings with available parking spaces
- Weekly Street Sweeping on Monday from 7:30 am - 9:30 am
### __Clinton Street: "Packed Street"__
> [Clinton Street on Google Maps](https://www.google.com/maps/@34.0816611,-118.3306842,3a,75y,70.72h,57.92t/data=!3m9!1e1!3m7!1sdozFgC7Ms3EvaOF4-CeNAg!2e0!7i16384!8i8192!9m2!1b1!2i37)
<img src="./visuals/clinton.png" alt="Clinton" style="width: 600px;" align="Left"/>
- All parking spaces on the street are filled
- Residential Area
- Weekly Street Sweeping on Friday from 8:00 am - 11:00 am
### __Kelton Ave: "2 Hour Time Limit"__
> [Kelton Ave on Google Maps](https://www.google.com/maps/place/Kelton+Ave,+Los+Angeles,+CA/@34.0475262,-118.437594,3a,49.9y,183.92h,85.26t/data=!3m9!1e1!3m7!1s5VICHNYMVEk9utaV5egFYg!2e0!7i16384!8i8192!9m2!1b1!2i25!4m5!3m4!1s0x80c2bb7efb3a05eb:0xe155071f3fe49df3!8m2!3d34.0542999!4d-118.4434919)
<img src="./visuals/kelton.png" width="600" height="600" align="left"/>
- Most parking spaces on this street are available. This is due to the strict 2 hour time limit for parked vehicles without the proper exception permit.
- Multiplex, Residential Area
- Weekly Street Sweeping on Thursday from 10:00 am - 1:00 pm
- Weekly Street Sweeping on Friday from 8:00 am - 10:00 am
## Which street has the most Street Sweeping citations, given the day of the week?
- __Abbot Kinney Blvd__ is the most cited street on __Monday and Tuesday__
- __4th Street East__ is the most cited street on __Saturday and Sunday__
```
# Group by the day of the week and street name
df_day_street = df_citations.groupby(by=['day_of_week', 'street_name'])\
.size()\
.sort_values()\
.groupby(level=0)\
.tail(1)\
.reset_index()\
.rename(columns={0:'count'})
# Create a new column to sort the values by the day of the
# week starting with Monday
df_day_street['order'] = [5, 6, 4, 3, 0, 2, 1]
# Display the street with the most street sweeping citations
# given the day of the week.
df_day_street.sort_values('order').set_index('order')
```
## Which Agencies issue the most street sweeping citations?
The Department of Transportation's __Western, Hollywood, and Valley__ subdivisions issue the most street sweeping citations.
```
sns.set_context('talk')
df_citations.agency.value_counts().nlargest(5).plot.barh(figsize=(12, 6));
# plt.axhspan(2.5, 5, facecolor='0.5', alpha=.8)
plt.title('Agencies With the Most Street Sweeper Citations')
plt.xlabel('# of Citations (in thousands)')
plt.xticks(np.arange(0, 400_001, 100_000), list(np.arange(0, 401, 100)))
plt.yticks([0, 1, 2, 3, 4], labels=['DOT-WESTERN',
'DOT-HOLLYWOOD',
'DOT-VALLEY',
'DOT-SOUTHERN',
'DOT-CENTRAL']);
```
When taking routes into consideration, __"Western"__ Subdivision, route 00500, has issued the most street sweeping citations.
- Is route 00500 larger than other street sweeping routes?
```
top_3_routes = df_citations.groupby(['agency', 'route'])\
.size()\
.nlargest(3)\
.sort_index()\
.rename('num_citations')\
.reset_index()\
.sort_values(by='num_citations', ascending=False)
top_3_routes.agency = ["DOT-WESTERN", "DOT-SOUTHERN", "DOT-CENTRAL"]
data = top_3_routes.set_index(['agency', 'route'])
data.plot(kind='barh', stacked=True, figsize=(12, 6), legend=None)
plt.title("Agency-Route ID's with the most Street Sweeping Citations")
plt.ylabel('')
plt.xlabel('# of Citations (in thousands)')
plt.xticks(np.arange(0, 70_001, 10_000), [str(i) for i in np.arange(0, 71, 10)]);
df_citations['issue_time_num'] = df_citations.issue_time.str.replace(":00", '')
df_citations['issue_time_num'] = df_citations.issue_time_num.str.replace(':', '').astype(int)
```
## What is the weekly distribution of citation times?
```
sns.set_context('talk')
plt.figure(figsize=(13, 12))
sns.boxplot(data=df_citations,
x="day_of_week",
y="issue_time_num",
order=["Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday"],
whis=3);
plt.title("Distribution Citation Issue Times Throughout the Week")
plt.xlabel('')
plt.ylabel('Issue Time (24HR)')
plt.yticks(np.arange(0, 2401, 200), [str(i) + ":00" for i in range(0, 25, 2)]);
```
##### Copyright 2018 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License").
# Neural Machine Translation with Attention
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r2/tutorials/sequences/_nmt.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/r2/tutorials/sequences/_nmt.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
</table>
# This notebook is still under construction! Please come back later.
This notebook trains a sequence to sequence (seq2seq) model for English to Spanish translation using TF 2.0 APIs. This is an advanced example that assumes some knowledge of sequence to sequence models.
After training the model in this notebook, you will be able to input an English sentence, such as *"it's really cold here"*, and return the Spanish translation: *"hace mucho frio aqui"*.
The translation quality is reasonable for a toy example, but the generated attention plot is perhaps more interesting. This shows which parts of the input sentence have the model's attention while translating:
<img src="https://tensorflow.org/images/spanish-english.png" alt="spanish-english attention plot">
Note: This example takes approximately 10 minutes to run on a single P100 GPU.
```
import collections
import io
import itertools
import os
import random
import re
import time
import unicodedata
import numpy as np
import tensorflow as tf
assert tf.__version__.startswith('2')
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker  # used later for the attention-plot axis locators
print(tf.__version__)
```
## Download and prepare the dataset
We'll use a language dataset provided by http://www.manythings.org/anki/. This dataset contains language translation pairs in the format:
```
May I borrow this book? ¿Puedo tomar prestado este libro?
```
There are a variety of languages available, but we'll use the English-Spanish dataset. For convenience, we've hosted a copy of this dataset on Google Cloud, but you can also download your own copy. After downloading the dataset, here are the steps we'll take to prepare the data:
1. Clean the sentences by removing special characters.
1. Add a *start* and *end* token to each sentence.
1. Create a word index and reverse word index (dictionaries mapping from word → id and id → word).
1. Pad each sentence to a maximum length.
```
# TODO(brianklee): This preprocessing should ideally be implemented in TF
# because preprocessing should be exported as part of the SavedModel.
# Converts the unicode file to ascii
# https://stackoverflow.com/a/518232/2809427
def unicode_to_ascii(s):
return ''.join(c for c in unicodedata.normalize('NFD', s)
if unicodedata.category(c) != 'Mn')
START_TOKEN = u'<start>'
END_TOKEN = u'<end>'
def preprocess_sentence(w):
# remove accents; lowercase everything
w = unicode_to_ascii(w.strip()).lower()
# creating a space between a word and the punctuation following it
# eg: "he is a boy." => "he is a boy ."
# https://stackoverflow.com/a/3645931/3645946
w = re.sub(r'([?.!,¿])', r' \1 ', w)
# replacing everything with space except (a-z, '.', '?', '!', ',')
w = re.sub(r'[^a-z?.!,¿]+', ' ', w)
# adding a start and an end token to the sentence
  # so that the model knows when to start and stop predicting.
w = '<start> ' + w + ' <end>'
return w
en_sentence = u"May I borrow this book?"
sp_sentence = u"¿Puedo tomar prestado este libro?"
print(preprocess_sentence(en_sentence))
print(preprocess_sentence(sp_sentence))
```
Training on the complete dataset of >100,000 sentences will take a long time. To train faster, we can limit the size of the dataset (of course, translation quality degrades with less data).
```
def load_anki_data(num_examples=None):
# Download the file
path_to_zip = tf.keras.utils.get_file(
'spa-eng.zip', origin='http://download.tensorflow.org/data/spa-eng.zip',
extract=True)
path_to_file = os.path.dirname(path_to_zip) + '/spa-eng/spa.txt'
with io.open(path_to_file, 'rb') as f:
lines = f.read().decode('utf8').strip().split('\n')
# Data comes as tab-separated strings; one per line.
eng_spa_pairs = [[preprocess_sentence(w) for w in line.split('\t')] for line in lines]
# The translations file is ordered from shortest to longest, so slicing from
# the front will select the shorter examples. This also speeds up training.
if num_examples is not None:
eng_spa_pairs = eng_spa_pairs[:num_examples]
eng_sentences, spa_sentences = zip(*eng_spa_pairs)
eng_tokenizer = tf.keras.preprocessing.text.Tokenizer(filters='')
spa_tokenizer = tf.keras.preprocessing.text.Tokenizer(filters='')
eng_tokenizer.fit_on_texts(eng_sentences)
spa_tokenizer.fit_on_texts(spa_sentences)
return (eng_spa_pairs, eng_tokenizer, spa_tokenizer)
NUM_EXAMPLES = 30000
sentence_pairs, english_tokenizer, spanish_tokenizer = load_anki_data(NUM_EXAMPLES)
# Turn our english/spanish pairs into TF Datasets by mapping words -> integers.
def make_dataset(eng_spa_pairs, eng_tokenizer, spa_tokenizer):
eng_sentences, spa_sentences = zip(*eng_spa_pairs)
eng_ints = eng_tokenizer.texts_to_sequences(eng_sentences)
spa_ints = spa_tokenizer.texts_to_sequences(spa_sentences)
padded_eng_ints = tf.keras.preprocessing.sequence.pad_sequences(
eng_ints, padding='post')
padded_spa_ints = tf.keras.preprocessing.sequence.pad_sequences(
spa_ints, padding='post')
dataset = tf.data.Dataset.from_tensor_slices((padded_eng_ints, padded_spa_ints))
return dataset
# Train/test split
train_size = int(len(sentence_pairs) * 0.8)
random.shuffle(sentence_pairs)
train_sentence_pairs, test_sentence_pairs = sentence_pairs[:train_size], sentence_pairs[train_size:]
# Show length
len(train_sentence_pairs), len(test_sentence_pairs)
_english, _spanish = train_sentence_pairs[0]
_eng_ints, _spa_ints = english_tokenizer.texts_to_sequences([_english])[0], spanish_tokenizer.texts_to_sequences([_spanish])[0]
print("Source language: ")
print('\n'.join('{:4d} ----> {}'.format(i, word) for i, word in zip(_eng_ints, _english.split())))
print("Target language: ")
print('\n'.join('{:4d} ----> {}'.format(i, word) for i, word in zip(_spa_ints, _spanish.split())))
# Set up datasets
BATCH_SIZE = 64
train_ds = make_dataset(train_sentence_pairs, english_tokenizer, spanish_tokenizer)
test_ds = make_dataset(test_sentence_pairs, english_tokenizer, spanish_tokenizer)
train_ds = train_ds.shuffle(len(train_sentence_pairs)).batch(BATCH_SIZE, drop_remainder=True)
test_ds = test_ds.batch(BATCH_SIZE, drop_remainder=True)
print("Dataset outputs elements with shape ({}, {})".format(
*train_ds.output_shapes))
```
## Write the encoder and decoder model
Here, we'll implement an encoder-decoder model with attention. The following diagram shows that each input word is assigned a weight by the attention mechanism which is then used by the decoder to predict the next word in the sentence.
<img src="https://www.tensorflow.org/images/seq2seq/attention_mechanism.jpg" width="500" alt="attention mechanism">
The input is put through an encoder model which gives us the encoder output of shape *(batch_size, max_length, hidden_size)* and the encoder hidden state of shape *(batch_size, hidden_size)*.
```
ENCODER_SIZE = DECODER_SIZE = 1024
EMBEDDING_DIM = 256
MAX_OUTPUT_LENGTH = train_ds.output_shapes[1][1]
def gru(units):
return tf.keras.layers.GRU(units,
return_sequences=True,
return_state=True,
recurrent_activation='sigmoid',
recurrent_initializer='glorot_uniform')
class Encoder(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, encoder_size):
super(Encoder, self).__init__()
self.embedding_dim = embedding_dim
self.encoder_size = encoder_size
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
self.gru = gru(encoder_size)
def call(self, x, hidden):
x = self.embedding(x)
output, state = self.gru(x, initial_state=hidden)
return output, state
def initial_hidden_state(self, batch_size):
return tf.zeros((batch_size, self.encoder_size))
```
For the decoder, we're using *Bahdanau attention*. Here are the equations that are implemented:
<img src="https://www.tensorflow.org/images/seq2seq/attention_equation_0.jpg" alt="attention equation 0" width="800">
<img src="https://www.tensorflow.org/images/seq2seq/attention_equation_1.jpg" alt="attention equation 1" width="800">
Let's decide on notation before writing the simplified form:
* FC = Fully connected (dense) layer
* EO = Encoder output
* H = hidden state
* X = input to the decoder
And the pseudo-code:
* `score = FC(tanh(FC(EO) + FC(H)))`
* `attention weights = softmax(score, axis = 1)`. Softmax by default is applied on the last axis, but here we want to apply it on the *1st axis*, since the shape of score is *(batch_size, max_length, 1)*. `max_length` is the length of our input. Since we are trying to assign a weight to each input position, softmax should be applied on that axis.
* `context vector = sum(attention weights * EO, axis = 1)`. Same reason as above for choosing axis as 1.
* `embedding output` = The input to the decoder X is passed through an embedding layer.
* `merged vector = concat(embedding output, context vector)`
* This merged vector is then given to the GRU
The shapes of all the vectors at each step have been specified in the comments in the code:
```
class BahdanauAttention(tf.keras.Model):
def __init__(self, units):
super(BahdanauAttention, self).__init__()
self.W1 = tf.keras.layers.Dense(units)
self.W2 = tf.keras.layers.Dense(units)
self.V = tf.keras.layers.Dense(1)
def call(self, hidden_state, enc_output):
# enc_output shape = (batch_size, max_length, hidden_size)
# (batch_size, hidden_size) -> (batch_size, 1, hidden_size)
hidden_with_time = tf.expand_dims(hidden_state, 1)
# score shape == (batch_size, max_length, 1)
score = self.V(tf.nn.tanh(self.W1(enc_output) + self.W2(hidden_with_time)))
# attention_weights shape == (batch_size, max_length, 1)
attention_weights = tf.nn.softmax(score, axis=1)
# context_vector shape after sum = (batch_size, hidden_size)
context_vector = attention_weights * enc_output
context_vector = tf.reduce_sum(context_vector, axis=1)
return context_vector, attention_weights
class Decoder(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, decoder_size):
super(Decoder, self).__init__()
self.vocab_size = vocab_size
self.embedding_dim = embedding_dim
self.decoder_size = decoder_size
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
self.gru = gru(decoder_size)
self.fc = tf.keras.layers.Dense(vocab_size)
self.attention = BahdanauAttention(decoder_size)
def call(self, x, hidden, enc_output):
context_vector, attention_weights = self.attention(hidden, enc_output)
# x shape after passing through embedding == (batch_size, 1, embedding_dim)
x = self.embedding(x)
# x shape after concatenation == (batch_size, 1, embedding_dim + hidden_size)
x = tf.concat([tf.expand_dims(context_vector, 1), x], axis=-1)
# passing the concatenated vector to the GRU
output, state = self.gru(x)
# output shape == (batch_size, hidden_size)
output = tf.reshape(output, (-1, output.shape[2]))
# output shape == (batch_size, vocab)
x = self.fc(output)
return x, state, attention_weights
```
## Define a translate function
Now, let's put the encoder and decoder halves together. The encoder step is fairly straightforward; we'll just reuse Keras's dynamic unroll. For the decoder, we have to make some choices about how to feed the decoder RNN. Overall the process goes as follows:
1. Pass the *input* through the *encoder* which return *encoder output* and the *encoder hidden state*.
2. The encoder output, encoder hidden state and the <START> token are passed to the decoder.
3. The decoder returns the *predictions* and the *decoder hidden state*.
4. The encoder output, hidden state and next token are then fed back into the decoder repeatedly. This has two different behaviors under training and inference:
- during training, we use *teacher forcing*, where the correct next token is fed into the decoder, regardless of what the decoder emitted.
   - during inference, we use `tf.argmax(predictions)` to select the most likely continuation and feed it back into the decoder. Another strategy that yields more robust results is called *beam search*; a minimal sketch is given after this list.
5. Repeat step 4 until either the decoder emits an <END> token, indicating that it's done translating, or we run into a hardcoded length limit.
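Beam search is mentioned above but not implemented in this notebook. The sketch below is a minimal, framework-agnostic version; it assumes a hypothetical `step_fn(token_id, state)` that returns (log-probabilities over the target vocabulary as a 1-D array, new decoder state), so wiring it into `NmtTranslator` would require threading the encoder output and hidden state through `state`.
```
# Minimal beam-search sketch (not part of this notebook's model).
# `step_fn(token_id, state)` is a hypothetical single-step decoder interface.
import heapq
import numpy as np

def beam_search(step_fn, start_id, end_id, initial_state, beam_width=4, max_len=50):
    # Each beam entry is (cumulative log-prob, token sequence, decoder state).
    beams = [(0.0, [start_id], initial_state)]
    for _ in range(max_len):
        candidates = []
        for score, seq, state in beams:
            if seq[-1] == end_id:
                # Finished sequences are carried over unchanged.
                candidates.append((score, seq, state))
                continue
            log_probs, new_state = step_fn(seq[-1], state)
            for tok in np.argsort(log_probs)[-beam_width:]:
                candidates.append((score + float(log_probs[tok]), seq + [int(tok)], new_state))
        # Keep only the `beam_width` best partial translations.
        beams = heapq.nlargest(beam_width, candidates, key=lambda c: c[0])
        if all(seq[-1] == end_id for _, seq, _ in beams):
            break
    return max(beams, key=lambda c: c[0])[1]
```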
```
class NmtTranslator(tf.keras.Model):
def __init__(self, encoder, decoder, start_token_id, end_token_id):
super(NmtTranslator, self).__init__()
self.encoder = encoder
self.decoder = decoder
# (The token_id should match the decoder's language.)
# Uses start_token_id to initialize the decoder.
self.start_token_id = tf.constant(start_token_id)
# Check for sequence completion using this token_id
self.end_token_id = tf.constant(end_token_id)
@tf.function
def call(self, inp, target=None, max_output_length=MAX_OUTPUT_LENGTH):
'''Translate an input.
If target is provided, teacher forcing is used to generate the translation.
'''
batch_size = inp.shape[0]
hidden = self.encoder.initial_hidden_state(batch_size)
enc_output, enc_hidden = self.encoder(inp, hidden)
dec_hidden = enc_hidden
if target is not None:
output_length = target.shape[1]
else:
output_length = max_output_length
predictions_array = tf.TensorArray(tf.float32, size=output_length - 1)
attention_array = tf.TensorArray(tf.float32, size=output_length - 1)
# Feed <START> token to start decoder.
dec_input = tf.cast([self.start_token_id] * batch_size, tf.int32)
# Keep track of which sequences have emitted an <END> token
is_done = tf.zeros([batch_size], dtype=tf.bool)
for i in tf.range(output_length - 1):
dec_input = tf.expand_dims(dec_input, 1)
predictions, dec_hidden, attention_weights = self.decoder(dec_input, dec_hidden, enc_output)
predictions = tf.where(is_done, tf.zeros_like(predictions), predictions)
# Write predictions/attention for later visualization.
predictions_array = predictions_array.write(i, predictions)
attention_array = attention_array.write(i, attention_weights)
# Decide what to pass into the next iteration of the decoder.
if target is not None:
# if target is known, use teacher forcing
dec_input = target[:, i + 1]
else:
# Otherwise, pick the most likely continuation
dec_input = tf.argmax(predictions, axis=1, output_type=tf.int32)
# Figure out which sentences just completed.
is_done = tf.logical_or(is_done, tf.equal(dec_input, self.end_token_id))
# Exit early if all our sentences are done.
if tf.reduce_all(is_done):
break
# [time, batch, predictions] -> [batch, time, predictions]
return tf.transpose(predictions_array.stack(), [1, 0, 2]), tf.transpose(attention_array.stack(), [1, 0, 2, 3])
```
## Define the loss function
Our loss function is a word-for-word comparison between true answer and model prediction.

    real = [<start>, 'This', 'is', 'the', 'correct', 'answer', '.', '<end>', '<oov>']
    pred = ['This', 'is', 'what', 'the', 'model', 'emitted', '.', '<end>']

results in comparing

    This/This, is/is, the/what, correct/the, answer/model, ./emitted, <end>/.

and ignoring the rest of the prediction.
```
def loss_fn(real, pred):
# The prediction doesn't include the <start> token.
real = real[:, 1:]
# Cut down the prediction to the correct shape (We ignore extra words).
pred = pred[:, :real.shape[1]]
# If real == <OOV>, then mask out the loss.
mask = 1 - np.equal(real, 0)
loss_ = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=real, logits=pred) * mask
# Sum loss over the time dimension, but average it over the batch dimension.
return tf.reduce_mean(tf.reduce_sum(loss_, axis=1))
```
## Configure model directory
We'll use one directory to save all of our relevant artifacts (summary logs, checkpoints, SavedModel exports, etc.)
```
# Where to save checkpoints, tensorboard summaries, etc.
MODEL_DIR = '/tmp/tensorflow/nmt_attention'
def apply_clean():
if tf.io.gfile.exists(MODEL_DIR):
print('Removing existing model dir: {}'.format(MODEL_DIR))
tf.io.gfile.rmtree(MODEL_DIR)
# Optional: remove existing data
apply_clean()
# Summary writers
train_summary_writer = tf.summary.create_file_writer(
os.path.join(MODEL_DIR, 'summaries', 'train'), flush_millis=10000)
test_summary_writer = tf.summary.create_file_writer(
os.path.join(MODEL_DIR, 'summaries', 'eval'), flush_millis=10000, name='test')
# Set up all stateful objects
encoder = Encoder(len(english_tokenizer.word_index) + 1, EMBEDDING_DIM, ENCODER_SIZE)
decoder = Decoder(len(spanish_tokenizer.word_index) + 1, EMBEDDING_DIM, DECODER_SIZE)
start_token_id = spanish_tokenizer.word_index[START_TOKEN]
end_token_id = spanish_tokenizer.word_index[END_TOKEN]
model = NmtTranslator(encoder, decoder, start_token_id, end_token_id)
# TODO(brianklee): Investigate whether Adam defaults have changed and whether it affects training.
optimizer = tf.keras.optimizers.Adam(epsilon=1e-8)# tf.keras.optimizers.SGD(learning_rate=0.01)#Adam()
# Checkpoints
checkpoint_dir = os.path.join(MODEL_DIR, 'checkpoints')
checkpoint_prefix = os.path.join(checkpoint_dir, 'ckpt')
checkpoint = tf.train.Checkpoint(
encoder=encoder, decoder=decoder, optimizer=optimizer)
# Restore variables on creation if a checkpoint exists.
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))
# SavedModel exports
export_path = os.path.join(MODEL_DIR, 'export')
```
# Visualize the model's output
Let's visualize our model's output. (It hasn't been trained yet, so it will output gibberish.)
We'll use this visualization to check on the model's progress.
```
def plot_attention(attention, sentence, predicted_sentence):
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(1, 1, 1)
ax.matshow(attention, cmap='viridis')
fontdict = {'fontsize': 14}
ax.set_xticklabels([''] + sentence.split(), fontdict=fontdict, rotation=90)
ax.set_yticklabels([''] + predicted_sentence.split(), fontdict=fontdict)
ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
ax.yaxis.set_major_locator(ticker.MultipleLocator(1))
plt.show()
def ints_to_words(tokenizer, ints):
return ' '.join(tokenizer.index_word[int(i)] if int(i) != 0 else '<OOV>' for i in ints)
def sentence_to_ints(tokenizer, sentence):
sentence = preprocess_sentence(sentence)
return tf.constant(tokenizer.texts_to_sequences([sentence])[0])
def translate_and_plot_ints(model, english_tokenizer, spanish_tokenizer, ints, target_ints=None):
"""Run translation on a sentence and plot an attention matrix.
Sentence should be passed in as list of integers.
"""
ints = tf.expand_dims(ints, 0)
predictions, attention = model(ints)
prediction_ids = tf.squeeze(tf.argmax(predictions, axis=-1))
attention = tf.squeeze(attention)
sentence = ints_to_words(english_tokenizer, ints[0])
predicted_sentence = ints_to_words(spanish_tokenizer, prediction_ids)
print(u'Input: {}'.format(sentence))
print(u'Predicted translation: {}'.format(predicted_sentence))
if target_ints is not None:
print(u'Correct translation: {}'.format(ints_to_words(spanish_tokenizer, target_ints)))
plot_attention(attention, sentence, predicted_sentence)
def translate_and_plot_words(model, english_tokenizer, spanish_tokenizer, sentence, target_sentence=None):
"""Same as translate_and_plot_ints, but pass in a sentence as a string."""
english_ints = sentence_to_ints(english_tokenizer, sentence)
spanish_ints = sentence_to_ints(spanish_tokenizer, target_sentence) if target_sentence is not None else None
translate_and_plot_ints(model, english_tokenizer, spanish_tokenizer, english_ints, target_ints=spanish_ints)
translate_and_plot_words(model, english_tokenizer, spanish_tokenizer, u"it's really cold here", u'hace mucho frio aqui')
```
# Train the model
```
def train(model, optimizer, dataset):
"""Trains model on `dataset` using `optimizer`."""
start = time.time()
avg_loss = tf.keras.metrics.Mean('loss', dtype=tf.float32)
for inp, target in dataset:
with tf.GradientTape() as tape:
predictions, _ = model(inp, target=target)
loss = loss_fn(target, predictions)
avg_loss(loss)
gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
if tf.equal(optimizer.iterations % 10, 0):
tf.summary.scalar('loss', avg_loss.result(), step=optimizer.iterations)
avg_loss.reset_states()
rate = 10 / (time.time() - start)
print('Step #%d\tLoss: %.6f (%.2f steps/sec)' % (optimizer.iterations, loss, rate))
start = time.time()
if tf.equal(optimizer.iterations % 100, 0):
# translate_and_plot_words(model, english_index, spanish_index, u"it's really cold here.", u'hace mucho frio aqui.')
translate_and_plot_ints(model, english_tokenizer, spanish_tokenizer, inp[0], target[0])
def test(model, dataset, step_num):
"""Perform an evaluation of `model` on the examples from `dataset`."""
avg_loss = tf.keras.metrics.Mean('loss', dtype=tf.float32)
for inp, target in dataset:
predictions, _ = model(inp)
loss = loss_fn(target, predictions)
avg_loss(loss)
print('Model test set loss: {:0.4f}'.format(avg_loss.result()))
tf.summary.scalar('loss', avg_loss.result(), step=step_num)
NUM_TRAIN_EPOCHS = 10
for i in range(NUM_TRAIN_EPOCHS):
start = time.time()
with train_summary_writer.as_default():
train(model, optimizer, train_ds)
end = time.time()
print('\nTrain time for epoch #{} ({} total steps): {}'.format(
i + 1, optimizer.iterations, end - start))
with test_summary_writer.as_default():
test(model, test_ds, optimizer.iterations)
checkpoint.save(checkpoint_prefix)
# TODO(brianklee): This seems to be complaining about input shapes not being set?
# tf.saved_model.save(model, export_path)
```
## Next steps
* [Download a different dataset](http://www.manythings.org/anki/) to experiment with translations, for example, English to German, or English to French.
* Experiment with training on a larger dataset, or using more epochs
```
```
# Implementing TF-IDF
------------------------------------
Here we implement TF-IDF (Term Frequency - Inverse Document Frequency) for the spam-ham text data.
We will use a hybrid approach: encoding the texts with scikit-learn's TF-IDF vectorizer, then following the regular TensorFlow logistic regression outline.
Creating the TF-IDF vectors requires us to load all the text into memory and count the occurrences of each word before we can start training our model. Because of this, the TF-IDF step is not implemented in TensorFlow; we will use scikit-learn to create the TF-IDF embedding and TensorFlow to fit the logistic model.
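For reference, the weight scikit-learn assigns to term $t$ in document $d$ is $\text{tf-idf}(t, d) = \text{tf}(t, d) \cdot \text{idf}(t)$, where with the default `smooth_idf=True` the inverse document frequency is $\text{idf}(t) = \ln\frac{1 + n}{1 + \text{df}(t)} + 1$ ($n$ = number of documents, $\text{df}(t)$ = number of documents containing $t$), and each document vector is then L2-normalized.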
We start by loading the necessary libraries.
```
import tensorflow as tf
import matplotlib.pyplot as plt
import csv
import numpy as np
import os
import string
import requests
import io
import nltk
from zipfile import ZipFile
from sklearn.feature_extraction.text import TfidfVectorizer
from tensorflow.python.framework import ops
ops.reset_default_graph()
```
Start a computational graph session.
```
sess = tf.Session()
```
We set two parameters, `batch_size` and `max_features`. `batch_size` is the size of the batch we will train our logistic model on, and `max_features` is the maximum number of TF-IDF features (words) we will use in our logistic regression.
```
batch_size = 200
max_features = 1000
```
Check if the data was already downloaded; otherwise download it and save it for future use
```
save_file_name = 'temp_spam_data.csv'
if os.path.isfile(save_file_name):
text_data = []
with open(save_file_name, 'r') as temp_output_file:
reader = csv.reader(temp_output_file)
for row in reader:
text_data.append(row)
else:
zip_url = 'http://archive.ics.uci.edu/ml/machine-learning-databases/00228/smsspamcollection.zip'
r = requests.get(zip_url)
z = ZipFile(io.BytesIO(r.content))
file = z.read('SMSSpamCollection')
# Format Data
text_data = file.decode()
text_data = text_data.encode('ascii',errors='ignore')
text_data = text_data.decode().split('\n')
text_data = [x.split('\t') for x in text_data if len(x)>=1]
# And write to csv
with open(save_file_name, 'w') as temp_output_file:
writer = csv.writer(temp_output_file)
writer.writerows(text_data)
```
We now clean our texts. This will decrease our vocabulary size by converting everything to lower case, removing punctuation and getting rid of numbers.
```
texts = [x[1] for x in text_data]
target = [x[0] for x in text_data]
# Relabel 'spam' as 1, 'ham' as 0
target = [1. if x=='spam' else 0. for x in target]
# Normalize text
# Lower case
texts = [x.lower() for x in texts]
# Remove punctuation
texts = [''.join(c for c in x if c not in string.punctuation) for x in texts]
# Remove numbers
texts = [''.join(c for c in x if c not in '0123456789') for x in texts]
# Trim extra whitespace
texts = [' '.join(x.split()) for x in texts]
```
Define tokenizer function and create the TF-IDF vectors with SciKit-Learn.
```
import nltk
nltk.download('punkt')
def tokenizer(text):
words = nltk.word_tokenize(text)
return words
# Create TF-IDF of texts
tfidf = TfidfVectorizer(tokenizer=tokenizer, stop_words='english', max_features=max_features)
sparse_tfidf_texts = tfidf.fit_transform(texts)
```
Split up data set into train/test.
```
train_indices = np.random.choice(sparse_tfidf_texts.shape[0], round(0.8*sparse_tfidf_texts.shape[0]), replace=False)
test_indices = np.array(list(set(range(sparse_tfidf_texts.shape[0])) - set(train_indices)))
texts_train = sparse_tfidf_texts[train_indices]
texts_test = sparse_tfidf_texts[test_indices]
target_train = np.array([x for ix, x in enumerate(target) if ix in train_indices])
target_test = np.array([x for ix, x in enumerate(target) if ix in test_indices])
```
Now we create the variables and placeholders necessary for logistic regression, after which we declare our logistic regression operation. Remember that the sigmoid part of the logistic regression will be in the loss function.
```
# Create variables for logistic regression
A = tf.Variable(tf.random_normal(shape=[max_features,1]))
b = tf.Variable(tf.random_normal(shape=[1,1]))
# Initialize placeholders
x_data = tf.placeholder(shape=[None, max_features], dtype=tf.float32)
y_target = tf.placeholder(shape=[None, 1], dtype=tf.float32)
# Declare logistic model (sigmoid in loss function)
model_output = tf.add(tf.matmul(x_data, A), b)
```
Next, we declare the loss function (which has the sigmoid in it), and the prediction function. The prediction function will have to have a sigmoid inside of it because it is not in the model output.
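Folding the sigmoid into the loss is done for numerical stability: for logits $z$ and labels $y$, `tf.nn.sigmoid_cross_entropy_with_logits` computes $\max(z, 0) - z\,y + \log\left(1 + e^{-|z|}\right)$, which is algebraically identical to the cross entropy $-y\log\sigma(z) - (1-y)\log(1-\sigma(z))$ but avoids overflow for large $|z|$.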
```
# Declare loss function (Cross Entropy loss)
loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=model_output, labels=y_target))
# Prediction
prediction = tf.round(tf.sigmoid(model_output))
predictions_correct = tf.cast(tf.equal(prediction, y_target), tf.float32)
accuracy = tf.reduce_mean(predictions_correct)
```
Now we create the optimization function and initialize the model variables.
```
# Declare optimizer
my_opt = tf.train.GradientDescentOptimizer(0.0025)
train_step = my_opt.minimize(loss)
# Intitialize Variables
init = tf.global_variables_initializer()
sess.run(init)
```
Finally, we perform our logistic regression on the 1000 TF-IDF features.
```
train_loss = []
test_loss = []
train_acc = []
test_acc = []
i_data = []
for i in range(10000):
rand_index = np.random.choice(texts_train.shape[0], size=batch_size)
rand_x = texts_train[rand_index].todense()
rand_y = np.transpose([target_train[rand_index]])
sess.run(train_step, feed_dict={x_data: rand_x, y_target: rand_y})
# Only record loss and accuracy every 100 generations
if (i+1)%100==0:
i_data.append(i+1)
train_loss_temp = sess.run(loss, feed_dict={x_data: rand_x, y_target: rand_y})
train_loss.append(train_loss_temp)
test_loss_temp = sess.run(loss, feed_dict={x_data: texts_test.todense(), y_target: np.transpose([target_test])})
test_loss.append(test_loss_temp)
train_acc_temp = sess.run(accuracy, feed_dict={x_data: rand_x, y_target: rand_y})
train_acc.append(train_acc_temp)
test_acc_temp = sess.run(accuracy, feed_dict={x_data: texts_test.todense(), y_target: np.transpose([target_test])})
test_acc.append(test_acc_temp)
if (i+1)%500==0:
acc_and_loss = [i+1, train_loss_temp, test_loss_temp, train_acc_temp, test_acc_temp]
acc_and_loss = [np.round(x,2) for x in acc_and_loss]
print('Generation # {}. Train Loss (Test Loss): {:.2f} ({:.2f}). Train Acc (Test Acc): {:.2f} ({:.2f})'.format(*acc_and_loss))
```
Here is matplotlib code to plot the loss and accuracies.
```
# Plot loss over time
plt.plot(i_data, train_loss, 'k-', label='Train Loss')
plt.plot(i_data, test_loss, 'r--', label='Test Loss', linewidth=4)
plt.title('Cross Entropy Loss per Generation')
plt.xlabel('Generation')
plt.ylabel('Cross Entropy Loss')
plt.legend(loc='upper right')
plt.show()
# Plot train and test accuracy
plt.plot(i_data, train_acc, 'k-', label='Train Set Accuracy')
plt.plot(i_data, test_acc, 'r--', label='Test Set Accuracy', linewidth=4)
plt.title('Train and Test Accuracy')
plt.xlabel('Generation')
plt.ylabel('Accuracy')
plt.legend(loc='lower right')
plt.show()
```
# Mixture Density Networks with Edward, Keras and TensorFlow
This notebook explains how to implement Mixture Density Networks (MDN) with Edward, Keras and TensorFlow.
Keep in mind that if you want to use Keras and TensorFlow, like we do in this notebook, you need to set the backend of Keras to TensorFlow, [here](http://keras.io/backend/) it is explained how to do that.
If you are not familiar with MDNs, have a look at the [following blog post](http://cbonnett.github.io/MDN.html) or at the original [paper](http://research.microsoft.com/en-us/um/people/cmbishop/downloads/Bishop-NCRG-94-004.pdf) by Bishop.
Edward implements many probability distribution functions that are TensorFlow compatible, which makes it attractive to use Edward for MDNs.
Here are all the distributions that are currently implemented in Edward, with more to come:
1. [Bernoulli](https://github.com/blei-lab/edward/blob/master/edward/stats/distributions.py#L49)
2. [Beta](https://github.com/blei-lab/edward/blob/master/edward/stats/distributions.py#L58)
3. [Binomial](https://github.com/blei-lab/edward/blob/master/edward/stats/distributions.py#L68)
4. [Chi Squared](https://github.com/blei-lab/edward/blob/master/edward/stats/distributions.py#L79)
5. [Dirichlet](https://github.com/blei-lab/edward/blob/master/edward/stats/distributions.py#L89)
6. [Exponential](https://github.com/blei-lab/edward/blob/master/edward/stats/distributions.py#L109)
7. [Gamma](https://github.com/blei-lab/edward/blob/master/edward/stats/distributions.py#L118)
8. [Geometric](https://github.com/blei-lab/edward/blob/master/edward/stats/distributions.py#L129)
9. [Inverse Gamma](https://github.com/blei-lab/edward/blob/master/edward/stats/distributions.py#L138)
10. [log Normal](https://github.com/blei-lab/edward/blob/master/edward/stats/distributions.py#L155)
11. [Multinomial](https://github.com/blei-lab/edward/blob/master/edward/stats/distributions.py#L165)
12. [Multivariate Normal](https://github.com/blei-lab/edward/blob/master/edward/stats/distributions.py#L194)
13. [Negative Binomial](https://github.com/blei-lab/edward/blob/master/edward/stats/distributions.py#L283)
14. [Normal](https://github.com/blei-lab/edward/blob/master/edward/stats/distributions.py#L294)
15. [Poisson](https://github.com/blei-lab/edward/blob/master/edward/stats/distributions.py#L310)
16. [Student-t](https://github.com/blei-lab/edward/blob/master/edward/stats/distributions.py#L319)
17. [Truncated Normal](https://github.com/blei-lab/edward/blob/master/edward/stats/distributions.py#L333)
18. [Uniform](https://github.com/blei-lab/edward/blob/master/edward/stats/distributions.py#L352)
Let's start with the necessary imports.
```
# imports
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import edward as ed
import numpy as np
import tensorflow as tf
from edward.stats import norm # Normal distribution from Edward.
from keras import backend as K
from keras.layers import Dense
from sklearn.cross_validation import train_test_split
```
We will need some functions to plot the results later on; these are defined in the next code block.
```
from scipy.stats import norm as normal
def plot_normal_mix(pis, mus, sigmas, ax, label='', comp=True):
"""
Plots the mixture of Normal models to axis=ax
comp=True plots all components of mixture model
"""
x = np.linspace(-10.5, 10.5, 250)
final = np.zeros_like(x)
for i, (weight_mix, mu_mix, sigma_mix) in enumerate(zip(pis, mus, sigmas)):
temp = normal.pdf(x, mu_mix, sigma_mix) * weight_mix
final = final + temp
if comp:
ax.plot(x, temp, label='Normal ' + str(i))
ax.plot(x, final, label='Mixture of Normals ' + label)
ax.legend(fontsize=13)
def sample_from_mixture(x, pred_weights, pred_means, pred_std, amount):
"""
Draws samples from mixture model.
Returns 2 d array with input X and sample from prediction of Mixture Model
"""
samples = np.zeros((amount, 2))
n_mix = len(pred_weights[0])
to_choose_from = np.arange(n_mix)
for j,(weights, means, std_devs) in enumerate(zip(pred_weights, pred_means, pred_std)):
index = np.random.choice(to_choose_from, p=weights)
samples[j,1]= normal.rvs(means[index], std_devs[index], size=1)
samples[j,0]= x[j]
if j == amount -1:
break
return samples
```
## Making some toy-data to play with.
This is the same toy-data problem set as used in the [blog post](http://blog.otoro.net/2015/11/24/mixture-density-networks-with-tensorflow/) by Otoro where he explains MDNs. This is an inverse problem: as you can see, for every ```X``` there are multiple ```y``` solutions.
```
def build_toy_dataset(nsample=40000):
y_data = np.float32(np.random.uniform(-10.5, 10.5, (1, nsample))).T
r_data = np.float32(np.random.normal(size=(nsample, 1))) # random noise
x_data = np.float32(np.sin(0.75 * y_data) * 7.0 + y_data * 0.5 + r_data * 1.0)
return train_test_split(x_data, y_data, random_state=42, train_size=0.1)
X_train, X_test, y_train, y_test = build_toy_dataset()
print("Size of features in training data: {:s}".format(X_train.shape))
print("Size of output in training data: {:s}".format(y_train.shape))
print("Size of features in test data: {:s}".format(X_test.shape))
print("Size of output in test data: {:s}".format(y_test.shape))
sns.regplot(X_train, y_train, fit_reg=False)
```
### Building a MDN using Edward, Keras and TF
We will define a class that can be used to construct MDNs. In this notebook we will be using a mixture of Normal distributions. The advantage of defining a class is that we can easily reuse it to build other MDNs with a different number of mixture components. Furthermore, this makes it play nicely with Edward.
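Concretely, for an input $x$ the network outputs mixture weights $\pi_k(x)$, means $\mu_k(x)$ and standard deviations $\sigma_k(x)$, and models the conditional density $p(y \mid x) = \sum_{k=1}^{K} \pi_k(x)\,\mathcal{N}\big(y;\, \mu_k(x), \sigma_k(x)\big)$; the `log_prob` method below evaluates the log of this quantity and sums it over the data.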
```
class MixtureDensityNetwork:
"""
Mixture density network for outputs y on inputs x.
p((x,y), (z,theta))
= sum_{k=1}^K pi_k(x; theta) Normal(y; mu_k(x; theta), sigma_k(x; theta))
where pi, mu, sigma are the output of a neural network taking x
as input and with parameters theta. There are no latent variables
z, which are hidden variables we aim to be Bayesian about.
"""
def __init__(self, K):
self.K = K # here K is the amount of Mixtures
def mapping(self, X):
"""pi, mu, sigma = NN(x; theta)"""
hidden1 = Dense(15, activation='relu')(X) # fully-connected layer with 15 hidden units
hidden2 = Dense(15, activation='relu')(hidden1)
self.mus = Dense(self.K)(hidden2) # the means
self.sigmas = Dense(self.K, activation=K.exp)(hidden2) # the variance
self.pi = Dense(self.K, activation=K.softmax)(hidden2) # the mixture components
def log_prob(self, xs, zs=None):
"""log p((xs,ys), (z,theta)) = sum_{n=1}^N log p((xs[n,:],ys[n]), theta)"""
# Note there are no parameters we're being Bayesian about. The
# parameters are baked into how we specify the neural networks.
X, y = xs
self.mapping(X)
result = tf.exp(norm.logpdf(y, self.mus, self.sigmas))
result = tf.mul(result, self.pi)
result = tf.reduce_sum(result, 1)
result = tf.log(result)
return tf.reduce_sum(result)
```
We can set a seed in Edward so we can reproduce all the random components. The following line:
```ed.set_seed(42)```
sets the seed in NumPy and TensorFlow under the [hood](https://github.com/blei-lab/edward/blob/master/edward/util.py#L191). We use the class we defined above to instantiate the MDN with 20 mixtures; this can now be used as an Edward model.
```
ed.set_seed(42)
model = MixtureDensityNetwork(20)
```
In the following code cell we define the TensorFlow placeholders that are then used to define the Edward data model.
The following line passes the ```model``` and ```data``` to ```MAP``` from Edward which is then used to initialise the TensorFlow variables.
```inference = ed.MAP(model, data)```
MAP is a Bayesian concept and stands for Maximum A Posteriori; it tries to find the set of parameters that maximizes the posterior distribution. In the example here we don't have a prior, which in a Bayesian context means we have a flat prior. For a flat prior, MAP is equivalent to Maximum Likelihood Estimation. Edward is designed to be Bayesian about its statistical inference. The cool thing about MDNs with Edward is that we could easily include priors!
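In symbols, MAP solves $\hat{\theta}_{\text{MAP}} = \arg\max_{\theta}\big[\log p(\mathcal{D} \mid \theta) + \log p(\theta)\big]$; with a flat prior the $\log p(\theta)$ term is constant, so the optimum coincides with the maximum likelihood estimate $\arg\max_{\theta} \log p(\mathcal{D} \mid \theta)$.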
```
X = tf.placeholder(tf.float32, shape=(None, 1))
y = tf.placeholder(tf.float32, shape=(None, 1))
data = ed.Data([X, y]) # Make Edward Data model
inference = ed.MAP(model, data) # Make the inference model
sess = tf.Session() # Start TF session
K.set_session(sess) # Pass session info to Keras
inference.initialize(sess=sess) # Initialize all TF variables using the Edward interface
```
Having done that we can train the MDN in TensorFlow just like we normally would, and we can get out the predictions we are interested in from ```model```, in this case:
* ```model.pi``` the mixture components,
* ```model.mus``` the means,
* ```model.sigmas``` the standard deviations.
This is done in the last line of the code cell :
```
pred_weights, pred_means, pred_std = sess.run([model.pi, model.mus, model.sigmas],
feed_dict={X: X_test})
```
The default minimisation technique used is ADAM with a decaying scale factor.
This can be seen [here](https://github.com/blei-lab/edward/blob/master/edward/inferences.py#L94) in the code base of Edward. Having a decaying scale factor is not the standard way of using ADAM, this is inspired by the Automatic Differentiation Variational Inference [(ADVI)](http://arxiv.org/abs/1603.00788) work where it was used in the RMSPROP minimizer.
The loss that is minimised in the ```MAP``` model from Edward is the negative log-likelihood, this calculation uses the ```log_prob``` method in the ```MixtureDensityNetwork``` class we defined above.
The ```build_loss``` method in the ```MAP``` class can be found [here](https://github.com/blei-lab/edward/blob/master/edward/inferences.py#L396).
However, the method ```inference.loss``` used below returns the log-likelihood, so we expect this quantity to be maximized.
```
NEPOCH = 1000
train_loss = np.zeros(NEPOCH)
test_loss = np.zeros(NEPOCH)
for i in range(NEPOCH):
_, train_loss[i] = sess.run([inference.train, inference.loss],
feed_dict={X: X_train, y: y_train})
test_loss[i] = sess.run(inference.loss, feed_dict={X: X_test, y: y_test})
pred_weights, pred_means, pred_std = sess.run([model.pi, model.mus, model.sigmas],
feed_dict={X: X_test})
```
We can plot the log-likelihood of the training and test sample as a function of training epoch.
Keep in mind that ```inference.loss``` returns the total log-likelihood, so not the loss per data point, so in the plotting routine we divide by the size of the train and test data respectively.
We see that it converges after 400 training steps.
```
fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(16, 3.5))
plt.plot(np.arange(NEPOCH), test_loss/len(X_test), label='Test')
plt.plot(np.arange(NEPOCH), train_loss/len(X_train), label='Train')
plt.legend(fontsize=20)
plt.xlabel('Epoch', fontsize=15)
plt.ylabel('Log-likelihood', fontsize=15)
```
Next we can have a look at how some individual examples perform. Keep in mind this is an inverse problem,
so we can't get the answer exactly right; we can only hope that the truth lies in a region where the model assigns high probability.
In the next plot the truth is the vertical grey line while the blue line is the prediction of the mixture density network. As you can see, we didn't do too badly.
```
obj = [0, 4, 6]
fig, axes = plt.subplots(nrows=3, ncols=1, figsize=(16, 6))
plot_normal_mix(pred_weights[obj][0], pred_means[obj][0], pred_std[obj][0], axes[0], comp=False)
axes[0].axvline(x=y_test[obj][0], color='black', alpha=0.5)
plot_normal_mix(pred_weights[obj][2], pred_means[obj][2], pred_std[obj][2], axes[1], comp=False)
axes[1].axvline(x=y_test[obj][2], color='black', alpha=0.5)
plot_normal_mix(pred_weights[obj][1], pred_means[obj][1], pred_std[obj][1], axes[2], comp=False)
axes[2].axvline(x=y_test[obj][1], color='black', alpha=0.5)
```
We can check the ensemble by drawing samples of the prediction and plotting the density of those.
Seems the MDN learned what it needed to.
```
a = sample_from_mixture(X_test, pred_weights, pred_means, pred_std, amount=len(X_test))
sns.jointplot(a[:,0], a[:,1], kind="hex", color="#4CB391", ylim=(-10,10), xlim=(-14,14))
```
<a href="https://colab.research.google.com/github/cseveriano/spatio-temporal-forecasting/blob/master/notebooks/thesis_experiments/20200924_eMVFTS_Wind_Energy_Raw.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## Forecasting experiments for GEFCOM 2012 Wind Dataset
## Install Libs
```
!pip3 install -U git+https://github.com/PYFTS/pyFTS
!pip3 install -U git+https://github.com/cseveriano/spatio-temporal-forecasting
!pip3 install -U git+https://github.com/cseveriano/evolving_clustering
!pip3 install -U git+https://github.com/cseveriano/fts2image
!pip3 install -U hyperopt
!pip3 install -U pyts
import pandas as pd
import numpy as np
from hyperopt import hp
from spatiotemporal.util import parameter_tuning, sampling
from spatiotemporal.util import experiments as ex
from sklearn.metrics import mean_squared_error
from google.colab import files
import matplotlib.pyplot as plt
import pickle
import math
from pyFTS.benchmarks import Measures
from pyts.decomposition import SingularSpectrumAnalysis
from google.colab import files
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
import datetime
```
## Aux Functions
```
def normalize(df):
mindf = df.min()
maxdf = df.max()
return (df-mindf)/(maxdf-mindf)
def denormalize(norm, _min, _max):
return [(n * (_max-_min)) + _min for n in norm]
def getRollingWindow(index):
pivot = index
train_start = pivot.strftime('%Y-%m-%d')
pivot = pivot + datetime.timedelta(days=20)
train_end = pivot.strftime('%Y-%m-%d')
pivot = pivot + datetime.timedelta(days=1)
test_start = pivot.strftime('%Y-%m-%d')
pivot = pivot + datetime.timedelta(days=6)
test_end = pivot.strftime('%Y-%m-%d')
return train_start, train_end, test_start, test_end
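# Example with the first index of the normalized data (2009-08-01): this yields
# train = 2009-08-01 .. 2009-08-21 and test = 2009-08-22 .. 2009-08-28; the
# rolling-CV loops below then advance the window start by 7 days per split.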
def calculate_rolling_error(cv_name, df, forecasts, order_list):
cv_results = pd.DataFrame(columns=['Split', 'RMSE', 'SMAPE'])
limit = df.index[-1].strftime('%Y-%m-%d')
test_end = ""
index = df.index[0]
for i in np.arange(len(forecasts)):
train_start, train_end, test_start, test_end = getRollingWindow(index)
test = df[test_start : test_end]
yhat = forecasts[i]
order = order_list[i]
rmse = Measures.rmse(test.iloc[order:], yhat[:-1])
smape = Measures.smape(test.iloc[order:], yhat[:-1])
res = {'Split' : index.strftime('%Y-%m-%d') ,'RMSE' : rmse, 'SMAPE' : smape}
cv_results = cv_results.append(res, ignore_index=True)
cv_results.to_csv(cv_name+".csv")
index = index + datetime.timedelta(days=7)
return cv_results
def get_final_forecast(norm_forecasts):
forecasts_final = []
for i in np.arange(len(norm_forecasts)):
f_raw = denormalize(norm_forecasts[i], min_raw, max_raw)
forecasts_final.append(f_raw)
return forecasts_final
from spatiotemporal.test import methods_space_oahu as ms
from spatiotemporal.util import parameter_tuning, sampling
from spatiotemporal.util import experiments as ex
from sklearn.metrics import mean_squared_error
import numpy as np
from hyperopt import fmin, tpe, hp, STATUS_OK, Trials
from hyperopt import space_eval
import traceback
import pickle
def calculate_error(loss_function, test_df, forecast, offset):
error = loss_function(test_df.iloc[(offset):], forecast)
print("Error : "+str(error))
return error
def method_optimize(experiment, forecast_method, train_df, test_df, space, loss_function, max_evals):
def objective(params):
print(params)
try:
_output = list(params['output'])
forecast = forecast_method(train_df, test_df, params)
_step = params.get('step', 1)
offset = params['order'] + _step - 1
error = calculate_error(loss_function, test_df[_output], forecast, offset)
except Exception:
traceback.print_exc()
error = 1000
return {'loss': error, 'status': STATUS_OK}
print("Running experiment: " + experiment)
trials = Trials()
best = fmin(objective, space, algo=tpe.suggest, max_evals=max_evals, trials=trials)
print('best parameters: ')
print(space_eval(space, best))
pickle.dump(best, open("best_" + experiment + ".pkl", "wb"))
pickle.dump(trials, open("trials_" + experiment + ".pkl", "wb"))
def run_search(methods, data, train, loss_function, max_evals=100, resample=None):
if resample:
data = sampling.resample_data(data, resample)
train_df, test_df = sampling.train_test_split(data, train)
for experiment, method, space in methods:
method_optimize(experiment, method, train_df, test_df, space, loss_function, max_evals)
```
## Load Dataset
```
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import math
from sklearn.metrics import mean_squared_error
# column names
wind_farms = ['wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7']
# read raw dataset
import pandas as pd
df = pd.read_csv('https://query.data.world/s/3zx2jusk4z6zvlg2dafqgshqp3oao6', parse_dates=['date'], index_col=0)
df.index = pd.to_datetime(df.index, format="%Y%m%d%H")
interval = ((df.index >= '2009-07') & (df.index <= '2010-08'))
df = df.loc[interval]
#Normalize Data
# Save Min-Max for Denorm
min_raw = df.min()
max_raw = df.max()
# Perform Normalization
norm_df = normalize(df)
# Tuning split
tuning_df = norm_df["2009-07-01":"2009-07-31"]
norm_df = norm_df["2009-08-01":"2010-08-30"]
df = df["2009-08-01":"2010-08-30"]
```
## Forecasting Methods
### Persistence
```
def persistence_forecast(train, test, step):
predictions = []
for t in np.arange(0,len(test), step):
yhat = [test.iloc[t]] * step
predictions.extend(yhat)
return predictions
def rolling_cv_persistence(df, step):
forecasts = []
lags_list = []
limit = df.index[-1].strftime('%Y-%m-%d')
test_end = ""
index = df.index[0]
while test_end < limit :
print("Index: ", index.strftime('%Y-%m-%d'))
train_start, train_end, test_start, test_end = getRollingWindow(index)
index = index + datetime.timedelta(days=7)
train = df[train_start : train_end]
test = df[test_start : test_end]
yhat = persistence_forecast(train, test, step)
lags_list.append(1)
forecasts.append(yhat)
return forecasts, lags_list
forecasts_raw, order_list = rolling_cv_persistence(norm_df, 1)
forecasts_final = get_final_forecast(forecasts_raw)
calculate_rolling_error("rolling_cv_wind_raw_persistence", norm_df, forecasts_final, order_list)
files.download('rolling_cv_wind_raw_persistence.csv')
```
### VAR
```
from statsmodels.tsa.api import VAR
def evaluate_VAR_models(test_name, train, validation,target, maxlags_list):
var_results = pd.DataFrame(columns=['Order','RMSE'])
best_score, best_cfg, best_model = float("inf"), None, None
for lgs in maxlags_list:
model = VAR(train)
results = model.fit(maxlags=lgs, ic='aic')
order = results.k_ar
forecast = []
for i in range(len(validation)-order) :
forecast.extend(results.forecast(validation.values[i:i+order],1))
forecast_df = pd.DataFrame(columns=validation.columns, data=forecast)
rmse = Measures.rmse(validation[target].iloc[order:], forecast_df[target].values)
if rmse < best_score:
best_score, best_cfg, best_model = rmse, order, results
res = {'Order' : str(order) ,'RMSE' : rmse}
print('VAR (%s) RMSE=%.3f' % (str(order),rmse))
var_results = var_results.append(res, ignore_index=True)
var_results.to_csv(test_name+".csv")
print('Best VAR(%s) RMSE=%.3f' % (best_cfg, best_score))
return best_model
def var_forecast(train, test, params):
order = params['order']
step = params['step']
model = VAR(train.values)
results = model.fit(maxlags=order)
lag_order = results.k_ar
print("Lag order:" + str(lag_order))
forecast = []
for i in np.arange(0,len(test)-lag_order+1,step) :
forecast.extend(results.forecast(test.values[i:i+lag_order],step))
forecast_df = pd.DataFrame(columns=test.columns, data=forecast)
return forecast_df.values, lag_order
def rolling_cv_var(df, params):
forecasts = []
order_list = []
limit = df.index[-1].strftime('%Y-%m-%d')
test_end = ""
index = df.index[0]
while test_end < limit :
print("Index: ", index.strftime('%Y-%m-%d'))
train_start, train_end, test_start, test_end = getRollingWindow(index)
index = index + datetime.timedelta(days=7)
train = df[train_start : train_end]
test = df[test_start : test_end]
        # Fit on the train window and forecast over the test window
yhat, lag_order = var_forecast(train, test, params)
forecasts.append(yhat)
order_list.append(lag_order)
return forecasts, order_list
params_raw = {'order': 4, 'step': 1}
forecasts_raw, order_list = rolling_cv_var(norm_df, params_raw)
forecasts_final = get_final_forecast(forecasts_raw)
calculate_rolling_error("rolling_cv_wind_raw_var", df, forecasts_final, order_list)
files.download('rolling_cv_wind_raw_var.csv')
```
### e-MVFTS
```
from spatiotemporal.models.clusteredmvfts.fts import evolvingclusterfts
def evolvingfts_forecast(train_df, test_df, params, train_model=True):
_variance_limit = params['variance_limit']
_defuzzy = params['defuzzy']
_t_norm = params['t_norm']
_membership_threshold = params['membership_threshold']
_order = params['order']
_step = params['step']
model = evolvingclusterfts.EvolvingClusterFTS(variance_limit=_variance_limit, defuzzy=_defuzzy, t_norm=_t_norm,
membership_threshold=_membership_threshold)
model.fit(train_df.values, order=_order, verbose=False)
forecast = model.predict(test_df.values, steps_ahead=_step)
forecast_df = pd.DataFrame(data=forecast, columns=test_df.columns)
return forecast_df.values
def rolling_cv_evolving(df, params):
forecasts = []
order_list = []
limit = df.index[-1].strftime('%Y-%m-%d')
test_end = ""
index = df.index[0]
first_time = True
while test_end < limit :
print("Index: ", index.strftime('%Y-%m-%d'))
train_start, train_end, test_start, test_end = getRollingWindow(index)
index = index + datetime.timedelta(days=7)
train = df[train_start : train_end]
test = df[test_start : test_end]
        # Fit on the train window and forecast over the test window
yhat = list(evolvingfts_forecast(train, test, params, train_model=first_time))
        #yhat.append(yhat[-1])  # to keep the shape of the metrics vector consistent
forecasts.append(yhat)
order_list.append(params['order'])
first_time = False
return forecasts, order_list
params_raw = {'variance_limit': 0.001, 'order': 2, 'defuzzy': 'weighted', 't_norm': 'threshold', 'membership_threshold': 0.6, 'step':1}
forecasts_raw, order_list = rolling_cv_evolving(norm_df, params_raw)
forecasts_final = get_final_forecast(forecasts_raw)
calculate_rolling_error("rolling_cv_wind_raw_emvfts", df, forecasts_final, order_list)
files.download('rolling_cv_wind_raw_emvfts.csv')
```
### MLP
```
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers import Dropout
from keras.constraints import maxnorm
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.layers.normalization import BatchNormalization
# convert series to supervised learning
def series_to_supervised(data, n_in=1, n_out=1, dropnan=True):
n_vars = 1 if type(data) is list else data.shape[1]
df = pd.DataFrame(data)
cols, names = list(), list()
# input sequence (t-n, ... t-1)
for i in range(n_in, 0, -1):
cols.append(df.shift(i))
names += [('var%d(t-%d)' % (j+1, i)) for j in range(n_vars)]
# forecast sequence (t, t+1, ... t+n)
for i in range(0, n_out):
cols.append(df.shift(-i))
if i == 0:
names += [('var%d(t)' % (j+1)) for j in range(n_vars)]
else:
names += [('var%d(t+%d)' % (j+1, i)) for j in range(n_vars)]
# put it all together
agg = pd.concat(cols, axis=1)
agg.columns = names
# drop rows with NaN values
if dropnan:
agg.dropna(inplace=True)
return agg
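# Example (illustrative): with 2 input columns and n_in=2, n_out=1, the output
# frame has columns var1(t-2), var2(t-2), var1(t-1), var2(t-1), var1(t), var2(t),
# i.e. lagged copies of every column followed by the current-time targets.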
```
#### MLP Parameter Tuning
```
from spatiotemporal.util import parameter_tuning, sampling
from spatiotemporal.util import experiments as ex
from sklearn.metrics import mean_squared_error
from hyperopt import hp
import numpy as np
mlp_space = {'choice':
hp.choice('num_layers',
[
{'layers': 'two',
},
{'layers': 'three',
'units3': hp.choice('units3', [8, 16, 64, 128, 256, 512]),
'dropout3': hp.choice('dropout3', [0, 0.25, 0.5, 0.75])
}
]),
'units1': hp.choice('units1', [8, 16, 64, 128, 256, 512]),
'units2': hp.choice('units2', [8, 16, 64, 128, 256, 512]),
'dropout1': hp.choice('dropout1', [0, 0.25, 0.5, 0.75]),
'dropout2': hp.choice('dropout2', [0, 0.25, 0.5, 0.75]),
'batch_size': hp.choice('batch_size', [28, 64, 128, 256, 512]),
'order': hp.choice('order', [1, 2, 3]),
'input': hp.choice('input', [wind_farms]),
'output': hp.choice('output', [wind_farms]),
'epochs': hp.choice('epochs', [100, 200, 300])}
def mlp_tuning(train_df, test_df, params):
_input = list(params['input'])
_nlags = params['order']
_epochs = params['epochs']
_batch_size = params['batch_size']
nfeat = len(train_df.columns)
nsteps = params.get('step',1)
nobs = _nlags * nfeat
output_index = -nfeat*nsteps
train_reshaped_df = series_to_supervised(train_df[_input], n_in=_nlags, n_out=nsteps)
train_X, train_Y = train_reshaped_df.iloc[:, :nobs].values, train_reshaped_df.iloc[:, output_index:].values
test_reshaped_df = series_to_supervised(test_df[_input], n_in=_nlags, n_out=nsteps)
test_X, test_Y = test_reshaped_df.iloc[:, :nobs].values, test_reshaped_df.iloc[:, output_index:].values
# design network
model = Sequential()
model.add(Dense(params['units1'], input_dim=train_X.shape[1], activation='relu'))
model.add(Dropout(params['dropout1']))
model.add(BatchNormalization())
model.add(Dense(params['units2'], activation='relu'))
model.add(Dropout(params['dropout2']))
model.add(BatchNormalization())
if params['choice']['layers'] == 'three':
model.add(Dense(params['choice']['units3'], activation='relu'))
model.add(Dropout(params['choice']['dropout3']))
model.add(BatchNormalization())
model.add(Dense(train_Y.shape[1], activation='sigmoid'))
model.compile(loss='mse', optimizer='adam')
# includes the call back object
model.fit(train_X, train_Y, epochs=_epochs, batch_size=_batch_size, verbose=False, shuffle=False)
# predict the test set
forecast = model.predict(test_X, verbose=False)
return forecast
methods = []
methods.append(("EXP_OAHU_MLP", mlp_tuning, mlp_space))
train_split = 0.6
run_search(methods, tuning_df, train_split, Measures.rmse, max_evals=30, resample=None)
```
#### MLP Forecasting
```
def mlp_multi_forecast(train_df, test_df, params):
nfeat = len(train_df.columns)
nlags = params['order']
nsteps = params.get('step',1)
nobs = nlags * nfeat
output_index = -nfeat*nsteps
train_reshaped_df = series_to_supervised(train_df, n_in=nlags, n_out=nsteps)
train_X, train_Y = train_reshaped_df.iloc[:, :nobs].values, train_reshaped_df.iloc[:, output_index:].values
test_reshaped_df = series_to_supervised(test_df, n_in=nlags, n_out=nsteps)
test_X, test_Y = test_reshaped_df.iloc[:, :nobs].values, test_reshaped_df.iloc[:, output_index:].values
# design network
model = designMLPNetwork(train_X.shape[1], train_Y.shape[1], params)
# fit network
model.fit(train_X, train_Y, epochs=500, batch_size=1000, verbose=False, shuffle=False)
forecast = model.predict(test_X)
# fcst = [f[0] for f in forecast]
fcst = forecast
return fcst
def designMLPNetwork(input_shape, output_shape, params):
model = Sequential()
model.add(Dense(params['units1'], input_dim=input_shape, activation='relu'))
model.add(Dropout(params['dropout1']))
model.add(BatchNormalization())
model.add(Dense(params['units2'], activation='relu'))
model.add(Dropout(params['dropout2']))
model.add(BatchNormalization())
if params['choice']['layers'] == 'three':
model.add(Dense(params['choice']['units3'], activation='relu'))
model.add(Dropout(params['choice']['dropout3']))
model.add(BatchNormalization())
model.add(Dense(output_shape, activation='sigmoid'))
model.compile(loss='mse', optimizer='adam')
return model
def rolling_cv_mlp(df, params):
forecasts = []
order_list = []
limit = df.index[-1].strftime('%Y-%m-%d')
test_end = ""
index = df.index[0]
while test_end < limit :
print("Index: ", index.strftime('%Y-%m-%d'))
train_start, train_end, test_start, test_end = getRollingWindow(index)
index = index + datetime.timedelta(days=7)
train = df[train_start : train_end]
test = df[test_start : test_end]
# Perform forecast
yhat = list(mlp_multi_forecast(train, test, params))
        yhat.append(yhat[-1])  # to keep the shape of the metrics vector consistent
forecasts.append(yhat)
order_list.append(params['order'])
return forecasts, order_list
# Enter best params
params_raw = {'batch_size': 64, 'choice': {'layers': 'two'}, 'dropout1': 0.25, 'dropout2': 0.5, 'epochs': 200, 'input': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'order': 2, 'output': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'units1': 128, 'units2': 128}
forecasts_raw, order_list = rolling_cv_mlp(norm_df, params_raw)
forecasts_final = get_final_forecast(forecasts_raw)
calculate_rolling_error("rolling_cv_wind_raw_mlp_multi", df, forecasts_final, order_list)
files.download('rolling_cv_wind_raw_mlp_multi.csv')
```
### Granular FTS
```
from pyFTS.models.multivariate import granular
from pyFTS.partitioners import Grid, Entropy
from pyFTS.models.multivariate import variable
from pyFTS.common import Membership
from pyFTS.partitioners import Grid, Entropy
```
#### Granular Parameter Tuning
```
granular_space = {
'npartitions': hp.choice('npartitions', [100, 150, 200]),
'order': hp.choice('order', [1, 2]),
'knn': hp.choice('knn', [1, 2, 3, 4, 5]),
'alpha_cut': hp.choice('alpha_cut', [0, 0.1, 0.2, 0.3]),
'input': hp.choice('input', [['wp1', 'wp2', 'wp3']]),
'output': hp.choice('output', [['wp1', 'wp2', 'wp3']])}
def granular_tuning(train_df, test_df, params):
_input = list(params['input'])
_output = list(params['output'])
_npartitions = params['npartitions']
_order = params['order']
_knn = params['knn']
_alpha_cut = params['alpha_cut']
_step = params.get('step',1)
## create explanatory variables
exp_variables = []
for vc in _input:
exp_variables.append(variable.Variable(vc, data_label=vc, alias=vc,
npart=_npartitions, func=Membership.trimf,
data=train_df, alpha_cut=_alpha_cut))
model = granular.GranularWMVFTS(explanatory_variables=exp_variables, target_variable=exp_variables[0], order=_order,
knn=_knn)
model.fit(train_df[_input], num_batches=1)
if _step > 1:
forecast = pd.DataFrame(columns=test_df.columns)
length = len(test_df.index)
for k in range(0,(length -(_order + _step - 1))):
fcst = model.predict(test_df[_input], type='multivariate', start_at=k, steps_ahead=_step)
forecast = forecast.append(fcst.tail(1))
else:
forecast = model.predict(test_df[_input], type='multivariate')
return forecast[_output].values
methods = []
methods.append(("EXP_WIND_GRANULAR", granular_tuning, granular_space))
train_split = 0.6
run_search(methods, tuning_df, train_split, Measures.rmse, max_evals=10, resample=None)
```
#### Granular Forecasting
```
def granular_forecast(train_df, test_df, params):
_input = list(params['input'])
_output = list(params['output'])
_npartitions = params['npartitions']
_knn = params['knn']
_alpha_cut = params['alpha_cut']
_order = params['order']
_step = params.get('step',1)
## create explanatory variables
exp_variables = []
for vc in _input:
exp_variables.append(variable.Variable(vc, data_label=vc, alias=vc,
npart=_npartitions, func=Membership.trimf,
data=train_df, alpha_cut=_alpha_cut))
model = granular.GranularWMVFTS(explanatory_variables=exp_variables, target_variable=exp_variables[0], order=_order,
knn=_knn)
model.fit(train_df[_input], num_batches=1)
if _step > 1:
forecast = pd.DataFrame(columns=test_df.columns)
length = len(test_df.index)
for k in range(0,(length -(_order + _step - 1))):
fcst = model.predict(test_df[_input], type='multivariate', start_at=k, steps_ahead=_step)
forecast = forecast.append(fcst.tail(1))
else:
forecast = model.predict(test_df[_input], type='multivariate')
return forecast[_output].values
def rolling_cv_granular(df, params):
forecasts = []
order_list = []
limit = df.index[-1].strftime('%Y-%m-%d')
test_end = ""
index = df.index[0]
while test_end < limit :
print("Index: ", index.strftime('%Y-%m-%d'))
train_start, train_end, test_start, test_end = getRollingWindow(index)
index = index + datetime.timedelta(days=7)
train = df[train_start : train_end]
test = df[test_start : test_end]
# Perform forecast
yhat = list(granular_forecast(train, test, params))
        yhat.append(yhat[-1])  # to keep the shape of the metrics vector consistent
forecasts.append(yhat)
order_list.append(params['order'])
return forecasts, order_list
def granular_get_final_forecast(forecasts_raw, input):
forecasts_final = []
l_min = df[input].min()
l_max = df[input].max()
for i in np.arange(len(forecasts_raw)):
f_raw = denormalize(forecasts_raw[i], l_min, l_max)
forecasts_final.append(f_raw)
return forecasts_final
# Enter best params
params_raw = {'alpha_cut': 0.3, 'input': ('wp1', 'wp2', 'wp3'), 'knn': 5, 'npartitions': 200, 'order': 2, 'output': ('wp1', 'wp2', 'wp3')}
forecasts_raw, order_list = rolling_cv_granular(norm_df, params_raw)
forecasts_final = granular_get_final_forecast(forecasts_raw, list(params_raw['input']))
calculate_rolling_error("rolling_cv_wind_raw_granular", df[list(params_raw['input'])], forecasts_final, order_list)
files.download('rolling_cv_wind_raw_granular.csv')
```
## Result Analysis
```
import pandas as pd
from google.colab import files
files.upload()
def createBoxplot(filename, data, xticklabels, ylabel):
# Create a figure instance
fig = plt.figure(1, figsize=(9, 6))
# Create an axes instance
ax = fig.add_subplot(111)
# Create the boxplot
bp = ax.boxplot(data, patch_artist=True)
## change outline color, fill color and linewidth of the boxes
for box in bp['boxes']:
# change outline color
box.set( color='#7570b3', linewidth=2)
# change fill color
box.set( facecolor = '#AACCFF' )
## change color and linewidth of the whiskers
for whisker in bp['whiskers']:
whisker.set(color='#7570b3', linewidth=2)
## change color and linewidth of the caps
for cap in bp['caps']:
cap.set(color='#7570b3', linewidth=2)
## change color and linewidth of the medians
for median in bp['medians']:
median.set(color='#FFE680', linewidth=2)
## change the style of fliers and their fill
for flier in bp['fliers']:
flier.set(marker='o', color='#e7298a', alpha=0.5)
## Custom x-axis labels
ax.set_xticklabels(xticklabels)
ax.set_ylabel(ylabel)
plt.show()
fig.savefig(filename, bbox_inches='tight')
var_results = pd.read_csv("rolling_cv_wind_raw_var.csv")
evolving_results = pd.read_csv("rolling_cv_wind_raw_emvfts.csv")
mlp_results = pd.read_csv("rolling_cv_wind_raw_mlp_multi.csv")
granular_results = pd.read_csv("rolling_cv_wind_raw_granular.csv")
metric = 'RMSE'
results_data = [evolving_results[metric],var_results[metric], mlp_results[metric], granular_results[metric]]
xticks = ['e-MVFTS','VAR','MLP','FIG-FTS']
ylab = 'RMSE'
createBoxplot("e-mvfts_boxplot_rmse_solar", results_data, xticks, ylab)
pd.options.display.float_format = '{:.2f}'.format
metric = 'RMSE'
rmse_df = pd.DataFrame(columns=['e-MVFTS','VAR','MLP','FIG-FTS'])
rmse_df["e-MVFTS"] = evolving_results[metric]
rmse_df["VAR"] = var_results[metric]
rmse_df["MLP"] = mlp_results[metric]
rmse_df["FIG-FTS"] = granular_results[metric]
rmse_df.std()
metric = 'SMAPE'
results_data = [evolving_results[metric],var_results[metric], mlp_results[metric], granular_results[metric]]
xticks = ['e-MVFTS','VAR','MLP','FIG-FTS']
ylab = 'SMAPE'
createBoxplot("e-mvfts_boxplot_smape_solar", results_data, xticks, ylab)
metric = 'SMAPE'
smape_df = pd.DataFrame(columns=['e-MVFTS','VAR','MLP','FIG-FTS'])
smape_df["e-MVFTS"] = evolving_results[metric]
smape_df["VAR"] = var_results[metric]
smape_df["MLP"] = mlp_results[metric]
smape_df["FIG-FTS"] = granular_results[metric]
smape_df.std()
metric = "RMSE"
data = pd.DataFrame(columns=["VAR", "Evolving", "MLP", "Granular"])
data["VAR"] = var_results[metric]
data["Evolving"] = evolving_results[metric]
data["MLP"] = mlp_results[metric]
data["Granular"] = granular_results[metric]
ax = data.plot(figsize=(18,6))
ax.set(xlabel='Window', ylabel=metric)
fig = ax.get_figure()
#fig.savefig(path_images + exp_id + "_prequential.png")
x = np.arange(len(data.columns.values))
names = data.columns.values
values = data.mean().values
plt.figure(figsize=(5,6))
plt.bar(x, values, align='center', alpha=0.5, width=0.9)
plt.xticks(x, names)
#plt.yticks(np.arange(0, 1.1, 0.1))
plt.ylabel(metric)
#plt.savefig(path_images + exp_id + "_bars.png")
metric = "SMAPE"
data = pd.DataFrame(columns=["VAR", "Evolving", "MLP", "Granular"])
data["VAR"] = var_results[metric]
data["Evolving"] = evolving_results[metric]
data["MLP"] = mlp_results[metric]
data["Granular"] = granular_results[metric]
ax = data.plot(figsize=(18,6))
ax.set(xlabel='Window', ylabel=metric)
fig = ax.get_figure()
#fig.savefig(path_images + exp_id + "_prequential.png")
x = np.arange(len(data.columns.values))
names = data.columns.values
values = data.mean().values
plt.figure(figsize=(5,6))
plt.bar(x, values, align='center', alpha=0.5, width=0.9)
plt.xticks(x, names)
#plt.yticks(np.arange(0, 1.1, 0.1))
plt.ylabel(metric)
#plt.savefig(path_images + exp_id + "_bars.png")
```
# Trade-off between classification accuracy and reconstruction error during dimensionality reduction
- Low-dimensional LSTM representations preserve class information well during dimensionality reduction, but are poor at reconstructing the original data
- On the other hand, PCs are excellent at reconstructing the original data, but these high-variance components do not preserve class information (a small synthetic sketch of the reconstruction side of this trade-off follows this list)
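To make the reconstruction side of the trade-off concrete before loading the saved results, here is a small self-contained sketch on synthetic data. It is an illustration only (it does not use the clip data or the GRU results analysed below), and the data shape and dimensionalities are assumptions chosen to mirror `args.dims`:
```
# Illustrative only: synthetic data, not the clip data used in this notebook.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X = rng.randn(500, 300)             # 500 samples, 300 features (assumed shape)
for k in (3, 4, 5, 10):             # same dimensionalities as args.dims below
    pca = PCA(n_components=k).fit(X)
    X_rec = pca.inverse_transform(pca.transform(X))
    mse = np.mean((X - X_rec) ** 2)
    var = pca.explained_variance_ratio_.sum()
    print(f'k={k}: variance captured={var:.3f}, reconstruction mse={mse:.3f}')
```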
```
import numpy as np
import pandas as pd
import scipy as sp
import pickle
import os
import random
import sys
# visualizations
from _plotly_future_ import v4_subplots
import plotly.offline as py
py.init_notebook_mode(connected=True)
import plotly.graph_objs as go
import plotly.subplots as tls
import plotly.figure_factory as ff
import plotly.io as pio
import plotly.express as px
pio.templates.default = 'plotly_white'
pio.orca.config.executable = '/home/joyneelm/fire/bin/orca'
colors = px.colors.qualitative.Plotly
class ARGS():
roi = 300
net = 7
subnet = 'wb'
train_size = 100
batch_size = 32
num_epochs = 50
zscore = 1
#gru
k_hidden = 32
k_layers = 1
dims = [3, 4, 5, 10]
args = ARGS()
def _get_results(k_dim):
RES_DIR = 'results/clip_gru_recon'
load_path = (RES_DIR +
'/roi_%d_net_%d' %(args.roi, args.net) +
'_trainsize_%d' %(args.train_size) +
'_k_hidden_%d' %(args.k_hidden) +
'_kdim_%d' %(k_dim) +
'_k_layers_%d' %(args.k_layers) +
'_batch_size_%d' %(args.batch_size) +
'_num_epochs_45' +
'_z_%d.pkl' %(args.zscore))
with open(load_path, 'rb') as f:
results = pickle.load(f)
# print(results.keys())
return results
r = {}
for k_dim in args.dims:
r[k_dim] = _get_results(k_dim)
def _plot_fig(ss):
title_text = ss
if ss=='var':
ss = 'mse'
invert = True
else:
invert = False
subplot_titles = ['train', 'test']
fig = tls.make_subplots(rows=1,
cols=2,
subplot_titles=subplot_titles,
print_grid=False)
for ii, x in enumerate(['train', 'test']):
gru_score = {'mean':[], 'ste':[]}
pca_score = {'mean':[], 'ste':[]}
for k_dim in args.dims:
a = r[k_dim]
# gru decoder
y = np.mean(a['%s_%s'%(x, ss)])
gru_score['mean'].append(y)
# pca decoder
y = np.mean(a['%s_pca_%s'%(x, ss)])
pca_score['mean'].append(y)
x = np.arange(len(args.dims))
if invert:
y = 1 - np.array(gru_score['mean'])
else:
y = gru_score['mean']
error_y = gru_score['ste']
trace = go.Bar(x=x, y=y,
name='lstm decoder',
marker_color=colors[0])
fig.add_trace(trace, 1, ii+1)
if invert:
y = 1 - np.array(pca_score['mean'])
else:
y = pca_score['mean']
error_y = pca_score['ste']
trace = go.Bar(x=x, y=y,
name='pca recon',
marker_color=colors[1])
fig.add_trace(trace, 1, ii+1)
fig.update_xaxes(tickvals=np.arange(len(args.dims)),
ticktext=args.dims)
fig.update_layout(height=350, width=700,
title_text=title_text)
return fig
```
## Mean-squared error vs number of dimensions
```
'''
mse
'''
ss = 'mse'
fig = _plot_fig(ss)
fig.show()
```
## Variance captured vs number of dimensions
```
'''
variance
'''
ss = 'var'
fig = _plot_fig(ss)
fig.show()
```
## R-squared vs number of dimensions
```
'''
r2
'''
ss = 'r2'
fig = _plot_fig(ss)
fig.show()
results = r[10]
# variance not captured by pca recon
pca_not = 1 - np.sum(results['pca_var'])
print('percent variance captured by pca components = %0.3f' %(1 - pca_not))
# this is proportional to pca mse
pca_mse = results['test_pca_mse']
# variance not captured by lstm decoder?
lstm_mse = results['test_mse']
lstm_not = lstm_mse*(pca_not/pca_mse)
print('percent variance captured by lstm recon = %0.3f' %(1 - lstm_not))
def _plot_fig_ext(ss):
title_text = ss
if ss=='var':
ss = 'mse'
invert = True
else:
invert = False
subplot_titles = ['train', 'test']
fig = go.Figure()
x = 'test'
lstm_score = {'mean':[], 'ste':[]}
pca_score = {'mean':[], 'ste':[]}
lstm_acc = {'mean':[], 'ste':[]}
pc_acc = {'mean':[], 'ste':[]}
for k_dim in args.dims:
a = r[k_dim]
# lstm encoder
k_sub = len(a['test'])
y = np.mean(a['test'])
error_y = 3/np.sqrt(k_sub)*np.std(a['test'])
lstm_acc['mean'].append(y)
lstm_acc['ste'].append(error_y)
# lstm decoder
y = np.mean(a['%s_%s'%(x, ss)])
lstm_score['mean'].append(y)
lstm_score['ste'].append(error_y)
# pca encoder
b = r_pc[k_dim]
y = np.mean(b['test'])
error_y = 3/np.sqrt(k_sub)*np.std(b['test'])
pc_acc['mean'].append(y)
pc_acc['ste'].append(error_y)
# pca decoder
y = np.mean(a['%s_pca_%s'%(x, ss)])
pca_score['mean'].append(y)
pca_score['ste'].append(error_y)
x = np.arange(len(args.dims))
y = lstm_acc['mean']
error_y = lstm_acc['ste']
trace = go.Bar(x=x, y=y,
name='GRU Accuracy',
error_y=dict(type='data',
array=error_y),
marker_color=colors[3])
fig.add_trace(trace)
y = pc_acc['mean']
error_y = pc_acc['ste']
trace = go.Bar(x=x, y=y,
name='PCA Accuracy',
error_y=dict(type='data',
array=error_y),
marker_color=colors[4])
fig.add_trace(trace)
if invert:
y = 1 - np.array(lstm_score['mean'])
else:
y = lstm_score['mean']
error_y = lstm_score['ste']
trace = go.Bar(x=x, y=y,
name='GRU Reconstruction',
error_y=dict(type='data',
array=error_y),
marker_color=colors[5])
fig.add_trace(trace)
if invert:
y = 1 - np.array(pca_score['mean'])
else:
y = pca_score['mean']
error_y = pca_score['ste']
trace = go.Bar(x=x, y=y,
name='PCA Reconstruction',
error_y=dict(type='data',
array=error_y),
marker_color=colors[2])
fig.add_trace(trace)
fig.update_yaxes(title=dict(text='Accuracy or % variance',
font_size=20),
gridwidth=1, gridcolor='#bfbfbf',
tickfont=dict(size=20))
fig.update_xaxes(title=dict(text='Number of dimensions',
font_size=20),
tickvals=np.arange(len(args.dims)),
ticktext=args.dims,
tickfont=dict(size=20))
fig.update_layout(height=470, width=570,
font_color='black',
legend_orientation='h',
legend_font_size=20,
legend_x=-0.1,
legend_y=-0.3)
return fig
def _get_pc_results(PC_DIR, k_dim):
load_path = (PC_DIR +
'/roi_%d_net_%d' %(args.roi, args.net) +
'_nw_%s' %(args.subnet) +
'_trainsize_%d' %(args.train_size) +
'_kdim_%d_batch_size_%d' %(k_dim, args.batch_size) +
'_num_epochs_%d_z_%d.pkl' %(args.num_epochs, args.zscore))
with open(load_path, 'rb') as f:
results = pickle.load(f)
print(results.keys())
return results
```
## Comparison of LSTM and PCA: classification accuracy and variance captured
```
'''
variance
'''
r_pc = {}
PC_DIR = 'results/clip_pca'
for k_dim in args.dims:
r_pc[k_dim] = _get_pc_results(PC_DIR, k_dim)
colors = px.colors.qualitative.Set3
#colors = ["#D55E00", "#009E73", "#56B4E9", "#E69F00"]
ss = 'var'
fig = _plot_fig_ext(ss)
fig.show()
fig.write_image('figures/fig3c.png')
```
# Controlling Flow with Conditional Statements
Now that you've learned how to create conditional statements, let's learn how to use them to control the flow of our programs. This is done with `if`, `elif`, and `else` statements.
## The `if` Statement
What if we wanted to check whether a number is divisible by 2 and, if so, print that number out? Let's diagram that out.

- Check to see if A is even
- If yes, then print our message: "A is even"
This use case can be translated into an "if" statement. I'm going to write this out in pseudocode, which looks very similar to Python.
```text
if A is even:
print "A is even"
```
```
# Let's translate this into Python code
def check_evenness(A):
if A % 2 == 0:
print(f"A ({A:02}) is even!")
for i in range(1, 11):
check_evenness(i)
# You can do multiple if statements and they're executed sequentially
A = 10
if A > 0:
print('A is positive')
if A % 2 == 0:
print('A is even!')
```
## The `else` Statement
But what if we wanted to know if the number was even OR odd? Let's diagram that out:

Again, translating this to pseudocode, we're going to use the 'else' statement:
```text
if A is even:
print "A is even"
else:
print "A is odd"
```
```
# Let's translate this into Python code
def check_evenness(A):
if A % 2 == 0:
print(f"A ({A:02}) is even!")
else:
print(f'A ({A:02}) is odd!')
for i in range(1, 11):
check_evenness(i)
```
## The 'else if' or `elif` Statement
What if we wanted to check if A is divisible by 2 or 3? Let's diagram that out:

Again, translating this into pseudocode, we're going to use the 'else if' statement.
```text
if A is divisible by 2:
print "2 divides A"
else if A is divisible by 3:
print "3 divides A"
else
print "2 and 3 don't divide A"
```
```
# Let's translate this into Python code
def check_divisible_by_2_and_3(A):
if A % 2 == 0:
print(f"2 divides A ({A:02})!")
# else if in Python is elif
elif A % 3 == 0:
print(f'3 divides A ({A:02})!')
else:
        print(f'A ({A:02}) is not divisible by 2 or 3!')
for i in range(1, 11):
check_divisible_by_2_and_3(i)
```
## Order Matters
When chaining conditionals, you need to be careful how you order them. For example, what if we wanted to check if a number is divisible by 2, 3, or both:

```
# Let's translate this into Python code
def check_divisible_by_2_and_3(A):
if A % 2 == 0:
print(f"2 divides A ({A:02})!")
elif A % 3 == 0:
print(f'3 divides A ({A:02})!')
elif A % 2 == 0 and A % 3 == 0:
print(f'2 and 3 divides A ({A:02})!')
else:
print(f"2 or 3 doesn't divide A ({A:02})")
for i in range(1, 11):
check_divisible_by_2_and_3(i)
```
Wait! We would expect 6, which is divisible by both 2 and 3, to show that! Looking back at the graphic, we can see that the flow checks for 2 first, and since that's true we follow that path first. Let's make a correction to our diagram to fix this:

```
# Let's translate this into Python code
def check_divisible_by_2_and_3(A):
if A % 2 == 0 and A % 3 == 0:
print(f'2 and 3 divides A ({A:02})!')
elif A % 3 == 0:
print(f'3 divides A ({A:02})!')
elif A % 2 == 0:
print(f"2 divides A ({A:02})!")
else:
print(f"2 or 3 doesn't divide A ({A:02})")
for i in range(1, 11):
check_divisible_by_2_and_3(i)
```
**NOTE:** Always put your most restrictive conditional at the top of your if statements and then work your way down to the least restrictive.

## In-Class Assignments
- Create a function that takes two input variables `A` and `divisor`. Check if `divisor` divides into `A`. If it does, print `"<value of A> is divided by <value of divisor>"`. Don't forget about the `in` operator that checks if a substring is in another string.
- Create a function that takes an input variable `A` which is a string. Check if `A` has the substring `apple`, `peach`, or `blueberry` in it. Print out which of these are found within the string. Note: you could do this using just if/elif/else statements, but is there a better way using lists, for loops, and if/elif/else statements?
## Solutions
```
def is_divisible(A, divisor):
if A % divisor == 0:
print(f'{A} is divided by {divisor}')
A = 37
# this is actually a crude way to find if the number is prime
for i in range(2, int(A / 2)):
is_divisible(A, i)
# notice that nothing was printed? That's because 37 is prime
B = 27
for i in range(2, int(B / 2)):
is_divisible(B, i)
# this is ONE solution. There are others out there, and probably
# better ones too
def check_for_fruit(A):
found_fruit = []
if 'apple' in A:
found_fruit.append('apple')
if 'peach' in A:
found_fruit.append('peach')
if 'blueberry' in A:
found_fruit.append('blueberry')
found_fruit_str = ''
for fruit in found_fruit:
found_fruit_str += fruit
found_fruit_str += ', '
if len(found_fruit) > 0:
print(found_fruit_str + ' is found within the string')
else:
print('No fruit found in the string')
check_for_fruit('there are apples and peaches in this pie')
```
# BERT finetuning on AG_news-4
## Library
```
# !pip install transformers==4.8.2
# !pip install datasets==1.7.0
import os
import time
import pickle
import numpy as np
import torch
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, recall_score, precision_score, f1_score
from transformers import BertTokenizer, BertTokenizerFast
from transformers import BertForSequenceClassification, AdamW
from transformers import Trainer, TrainingArguments
from transformers import EarlyStoppingCallback
from transformers.data.data_collator import DataCollatorWithPadding
from datasets import load_dataset, Dataset, concatenate_datasets
# print(torch.__version__)
# print(torch.cuda.device_count())
# print(torch.cuda.is_available())
# print(torch.cuda.get_device_name(0))
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
# if torch.cuda.is_available():
# torch.set_default_tensor_type('torch.cuda.FloatTensor')
device
```
## Global variables
```
BATCH_SIZE = 24
NB_EPOCHS = 4
RESULTS_FILE = os.path.expanduser('~/Results/BERT_finetune/ag_news-4_BERT_finetune_b'+str(BATCH_SIZE)+'_results.pkl')
RESULTS_PATH = os.path.expanduser('~/Results/BERT_finetune/ag_news-4_b'+str(BATCH_SIZE)+'/')
CACHE_DIR = os.path.expanduser('~/Data/huggignface/') # path of your folder
```
## Dataset
```
# download dataset
raw_datasets = load_dataset('ag_news', cache_dir=CACHE_DIR)
# tokenize
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
def tokenize_function(examples):
return tokenizer(examples["text"], padding=True, truncation=True)
tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)
tokenized_datasets.set_format(type='torch', columns=['input_ids', 'attention_mask', 'label'])
train_dataset = tokenized_datasets["train"].shuffle(seed=42)
train_val_datasets = train_dataset.train_test_split(train_size=0.8)
train_dataset = train_val_datasets['train'].rename_column('label', 'labels')
val_dataset = train_val_datasets['test'].rename_column('label', 'labels')
test_dataset = tokenized_datasets["test"].shuffle(seed=42).rename_column('label', 'labels')
# get number of labels
num_labels = len(set(train_dataset['labels'].tolist()))
num_labels
```
## Model
#### Model
```
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=num_labels)
model.to(device)
```
#### Training
```
training_args = TrainingArguments(
# output
output_dir=RESULTS_PATH,
# params
num_train_epochs=NB_EPOCHS, # nb of epochs
per_device_train_batch_size=BATCH_SIZE, # batch size per device during training
per_device_eval_batch_size=BATCH_SIZE, # cf. paper Sun et al.
learning_rate=2e-5, # cf. paper Sun et al.
# warmup_steps=500, # number of warmup steps for learning rate scheduler
warmup_ratio=0.1, # cf. paper Sun et al.
weight_decay=0.01, # strength of weight decay
# # eval
evaluation_strategy="steps",
eval_steps=50,
# evaluation_strategy='no', # no more evaluation, takes time
# log
logging_dir=RESULTS_PATH+'logs',
logging_strategy='steps',
logging_steps=50,
# save
# save_strategy='epoch',
# save_strategy='steps',
# load_best_model_at_end=False
load_best_model_at_end=True # cf. paper Sun et al.
)
def compute_metrics(p):
pred, labels = p
pred = np.argmax(pred, axis=1)
accuracy = accuracy_score(y_true=labels, y_pred=pred)
return {"val_accuracy": accuracy}
trainer = Trainer(
model=model,
args=training_args,
tokenizer=tokenizer,
train_dataset=train_dataset,
eval_dataset=val_dataset,
# compute_metrics=compute_metrics,
# callbacks=[EarlyStoppingCallback(early_stopping_patience=5)]
)
results = trainer.train()
training_time = results.metrics["train_runtime"]
training_time_per_epoch = training_time / training_args.num_train_epochs
training_time_per_epoch
trainer.save_model(os.path.join(RESULTS_PATH, 'best_model-0'))
```
## Results
```
results_d = {}
epoch = 1
ordered_files = sorted( [f for f in os.listdir(RESULTS_PATH)
if (not f.endswith("logs")) and (f.startswith("best")) # best model eval only
],
key=lambda x: int(x.split('-')[1]) )
for filename in ordered_files:
print(filename)
# load model
model_file = os.path.join(RESULTS_PATH, filename)
finetuned_model = BertForSequenceClassification.from_pretrained(model_file, num_labels=num_labels)
finetuned_model.to(device)
finetuned_model.eval()
# compute test acc
test_trainer = Trainer(finetuned_model, data_collator=DataCollatorWithPadding(tokenizer))
raw_preds, labels, _ = test_trainer.predict(test_dataset)
preds = np.argmax(raw_preds, axis=1)
test_acc = accuracy_score(y_true=labels, y_pred=preds)
# results_d[filename] = (test_acc, training_time_per_epoch*epoch)
results_d[filename] = test_acc # best model evaluation only
print((test_acc, training_time_per_epoch*epoch))
epoch += 1
results_d['training_time'] = training_time
# save results
with open(RESULTS_FILE, 'wb') as fh:
pickle.dump(results_d, fh)
# load results
with open(RESULTS_FILE, 'rb') as fh:
results_d = pickle.load(fh)
results_d
```
# Graphs from the presentation
```
import matplotlib.pyplot as plt
%matplotlib notebook
# create a new figure
plt.figure()
# create x and y coordinates via lists
x = [99, 19, 88, 12, 95, 47, 81, 64, 83, 76]
y = [43, 18, 11, 4, 78, 47, 77, 70, 21, 24]
# scatter the points onto the figure
plt.scatter(x, y)
# create a new figure
plt.figure()
# create x and y values via lists
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [1, 4, 9, 16, 25, 36, 49, 64]
# plot the line
plt.plot(x, y)
# create a new figure
plt.figure()
# create a list of observations
observations = [5.24, 3.82, 3.73, 5.3 , 3.93, 5.32, 6.43, 4.4 , 5.79, 4.05, 5.34, 5.62, 6.02, 6.08, 6.39, 5.03, 5.34, 4.98, 3.84, 4.91, 6.62, 4.66, 5.06, 2.37, 5. , 3.7 , 5.22, 5.86, 3.88, 4.68, 4.88, 5.01, 3.09, 5.38, 4.78, 6.26, 6.29, 5.77, 4.33, 5.96, 4.74, 4.54, 7.99, 5. , 4.85, 5.68, 3.73, 4.42, 4.99, 4.47, 6.06, 5.88, 4.56, 5.37, 6.39, 4.15]
# create a histogram with 15 intervals
plt.hist(observations, bins=15)
# create a new figure
plt.figure()
# plot a red line with a transparancy of 40%. Label this 'line 1'
plt.plot(x, y, color='red', alpha=0.4, label='line 1')
# make a key appear on the plot
plt.legend()
# import pandas
import pandas as pd
# read in data from a csv
data = pd.read_csv('data/weather.csv', parse_dates=['Date'])
# create a new matplotlib figure
plt.figure()
# plot the temperature over time
plt.plot(data['Date'], data['Temp (C)'])
# add a ylabel
plt.ylabel('Temperature (C)')
plt.figure()
# create inputs
x = ['UK', 'France', 'Germany', 'Spain', 'Italy']
y = [67.5, 65.1, 83.5, 46.7, 60.6]
# plot the chart
plt.bar(x, y)
plt.ylabel('Population (M)')
plt.figure()
# create inputs
x = ['UK', 'France', 'Germany', 'Spain', 'Italy']
y = [67.5, 65.1, 83.5, 46.7, 60.6]
# create a list of colours
colour = ['red', 'green', 'blue', 'orange', 'purple']
# plot the chart with the colors and transparancy
plt.bar(x, y, color=colour, alpha=0.5)
plt.ylabel('Population (M)')
plt.figure()
x = [1, 2, 3, 4, 5, 6, 7, 8, 9]
y1 = [2, 4, 6, 8, 10, 12, 14, 16, 18]
y2 = [4, 8, 12, 16, 20, 24, 28, 32, 36]
plt.scatter(x, y1, color='cyan', s=5)
plt.scatter(x, y2, color='violet', s=15)
plt.figure()
x = [1, 2, 3, 4, 5, 6, 7, 8, 9]
y1 = [2, 4, 6, 8, 10, 12, 14, 16, 18]
y2 = [4, 8, 12, 16, 20, 24, 28, 32, 36]
size1 = [10, 20, 30, 40, 50, 60, 70, 80, 90]
size2 = [90, 80, 70, 60, 50, 40, 30, 20, 10]
plt.scatter(x, y1, color='cyan', s=size1)
plt.scatter(x, y2, color='violet', s=size2)
co2_file = '../5. Examples of Visual Analytics in Python/data/national/co2_emissions_tonnes_per_person.csv'
gdp_file = '../5. Examples of Visual Analytics in Python/data/national/gdppercapita_us_inflation_adjusted.csv'
pop_file = '../5. Examples of Visual Analytics in Python/data/national/population.csv'
co2_per_cap = pd.read_csv(co2_file, index_col=0, parse_dates=True)
gdp_per_cap = pd.read_csv(gdp_file, index_col=0, parse_dates=True)
population = pd.read_csv(pop_file, index_col=0, parse_dates=True)
plt.figure()
x = gdp_per_cap.loc['2017'] # gdp in 2017
y = co2_per_cap.loc['2017'] # co2 emmissions in 2017
# population in 2017 will give size of points (divide pop by 1M)
size = population.loc['2017'] / 1e6
# scatter points with vector size and some transparancy
plt.scatter(x, y, s=size, alpha=0.5)
# set a log-scale
plt.xscale('log')
plt.yscale('log')
plt.xlabel('GDP per capita, $US')
plt.ylabel('CO2 emissions per person per year, tonnes')
plt.figure()
# create grid of numbers
grid = [[1, 2, 3],
[4, 5, 6],
[7, 8, 9]]
# plot the grid with 'autumn' color map
plt.imshow(grid, cmap='autumn')
# add a colour key
plt.colorbar()
import pandas as pd
data = pd.read_csv("../5. Examples of Visual Analytics in Python/data/stocks/FTSE_stock_prices.csv", index_col=0)
correlation_matrix = data.pct_change().corr()
# create a new figure
plt.figure()
# imshow the grid of correlation
plt.imshow(correlation_matrix, cmap='terrain')
# add a color bar
plt.colorbar()
# remove cluttering x and y ticks
plt.xticks([])
plt.yticks([])
elevation = pd.read_csv('data/UK_elevation.csv', index_col=0)
# create figure
plt.figure()
# imshow data
plt.imshow(elevation, # grid data
vmin=-50, # minimum for colour bar
vmax=500, # maximum for colour bar
cmap='terrain', # terrain style colour map
extent=[-11, 3, 50, 60]) # [x1, x2, y1, y2] plot boundaries
# add axis labels and a title
plt.xlabel('Longitude')
plt.ylabel('Latitude')
plt.title('UK Elevation Profile')
# add a colourbar
plt.colorbar()
```
# BLU15 - Model CSI
## Intro:
It often happens that your data distribution changes with time.
More than that, sometimes you don't know how a model was trained or what the original training data was.
In this learning unit we're going to try to identify whether an existing model meets our expectations and redeploy it.
## Problem statement:
As an example, we're going to use the same problem that you met in the last BLU.
You're already familiar with the problem, but just as a reminder:
> The police department has received lots of complaints about its stop and search policy. Every time a car is stopped, the police officers have to decide whether or not to search the car for contraband. According to critics, these searches have a bias against people of certain backgrounds.
You got a model from your client, and **here is the model's description:**
> It's a LightGBM model (LGBMClassifier) trained on the following features:
> - Department Name
> - InterventionLocationName
> - InterventionReasonCode
> - ReportingOfficerIdentificationID
> - ResidentIndicator
> - SearchAuthorizationCode
> - StatuteReason
> - SubjectAge
> - SubjectEthnicityCode
> - SubjectRaceCode
> - SubjectSexCode
> - TownResidentIndicator
> All the categorical features were one-hot encoded. The only numerical feature (SubjectAge) was left unchanged. The rows that contain rare categorical values (the ones that appear fewer than N times in the dataset) were removed. Check the original_model.ipynb notebook for more details.
P.S. If you've never heard of LightGBM, XGBoost and other gradient boosting models, I highly recommend reading this [article](https://mlcourse.ai/articles/topic10-boosting/) or watching these videos: [part1](https://www.youtube.com/watch?v=g0ZOtzZqdqk), [part2](https://www.youtube.com/watch?v=V5158Oug4W8)
It's not essential for this BLU, so you might leave this link as a dessert for after you go through the learning materials and solve the exercises, but these are very good models you can use later on, so I suggest reading about them.
**Here are the requirements that the police department created:**
> - A minimum 50% success rate for searches (when a car is searched, it should be at least 50% likely that contraband is found)
> - No police sub-department should have a discrepancy bigger than 5% between the search success rate between protected classes (race, ethnicity, gender)
> - The largest possible amount of contraband found, given the constraints above.
**And here is the description of how the current model succeeds with the requirements:**
- precision score = 50%
- recall = 89.3%
- roc_auc_score for the probability predictions = 82.7%
The precision and recall above are met for probability predictions with a specified threshold equal to **0.21073452797732833**
It's not said whether the second requirement is met, and as it was not met in the previous learning unit, let's ignore it for now.
## Model diagnosing:
Let's firstly try to compare these models to the ones that we created in the previous BLU:
| Model | Baseline | Second iteration | New model | Best model |
|-------------------|---------|--------|--------|--------|
| Requirement 1 - success rate | 0.53 | 0.38 | 0.5 | 1 |
| Requirement 2 - global discrimination (race) | 0.105 | 0.11 | NaN | 1 |
| Requirement 2 - global discrimination (sex) | 0.012 | 0.014 | NaN | 1 |
| Requirement 2 - global discrimination (ethnicity) | 0.114 | 0.101 | NaN | 2 |
| Requirement 2 - # department discrimination (race) | 27 | 17 | NaN | 2 |
| Requirement 2 - # department discrimination (sex) | 19 | 23 | NaN | 1 |
| Requirement 2 - # department discrimination (ethnicity) | 24 | NaN | 23 | 2 |
| Requirement 3 - contraband found (Recall) | 0.65 | 0.76 | 0.893 | 3 |
As we can see, the last model has exactly the required success rate (Requirement 1) and a very good recall (Requirement 3).
But it might be risky to rely on such a specific threshold, as we might end up with a success rate below 0.5 really quickly. It might be a better idea to use a bigger threshold (e.g. 0.25), but let's see.
Let's imagine that the model was trained a long time ago.
And now you're in the future trying to evaluate the model, because things might have changed. Data distribution is not always the same, so something that used to work even a year ago could be completely wrong today.
Especially in 2020!
<img src="media/future_2020.jpg" width=400/>
First of all, let's start the server which is running this model.
Open the shell,
```sh
python protected_server.py
```
And read a csv file with new observations from 2020:
```
import joblib
import pandas as pd
import json
import pickle
from sklearn.metrics import precision_score, recall_score, roc_auc_score
from sklearn.metrics import confusion_matrix
import requests
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from sklearn.metrics import precision_recall_curve
%matplotlib inline
df = pd.read_csv('./data/new_observations.csv')
df.head()
```
Let's start by sending all those requests and comparing the model's predictions with the target values.
The model API is already prepared to convert our observations to the format it expects; the only thing we need to change is making the department and intervention location names lowercase, and then we're good to extract the fields from the dataframe and put them into the POST request.
```
# lowercase department and location names
df['Department Name'] = df['Department Name'].apply(lambda x: str(x).lower())
df['InterventionLocationName'] = df['InterventionLocationName'].apply(lambda x: str(x).lower())
url = "http://127.0.0.1:5000/predict"
headers = {'Content-Type': 'application/json'}
def send_request(index: int, obs: dict, url: str, headers: dict):
observation = {
"id": index,
"observation": {
"Department Name": obs["Department Name"],
"InterventionLocationName": obs["InterventionLocationName"],
"InterventionReasonCode": obs["InterventionReasonCode"],
"ReportingOfficerIdentificationID": obs["ReportingOfficerIdentificationID"],
"ResidentIndicator": obs["ResidentIndicator"],
"SearchAuthorizationCode": obs["SearchAuthorizationCode"],
"StatuteReason": obs["StatuteReason"],
"SubjectAge": obs["SubjectAge"],
"SubjectEthnicityCode": obs["SubjectEthnicityCode"],
"SubjectRaceCode": obs["SubjectRaceCode"],
"SubjectSexCode": obs["SubjectSexCode"],
"TownResidentIndicator": obs["TownResidentIndicator"]
}
}
r = requests.post(url, data=json.dumps(observation), headers=headers)
result = json.loads(r.text)
return result
responses = [send_request(i, obs, url, headers) for i, obs in df.iterrows()]
print(responses[0])
df['proba'] = [r['proba'] for r in responses]
threshold = 0.21073452797732833
# we're going to use the threshold we got from the client
df['prediction'] = [1 if p >= threshold else 0 for p in df['proba']]
```
**NOTE:** We could also have loaded the model file and made predictions locally, without using the API (a hypothetical sketch of this follows the list below), but:
1. I wanted to show you how you might send requests in a similar situation
2. If you have a running API and some model file, you always need to understand how the API works (whether it does any kind of data preprocessing), which can sometimes be complicated; and if you're trying to analyze the model running in production, you still need to make sure that your local predictions are equal to the ones the production API produces.
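For reference, a minimal sketch of that local alternative is below. The file name, the assumption that the pickled object is a full pipeline bundling its own preprocessing, and the column selection are all guesses about how the client packaged the model, which is why the lines are left commented out:
```
# Hypothetical local scoring; 'pipeline.pkl' and the bundled preprocessing are
# assumptions about the client's packaging, not something provided in this unit.
# pipeline = joblib.load('pipeline.pkl')
# raw_features = [c for c in df.columns if c not in ('ContrabandIndicator', 'proba', 'prediction')]
# df['proba_local'] = pipeline.predict_proba(df[raw_features])[:, 1]
# (df['proba_local'] - df['proba']).abs().max()  # should be ~0 if it matches the API
```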
```
confusion_matrix(df['ContrabandIndicator'], df['prediction'])
```
If you're not familiar with confusion matrices, **here is an explanation of the values:**
<img src="./media/confusion_matrix.jpg" alt="drawing" width="500"/>
These values don't seem to be good. Let's once again take a look at the client's requirements and see if we still meet them:
> A minimum 50% success rate for searches (when a car is searched, it should be at least 50% likely that contraband is found)
```
def verify_success_rate_above(y_true, y_pred, min_success_rate=0.5):
"""
Verifies the success rate on a test set is above a provided minimum
"""
precision = precision_score(y_true, y_pred, pos_label=True)
is_satisfied = (precision >= min_success_rate)
return is_satisfied, precision
verify_success_rate_above(df['ContrabandIndicator'], df['prediction'], 0.5)
```

> The largest possible amount of contraband found, given the constraints above.
As the client says, their model recall was 0.893. And what now?
```
def verify_amount_found(y_true, y_pred):
"""
Verifies the amout of contraband found in the test dataset - a.k.a the recall in our test set
"""
recall = recall_score(y_true, y_pred, pos_label=True)
return recall
verify_amount_found(df['ContrabandIndicator'], df['prediction'])
```
<img src="./media/no_please_2.jpg" alt="drawing" width="500"/>
Okay, relax, it happens. Let's start by checking different thresholds. Maybe the selected threshold was too specific and doesn't work anymore.
What about 0.25?
```
threshold = 0.25
df['prediction'] = [1 if p >= threshold else 0 for p in df['proba']]
verify_success_rate_above(df['ContrabandIndicator'], df['prediction'], 0.5)
verify_amount_found(df['ContrabandIndicator'], df['prediction'])
```
<img src="./media/poker.jpg" alt="drawing" width="200"/>
Okay, let's use the same technique they originally used to identify the best threshold. Maybe we'll find something good enough.
It's not a good idea to verify such things on the test data, but we're going to use it just to confirm the model's performance, not to select the threshold.
```
precision, recall, thresholds = precision_recall_curve(df['ContrabandIndicator'], df['proba'])
precision = precision[:-1]
recall = recall[:-1]
fig=plt.figure()
ax1 = plt.subplot(211)
ax2 = plt.subplot(212)
ax1.hlines(y=0.5,xmin=0, xmax=1, colors='red')
ax1.plot(thresholds,precision)
ax2.plot(thresholds,recall)
ax1.get_shared_x_axes().join(ax1, ax2)
ax1.set_xticklabels([])
plt.xlabel('Threshold')
ax1.set_title('Precision')
ax2.set_title('Recall')
plt.show()
```
So what do we see? There is some threshold value (around 0.6) that gives us precision >= 0.5.
But that threshold is so high that the recall at this point is really low.
Let's calculate the exact values:
```
min_index = [i for i, prec in enumerate(precision) if prec >= 0.5][0]
print(min_index)
thresholds[min_index]
precision[min_index]
recall[min_index]
```
<img src="./media/incredible.jpg" alt="drawing" width="400"/>
Before we move on, we need to understand why this happens, so that we can decide what kind of action to perform.
Let's try to analyze the changes in data and discuss different things we might want to do.
```
old_df = pd.read_csv('./data/train_searched.csv')
old_df.head()
```
We're going to apply the same preprocessing to this dataset as in the original model notebook, to understand what the original data looked like and how the current dataset differs.
```
old_df = old_df[(old_df['VehicleSearchedIndicator']==True)]
# lowercase department and location names
old_df['Department Name'] = old_df['Department Name'].apply(lambda x: str(x).lower())
old_df['InterventionLocationName'] = old_df['InterventionLocationName'].apply(lambda x: str(x).lower())
train_features = old_df.columns.drop(['VehicleSearchedIndicator', 'ContrabandIndicator'])
categorical_features = train_features.drop(['InterventionDateTime', 'SubjectAge'])
numerical_features = ['SubjectAge']
target = 'ContrabandIndicator'
# I'm going to remove rows with less common categorical values.
# Let's create a dictionary with the minimum required number of appearances
min_frequency = {
"Department Name": 50,
"InterventionLocationName": 50,
"ReportingOfficerIdentificationID": 30,
"StatuteReason": 10
}
def filter_values(df: pd.DataFrame, column_name: str, threshold: int):
value_counts = df[column_name].value_counts()
to_keep = value_counts[value_counts > threshold].index
filtered = df[df[column_name].isin(to_keep)]
return filtered
for feature, threshold in min_frequency.items():
old_df = filter_values(old_df, feature, threshold)
old_df.shape
old_df.head()
old_df['ContrabandIndicator'].value_counts(normalize=True)
df['ContrabandIndicator'].value_counts(normalize=True)
```
Looks like we have a bit more contraband now, and that's already a useful clue:
if the training data had a different target distribution than the test set, the model's predictions might have a different distribution as well. It's good practice to have the same target distribution in both the training and test sets.
Let's investigate further
```
new_department_names = df['Department Name'].unique()
old_department_names = old_df['Department Name'].unique()
unknown_departments = [department for department in new_department_names if department not in old_department_names]
len(unknown_departments)
df[df['Department Name'].isin(unknown_departments)].shape
```
So we have 10 departments that the original model was not trained on, but they account for only 23 rows of the test set.
Let's repeat the same thing for the Intervention Location names
```
new_location_names = df['InterventionLocationName'].unique()
old_location_names = old_df['InterventionLocationName'].unique()
unknown_locations = [location for location in new_location_names if location not in old_location_names]
len(unknown_locations)
df[df['InterventionLocationName'].isin(unknown_locations)].shape[0]
print('unknown locations: ', df[df['InterventionLocationName'].isin(unknown_locations)].shape[0] * 100 / df.shape[0], '%')
```
Alright, a few more unknown locations.
We don't know how important this feature was for the model, so these 5.3% of unknown locations may or may not matter.
But it's worth keeping in mind.
**Here are a few ideas of what we could try to do:**
1. Reanalyze the location filtering, e.g. filter out more of the rare ones.
2. Create a new category for the rare locations (see the sketch after this list).
3. Check the unknown locations for typos.
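A minimal sketch of idea 2, assuming the same dataframes and the `InterventionLocationName` column used above; the threshold and the `'other_location'` label are illustrative choices, not values from the original pipeline:
```
# Group rare location values into a single catch-all category (sketch only).
def group_rare_values(frame, column_name, threshold=50, other_label='other_location'):
    value_counts = frame[column_name].value_counts()
    rare_values = value_counts[value_counts <= threshold].index
    frame = frame.copy()
    frame.loc[frame[column_name].isin(rare_values), column_name] = other_label
    return frame

old_df_grouped = group_rare_values(old_df, 'InterventionLocationName')
old_df_grouped['InterventionLocationName'].value_counts().tail()
```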
Let's go further and look at the relation between department names and the amount of contraband they find.
We're going to select the most common department names and then compare the contraband-indicator distribution for each one across the training and test sets.
```
common_departments = df['Department Name'].value_counts().head(20).index
departments_new = df[df['Department Name'].isin(common_departments)]
departments_old = old_df[old_df['Department Name'].isin(common_departments)]
pd.crosstab(departments_new['ContrabandIndicator'], departments_new['Department Name'], normalize="columns")
pd.crosstab(departments_old['ContrabandIndicator'], departments_old['Department Name'], normalize="columns")
```
We can clearly see that some departments show a huge shift in the contraband indicator.
E.g. Bridgeport used to have 93% False contraband, and now has only 62%.
The situation is similar for Danbury and New Haven.
Why? Hard to say. There are a lot of variables here. Maybe the departments were instructed on how to look for contraband.
Either way, we might need to retrain the model.
Let's just finish reviewing other columns.
```
common_location = df['InterventionLocationName'].value_counts().head(20).index
locations_new = df[df['InterventionLocationName'].isin(common_location)]
locations_old = old_df[old_df['InterventionLocationName'].isin(common_location)]
pd.crosstab(locations_new['ContrabandIndicator'], locations_new['InterventionLocationName'], normalize="columns")
pd.crosstab(locations_old['ContrabandIndicator'], locations_old['InterventionLocationName'], normalize="columns")
```
What do we see? First of all, the InterventionLocationName and the Department Name are often the same.
That sounds logical, as police officers usually work in the area of their department. We could try to create a feature indicating whether InterventionLocationName is equal to the Department Name.
Or maybe we could just get rid of one of them, if the values are always equal.
What else?
Well, there are changes in the contraband distribution similar to those in the Department Name case.
Let's move on:
```
pd.crosstab(df['ContrabandIndicator'], df['InterventionReasonCode'], normalize="columns")
pd.crosstab(old_df['ContrabandIndicator'], old_df['InterventionReasonCode'], normalize="columns")
```
There are some small changes, but they don't seem significant,
especially since all three values have around 33% contraband.
Time for the officers:
```
df['ReportingOfficerIdentificationID'].value_counts()
filter_values(df, 'ReportingOfficerIdentificationID', 2)['ReportingOfficerIdentificationID'].nunique()
```
Well, it looks like there are a lot of unique values for the officer ID (1166 for 2000 records) and not many common ones (only 206 officers have more than 2 rows in the dataset), so it doesn't make much sense to analyze this column further.
Let's quickly go through the rest of the columns:
```
df.columns
rest = ['ResidentIndicator', 'SearchAuthorizationCode',
'StatuteReason', 'SubjectEthnicityCode',
'SubjectRaceCode', 'SubjectSexCode','TownResidentIndicator']
for col in rest:
display(pd.crosstab(df['ContrabandIndicator'], df[col], normalize="columns"))
display(pd.crosstab(old_df['ContrabandIndicator'], old_df[col], normalize="columns"))
```
We see that all the columns changed, but not as significantly as in the department case.
Anyway, it seems like we need to retrain the model.
<img src="./media/retrain.jpg" alt="drawing" width="400"/>
Retraining a model is always a decision we need to think about.
Was this change in the data permanent, temporary, or seasonal?
In other words, do we expect the data distribution to stay as it is? To change back after Covid? To change from season to season?
**Depending on that, we could retrain the model differently:**
- **If it's seasonality**, we might want to add features like season or month so the same model can predict differently depending on the season. We could also investigate time-series classification algorithms.
- **If it's something that is going to change back**, we might train a separate model for this particular period if the current data distribution changes are temporary. If we expect the data distribution to switch back and forth from time to time (and we know these periods in advance), we could instead create a new feature that helps the model understand which period it is.
> E.g. if we had the task of predicting beer consumption in a city that hosts a lot of football matches, we might add a feature like **football_championship** and make the model predict differently on those occasions.
- **If the data distribution has simply changed and we know it's never going to change back**, we can simply retrain the model.
> But in some cases we have no idea why the changes appeared (e.g. in this case of departments finding more contraband).
- In this case it might be a good idea to train a new model on the new dataset and set up monitoring for these feature distributions, so we can react when things change again.
> So, in our case we don't know the reason for the data distribution changes, and we'd like to train a model on the new dataset.
> The only issue is the size of the dataset. The original dataset had around 50k rows, while our new set has only 2000. That's not enough to train a good model, so this time we're going to combine both datasets and add a new feature that helps the model distinguish between them (a minimal sketch of this step follows below). If we had more data, it would probably be better to train a completely new model.
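A minimal sketch of that combination step, assuming the `old_df` and `df` frames from above; the `is_new_data` column name is a hypothetical choice:
```
# Keep only the columns present in both frames, then concatenate with a period flag
# (sketch only; the 'is_new_data' column name is illustrative).
shared_cols = [c for c in old_df.columns if c in df.columns]
combined = pd.concat(
    [old_df[shared_cols].assign(is_new_data=0),
     df[shared_cols].assign(is_new_data=1)],
    ignore_index=True
)
combined['is_new_data'].value_counts()
```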
And we're done!
<img src="./media/end.jpg" alt="drawing" width="400"/>
# Profiling TensorFlow Multi GPU Multi Node Training Job with Amazon SageMaker Debugger
This notebook will walk you through creating a TensorFlow training job with the SageMaker Debugger profiling feature enabled. It will create a multi GPU multi node training using Horovod.
### (Optional) Install SageMaker and SMDebug Python SDKs
To use the new Debugger profiling features released in December 2020, ensure that you have the latest versions of the SageMaker and SMDebug SDKs installed. The following cell updates the libraries and restarts the Jupyter kernel to apply the updates.
```
import sys
import IPython
install_needed = False # should only be True once
if install_needed:
print("installing deps and restarting kernel")
!{sys.executable} -m pip install -U sagemaker smdebug
IPython.Application.instance().kernel.do_shutdown(True)
```
## 1. Create a Training Job with Profiling Enabled<a class="anchor" id="option-1"></a>
You will use the standard [SageMaker Estimator API for Tensorflow](https://sagemaker.readthedocs.io/en/stable/frameworks/tensorflow/sagemaker.tensorflow.html#tensorflow-estimator) to create training jobs. To enable profiling, create a `ProfilerConfig` object and pass it to the `profiler_config` parameter of the `TensorFlow` estimator.
### Define parameters for distributed training
This parameter tells SageMaker how to configure and run Horovod. If you want to use more than 4 GPUs per node, change the `processes_per_host` parameter accordingly.
```
distributions = {
"mpi": {
"enabled": True,
"processes_per_host": 4,
"custom_mpi_options": "-verbose -x HOROVOD_TIMELINE=./hvd_timeline.json -x NCCL_DEBUG=INFO -x OMPI_MCA_btl_vader_single_copy_mechanism=none",
}
}
```
### Configure rules
We specify the following rules:
- loss_not_decreasing: checks if the loss is decreasing and triggers if the loss has not decreased by a certain percentage over the last few iterations
- LowGPUUtilization: checks if the GPU is under-utilized
- ProfilerReport: runs the entire set of performance rules and creates a final output report with further insights and recommendations
```
from sagemaker.debugger import Rule, ProfilerRule, rule_configs
rules = [
Rule.sagemaker(rule_configs.loss_not_decreasing()),
ProfilerRule.sagemaker(rule_configs.LowGPUUtilization()),
ProfilerRule.sagemaker(rule_configs.ProfilerReport()),
]
```
### Specify a profiler configuration
The following configuration captures system metrics every 500 milliseconds. The system metrics include CPU and GPU utilization, CPU and GPU memory utilization, as well as I/O and network.
Debugger will capture detailed framework profiling information from step 5 to step 15. This information includes Horovod metrics, data loading, preprocessing, and operators running on the CPU and GPU.
```
from sagemaker.debugger import ProfilerConfig, FrameworkProfile
profiler_config = ProfilerConfig(
system_monitor_interval_millis=500,
framework_profile_params=FrameworkProfile(
local_path="/opt/ml/output/profiler/", start_step=5, num_steps=10
),
)
```
### Get the image URI
The image that we will use depends on the region you are running this notebook in.
```
import boto3
session = boto3.session.Session()
region = session.region_name
image_uri = f"763104351884.dkr.ecr.{region}.amazonaws.com/tensorflow-training:2.3.1-gpu-py37-cu110-ubuntu18.04"
```
### Define estimator
To enable profiling, you need to pass the Debugger profiling configuration (`profiler_config`), a list of Debugger rules (`rules`), and the image URI (`image_uri`) to the estimator. Debugger enables monitoring and profiling while the SageMaker estimator requests a training job.
```
import sagemaker
from sagemaker.tensorflow import TensorFlow
estimator = TensorFlow(
role=sagemaker.get_execution_role(),
image_uri=image_uri,
instance_count=2,
instance_type="ml.p3.8xlarge",
entry_point="tf-hvd-train.py",
source_dir="entry_point",
profiler_config=profiler_config,
distribution=distributions,
rules=rules,
)
```
### Start training job
The following `estimator.fit()` with `wait=False` argument initiates the training job in the background. You can proceed to run the dashboard or analysis notebooks.
```
estimator.fit(wait=False)
```
## 2. Analyze Profiling Data
Copy outputs of the following cell (`training_job_name` and `region`) to run the analysis notebooks `profiling_generic_dashboard.ipynb`, `analyze_performance_bottlenecks.ipynb`, and `profiling_interactive_analysis.ipynb`.
```
training_job_name = estimator.latest_training_job.name
print(f"Training jobname: {training_job_name}")
print(f"Region: {region}")
```
While the training is still in progress you can visualize the performance data in SageMaker Studio or in the notebook.
Debugger provides utilities to plot system metrics in the form of timeline charts or heatmaps. Check out the notebook
[profiling_interactive_analysis.ipynb](analysis_tools/profiling_interactive_analysis.ipynb) for more details. In the following code cell we plot the total CPU and GPU utilization as time-series charts. To visualize other metrics such as I/O, memory, and network, you simply need to extend the lists passed to `select_dimensions` and `select_events` (a short sketch follows the plotting cell below).
### Install the SMDebug client library to use Debugger analysis tools
```
import pip
def import_or_install(package):
try:
__import__(package)
except ImportError:
pip.main(["install", package])
import_or_install("smdebug")
```
### Access the profiling data using the SMDebug `TrainingJob` utility class
```
from smdebug.profiler.analysis.notebook_utils.training_job import TrainingJob
tj = TrainingJob(training_job_name, region)
tj.wait_for_sys_profiling_data_to_be_available()
```
### Plot time line charts
The following code shows how to use the SMDebug `TrainingJob` object, refresh the object if new event files are available, and plot time line charts of CPU and GPU usage.
```
from smdebug.profiler.analysis.notebook_utils.timeline_charts import TimelineCharts
system_metrics_reader = tj.get_systems_metrics_reader()
system_metrics_reader.refresh_event_file_list()
view_timeline_charts = TimelineCharts(
system_metrics_reader,
framework_metrics_reader=None,
select_dimensions=["CPU", "GPU"],
select_events=["total"],
)
```
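As noted above, other metrics can be plotted by extending the lists passed to the chart. A sketch, reusing the same reader object; the exact dimension labels accepted (e.g. `"I/O"`) depend on your SMDebug version and are an assumption here:
```
# Sketch: plot additional dimensions alongside CPU/GPU. The "I/O" label is an assumed
# dimension name and may need to be adjusted to match what your SMDebug version reports.
system_metrics_reader.refresh_event_file_list()
view_more_charts = TimelineCharts(
    system_metrics_reader,
    framework_metrics_reader=None,
    select_dimensions=["CPU", "GPU", "I/O"],
    select_events=["total"],
)
```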
## 3. Download Debugger Profiling Report
The `ProfilerReport()` rule creates an HTML report `profiler-report.html` with a summary of the built-in rule results and recommendations for next steps. You can find this report in your S3 bucket.
```
rule_output_path = estimator.output_path + estimator.latest_training_job.job_name + "/rule-output"
print(f"You will find the profiler report in {rule_output_path}")
```
For more information about how to download and open the Debugger profiling report, see [SageMaker Debugger Profiling Report](https://docs.aws.amazon.com/sagemaker/latest/dg/debugger-profiling-report.html) in the SageMaker developer guide.
# 7.6 Implementing a Transformer Model (for Classification Tasks)
- In this file, we implement a Transformer model for class classification.
Note: all files in this chapter assume they are run on Ubuntu. Be careful when running them in environments with a different character encoding, such as Windows.
# 7.6 Learning Goals
1. Understand the module structure of the Transformer
2. Understand why natural language processing is possible with a CNN-based Transformer, without using LSTMs or RNNs
3. Become able to implement a Transformer
# Preparation
Following the book's instructions, prepare the data used in this chapter.
```
import math
import numpy as np
import random
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchtext
# Setup seeds
torch.manual_seed(1234)
np.random.seed(1234)
random.seed(1234)
class Embedder(nn.Module):
'''Converts words given as IDs into embedding vectors'''
def __init__(self, text_embedding_vectors):
super(Embedder, self).__init__()
self.embeddings = nn.Embedding.from_pretrained(
embeddings=text_embedding_vectors, freeze=True)
# With freeze=True, the embeddings are not updated by backpropagation
def forward(self, x):
x_vec = self.embeddings(x)
return x_vec
# Sanity check
# Get the DataLoaders and TEXT from the previous section
from utils.dataloader import get_IMDb_DataLoaders_and_TEXT
train_dl, val_dl, test_dl, TEXT = get_IMDb_DataLoaders_and_TEXT(
max_length=256, batch_size=24)
# Prepare a mini-batch
batch = next(iter(train_dl))
# Build the model
net1 = Embedder(TEXT.vocab.vectors)
# Input/output
x = batch.Text[0]
x1 = net1(x)  # convert the words to vectors
print("Input tensor size:", x.shape)
print("Output tensor size:", x1.shape)
class PositionalEncoder(nn.Module):
'''Adds positional-encoding vectors that indicate each word's position in the input'''
def __init__(self, d_model=300, max_seq_len=256):
super().__init__()
self.d_model = d_model  # dimensionality of the word vectors
# Build a table pe whose values are uniquely determined by the word position (pos) and the embedding dimension index (i)
pe = torch.zeros(max_seq_len, d_model)
# If a GPU is available, send pe to it; omitted here, but used during actual training
# device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# pe = pe.to(device)
for pos in range(max_seq_len):
for i in range(0, d_model, 2):
pe[pos, i] = math.sin(pos / (10000 ** ((2 * i)/d_model)))
pe[pos, i + 1] = math.cos(pos /
(10000 ** ((2 * (i + 1))/d_model)))
# Add a leading dimension to pe for the mini-batch axis
self.pe = pe.unsqueeze(0)
# Do not compute gradients for pe
self.pe.requires_grad = False
def forward(self, x):
# Add the positional encoding to the input x
# x is smaller in scale than pe, so scale it up
ret = math.sqrt(self.d_model)*x + self.pe
return ret
# Sanity check
# Build the model
net1 = Embedder(TEXT.vocab.vectors)
net2 = PositionalEncoder(d_model=300, max_seq_len=256)
# Input/output
x = batch.Text[0]
x1 = net1(x)  # convert the words to vectors
x2 = net2(x1)
print("Input tensor size:", x1.shape)
print("Output tensor size:", x2.shape)
class Attention(nn.Module):
'''The real Transformer uses multi-head attention, but
for clarity we implement single-head attention here'''
def __init__(self, d_model=300):
super().__init__()
# SAGAN used 1d convolutions, but here we transform the features with fully connected layers
self.q_linear = nn.Linear(d_model, d_model)
self.v_linear = nn.Linear(d_model, d_model)
self.k_linear = nn.Linear(d_model, d_model)
# Fully connected layer used for the output
self.out = nn.Linear(d_model, d_model)
# Scaling factor for the attention scores
self.d_k = d_model
def forward(self, q, k, v, mask):
# Transform the features with the fully connected layers
k = self.k_linear(k)
q = self.q_linear(q)
v = self.v_linear(v)
# Compute the attention scores
# Dividing by sqrt(d_k) keeps the summed values from getting too large
weights = torch.matmul(q, k.transpose(1, 2)) / math.sqrt(self.d_k)
# Apply the mask here
mask = mask.unsqueeze(1)
weights = weights.masked_fill(mask == 0, -1e9)
# Normalize with softmax
normlized_weights = F.softmax(weights, dim=-1)
# Multiply the attention weights with the values
output = torch.matmul(normlized_weights, v)
# Transform the features with the output fully connected layer
output = self.out(output)
return output, normlized_weights
class FeedForward(nn.Module):
def __init__(self, d_model, d_ff=1024, dropout=0.1):
'''A simple unit that transforms the Attention layer's output with two fully connected layers'''
super().__init__()
self.linear_1 = nn.Linear(d_model, d_ff)
self.dropout = nn.Dropout(dropout)
self.linear_2 = nn.Linear(d_ff, d_model)
def forward(self, x):
x = self.linear_1(x)
x = self.dropout(F.relu(x))
x = self.linear_2(x)
return x
class TransformerBlock(nn.Module):
def __init__(self, d_model, dropout=0.1):
super().__init__()
# LayerNormalization layers
# https://pytorch.org/docs/stable/nn.html?highlight=layernorm
self.norm_1 = nn.LayerNorm(d_model)
self.norm_2 = nn.LayerNorm(d_model)
# Attention layer
self.attn = Attention(d_model)
# Two fully connected layers after the attention
self.ff = FeedForward(d_model)
# Dropout
self.dropout_1 = nn.Dropout(dropout)
self.dropout_2 = nn.Dropout(dropout)
def forward(self, x, mask):
# Normalization and attention
x_normlized = self.norm_1(x)
output, normlized_weights = self.attn(
x_normlized, x_normlized, x_normlized, mask)
x2 = x + self.dropout_1(output)
# Normalization and feed-forward layers
x_normlized2 = self.norm_2(x2)
output = x2 + self.dropout_2(self.ff(x_normlized2))
return output, normlized_weights
# Sanity check
# Build the model
net1 = Embedder(TEXT.vocab.vectors)
net2 = PositionalEncoder(d_model=300, max_seq_len=256)
net3 = TransformerBlock(d_model=300)
# Create the mask
x = batch.Text[0]
input_pad = 1  # because '<pad>' has ID 1 in the vocabulary
input_mask = (x != input_pad)
print(input_mask[0])
# Input/output
x1 = net1(x)  # convert the words to vectors
x2 = net2(x1)  # add positional information
x3, normlized_weights = net3(x2, input_mask)  # transform the features with self-attention
print("Input tensor size:", x2.shape)
print("Output tensor size:", x3.shape)
print("Attention size:", normlized_weights.shape)
class ClassificationHead(nn.Module):
'''Uses the TransformerBlock output to perform the final class classification'''
def __init__(self, d_model=300, output_dim=2):
super().__init__()
# Fully connected layer
self.linear = nn.Linear(d_model, output_dim)  # output_dim is 2: positive/negative
# Weight initialization
nn.init.normal_(self.linear.weight, std=0.02)
nn.init.normal_(self.linear.bias, 0)
def forward(self, x):
x0 = x[:, 0, :]  # extract the features (300 dims) of the first token of each sentence in the mini-batch
out = self.linear(x0)
return out
# Sanity check
# Prepare a mini-batch
batch = next(iter(train_dl))
# Build the model
net1 = Embedder(TEXT.vocab.vectors)
net2 = PositionalEncoder(d_model=300, max_seq_len=256)
net3 = TransformerBlock(d_model=300)
net4 = ClassificationHead(output_dim=2, d_model=300)
# Input/output
x = batch.Text[0]
x1 = net1(x)  # convert the words to vectors
x2 = net2(x1)  # add positional information
x3, normlized_weights = net3(x2, input_mask)  # transform the features with self-attention
x4 = net4(x3)  # use the 0th token of the final output to produce the classification scores
print("Input tensor size:", x3.shape)
print("Output tensor size:", x4.shape)
# Final Transformer model class
class TransformerClassification(nn.Module):
'''Performs class classification with a Transformer'''
def __init__(self, text_embedding_vectors, d_model=300, max_seq_len=256, output_dim=2):
super().__init__()
# Build the model
self.net1 = Embedder(text_embedding_vectors)
self.net2 = PositionalEncoder(d_model=d_model, max_seq_len=max_seq_len)
self.net3_1 = TransformerBlock(d_model=d_model)
self.net3_2 = TransformerBlock(d_model=d_model)
self.net4 = ClassificationHead(output_dim=output_dim, d_model=d_model)
def forward(self, x, mask):
x1 = self.net1(x)  # convert the words to vectors
x2 = self.net2(x1)  # add positional information
x3_1, normlized_weights_1 = self.net3_1(
x2, mask)  # transform the features with self-attention
x3_2, normlized_weights_2 = self.net3_2(
x3_1, mask)  # transform the features with self-attention
x4 = self.net4(x3_2)  # use the 0th token of the final output to produce the classification scores
return x4, normlized_weights_1, normlized_weights_2
# Sanity check
# Prepare a mini-batch
batch = next(iter(train_dl))
# Build the model
net = TransformerClassification(
text_embedding_vectors=TEXT.vocab.vectors, d_model=300, max_seq_len=256, output_dim=2)
# Input/output
x = batch.Text[0]
input_mask = (x != input_pad)
out, normlized_weights_1, normlized_weights_2 = net(x, input_mask)
print("Output tensor size:", out.shape)
print("Output tensor after softmax:", F.softmax(out, dim=1))
```
Save everything up to this point separately as transformer.py in the "utils" folder; from the next section onward we will import it from there.
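For example, later sections could then start with an import like the following (a sketch, assuming the classes above are saved unchanged to `utils/transformer.py`):
```
# Sketch: importing the saved modules in later notebooks
from utils.transformer import TransformerClassification

net = TransformerClassification(
    text_embedding_vectors=TEXT.vocab.vectors, d_model=300, max_seq_len=256, output_dim=2)
```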
That's all.
# Zircon model training notebook; (extensively) modified from Detectron2 training tutorial
This Colab Notebook will allow users to train new models to detect and segment detrital zircon from RL images using Detectron2 and the training dataset provided in the colab_zirc_dims repo. It is set up to train a Mask RCNN model (ResNet depth=101), but could be modified for other instance segmentation models provided that they are supported by Detectron2.
The training dataset should be uploaded to the user's Google Drive before running this notebook.
## Install detectron2
```
!pip install pyyaml==5.1
import torch
TORCH_VERSION = ".".join(torch.__version__.split(".")[:2])
CUDA_VERSION = torch.__version__.split("+")[-1]
print("torch: ", TORCH_VERSION, "; cuda: ", CUDA_VERSION)
# Install detectron2 that matches the above pytorch version
# See https://detectron2.readthedocs.io/tutorials/install.html for instructions
!pip install detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/$CUDA_VERSION/torch$TORCH_VERSION/index.html
exit(0) # Automatically restarts runtime after installation
# Some basic setup:
# Setup detectron2 logger
import detectron2
from detectron2.utils.logger import setup_logger
setup_logger()
# import some common libraries
import numpy as np
import os, json, cv2, random
from google.colab.patches import cv2_imshow
import copy
import time
import datetime
import logging
import random
import shutil
import torch
# import some common detectron2 utilities
from detectron2.engine.hooks import HookBase
from detectron2 import model_zoo
from detectron2.evaluation import inference_context, COCOEvaluator
from detectron2.engine import DefaultPredictor
from detectron2.config import get_cfg
from detectron2.utils.visualizer import Visualizer
from detectron2.utils.logger import log_every_n_seconds
from detectron2.data import MetadataCatalog, DatasetCatalog, build_detection_train_loader, DatasetMapper, build_detection_test_loader
import detectron2.utils.comm as comm
from detectron2.data import detection_utils as utils
from detectron2.config import LazyConfig
import detectron2.data.transforms as T
```
## Define Augmentations
The cell below defines augmentations used while training to ensure that models never see the same exact image twice during training. This mitigates overfitting and allows models to achieve substantially higher accuracy in their segmentations/measurements.
```
custom_transform_list = [T.ResizeShortestEdge([800,800]), #resize shortest edge of image to 800 pixels
T.RandomCrop('relative', (0.95, 0.95)), #randomly crop an area (95% size of original) from image
T.RandomLighting(100), #minor lighting randomization
T.RandomContrast(.85, 1.15), #minor contrast randomization
T.RandomFlip(prob=.5, horizontal=False, vertical=True), #random vertical flipping
T.RandomFlip(prob=.5, horizontal=True, vertical=False), #and horizontal flipping
T.RandomApply(T.RandomRotation([-30, 30], False), prob=.8), #random (80% probability) rotation up to 30 degrees; \
# more rotation does not seem to improve results
T.ResizeShortestEdge([800,800])] # resize img again for uniformity
```
## Mount Google Drive, set paths to dataset, model saving directories
```
from google.colab import drive
drive.mount('/content/drive')
#@markdown ### Add path to training dataset directory
dataset_dir = '/content/drive/MyDrive/training_dataset' #@param {type:"string"}
#@markdown ### Add path to model saving directory (automatically created if it does not yet exist)
model_save_dir = '/content/drive/MyDrive/NAME FOR MODEL SAVING FOLDER HERE' #@param {type:"string"}
os.makedirs(model_save_dir, exist_ok=True)
```
## Define dataset mapper, training, loss eval functions
```
from detectron2.engine import DefaultTrainer
from detectron2.data import DatasetMapper
from detectron2.structures import BoxMode
# a function to convert Via image annotation .json dict format to Detectron2 \
# training input dict format
def get_zircon_dicts(img_dir):
json_file = os.path.join(img_dir, "via_region_data.json")
with open(json_file) as f:
imgs_anns = json.load(f)['_via_img_metadata']
dataset_dicts = []
for idx, v in enumerate(imgs_anns.values()):
record = {}
filename = os.path.join(img_dir, v["filename"])
height, width = cv2.imread(filename).shape[:2]
record["file_name"] = filename
record["image_id"] = idx
record["height"] = height
record["width"] = width
#annos = v["regions"]
annos = {}
for n, eachitem in enumerate(v['regions']):
annos[str(n)] = eachitem
objs = []
for _, anno in annos.items():
#assert not anno["region_attributes"]
anno = anno["shape_attributes"]
px = anno["all_points_x"]
py = anno["all_points_y"]
poly = [(x + 0.5, y + 0.5) for x, y in zip(px, py)]
poly = [p for x in poly for p in x]
obj = {
"bbox": [np.min(px), np.min(py), np.max(px), np.max(py)],
"bbox_mode": BoxMode.XYXY_ABS,
"segmentation": [poly],
"category_id": 0,
}
objs.append(obj)
record["annotations"] = objs
dataset_dicts.append(record)
return dataset_dicts
# loss eval hook for getting validation loss, copying to metrics.json; \
# from https://gist.github.com/ortegatron/c0dad15e49c2b74de8bb09a5615d9f6b
class LossEvalHook(HookBase):
def __init__(self, eval_period, model, data_loader):
self._model = model
self._period = eval_period
self._data_loader = data_loader
def _do_loss_eval(self):
# Copying inference_on_dataset from evaluator.py
total = len(self._data_loader)
num_warmup = min(5, total - 1)
start_time = time.perf_counter()
total_compute_time = 0
losses = []
for idx, inputs in enumerate(self._data_loader):
if idx == num_warmup:
start_time = time.perf_counter()
total_compute_time = 0
start_compute_time = time.perf_counter()
if torch.cuda.is_available():
torch.cuda.synchronize()
total_compute_time += time.perf_counter() - start_compute_time
iters_after_start = idx + 1 - num_warmup * int(idx >= num_warmup)
seconds_per_img = total_compute_time / iters_after_start
if idx >= num_warmup * 2 or seconds_per_img > 5:
total_seconds_per_img = (time.perf_counter() - start_time) / iters_after_start
eta = datetime.timedelta(seconds=int(total_seconds_per_img * (total - idx - 1)))
log_every_n_seconds(
logging.INFO,
"Loss on Validation done {}/{}. {:.4f} s / img. ETA={}".format(
idx + 1, total, seconds_per_img, str(eta)
),
n=5,
)
loss_batch = self._get_loss(inputs)
losses.append(loss_batch)
mean_loss = np.mean(losses)
self.trainer.storage.put_scalar('validation_loss', mean_loss)
comm.synchronize()
return losses
def _get_loss(self, data):
# How loss is calculated on train_loop
metrics_dict = self._model(data)
metrics_dict = {
k: v.detach().cpu().item() if isinstance(v, torch.Tensor) else float(v)
for k, v in metrics_dict.items()
}
total_losses_reduced = sum(loss for loss in metrics_dict.values())
return total_losses_reduced
def after_step(self):
next_iter = self.trainer.iter + 1
is_final = next_iter == self.trainer.max_iter
if is_final or (self._period > 0 and next_iter % self._period == 0):
self._do_loss_eval()
#trainer for zircons which incorporates augmentation, hooks for eval
class ZirconTrainer(DefaultTrainer):
@classmethod
def build_train_loader(cls, cfg):
#return a custom train loader with augmentations; recompute_boxes \
# is important given cropping, rotation augs
return build_detection_train_loader(cfg, mapper=
DatasetMapper(cfg, is_train=True, recompute_boxes = True,
augmentations = custom_transform_list
),
)
@classmethod
def build_evaluator(cls, cfg, dataset_name, output_folder=None):
if output_folder is None:
output_folder = os.path.join(cfg.OUTPUT_DIR, "inference")
return COCOEvaluator(dataset_name, cfg, True, output_folder)
#set up validation loss eval hook
def build_hooks(self):
hooks = super().build_hooks()
hooks.insert(-1,LossEvalHook(
cfg.TEST.EVAL_PERIOD,
self.model,
build_detection_test_loader(
self.cfg,
self.cfg.DATASETS.TEST[0],
DatasetMapper(self.cfg,True)
)
))
return hooks
```
## Import train, val catalogs
```
#registers training, val datasets (converts annotations using get_zircon_dicts)
for d in ["train", "val"]:
DatasetCatalog.register("zircon_" + d, lambda d=d: get_zircon_dicts(dataset_dir + "/" + d))
MetadataCatalog.get("zircon_" + d).set(thing_classes=["zircon"])
zircon_metadata = MetadataCatalog.get("zircon_train")
train_cat = DatasetCatalog.get("zircon_train")
```
## Visualize train dataset
```
# visualize random sample from training dataset
dataset_dicts = get_zircon_dicts(os.path.join(dataset_dir, 'train'))
for d in random.sample(dataset_dicts, 4): #change int here to change sample size
img = cv2.imread(d["file_name"])
visualizer = Visualizer(img[:, :, ::-1], metadata=zircon_metadata, scale=0.5)
out = visualizer.draw_dataset_dict(d)
cv2_imshow(out.get_image()[:, :, ::-1])
```
# Define save to Drive function
```
# a function to save models (with iteration number in name), metrics to drive; \
# important in case training crashes or is left unattended and disconnects. \
def save_outputs_to_drive(model_name, iters):
root_output_dir = os.path.join(model_save_dir, model_name) #output_dir = save dir from user input
#creates individual model output directory if it does not already exist
os.makedirs(root_output_dir, exist_ok=True)
#creates a name for this version of model; include iteration number
curr_iters_str = str(round(iters/1000, 1)) + 'k'
curr_model_name = model_name + '_' + curr_iters_str + '.pth'
model_save_pth = os.path.join(root_output_dir, curr_model_name)
#get most recent model, current metrics, copy to drive
model_path = os.path.join(cfg.OUTPUT_DIR, "model_final.pth")
metrics_path = os.path.join(cfg.OUTPUT_DIR, 'metrics.json')
shutil.copy(model_path, model_save_pth)
shutil.copy(metrics_path, root_output_dir)
```
## Build, train model
### Set some parameters for training
```
#@markdown ### Add a base name for the model
model_save_name = 'your model name here' #@param {type:"string"}
#@markdown ### Final iteration before training stops
final_iteration = 8000 #@param {type:"slider", min:3000, max:15000, step:1000}
```
### Actually build and train model
```
#train from a pre-trained Mask RCNN model
cfg = get_cfg()
# train from base model: Default Mask RCNN
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_101_FPN_3x.yaml"))
# Load starting weights (COCO trained) from Detectron2 model zoo.
cfg.MODEL.WEIGHTS = "https://dl.fbaipublicfiles.com/detectron2/COCO-InstanceSegmentation/mask_rcnn_R_101_FPN_3x/138205316/model_final_a3ec72.pkl"
cfg.DATASETS.TRAIN = ("zircon_train",) #load training dataset
cfg.DATASETS.TEST = ("zircon_val",) # load validation dataset
cfg.DATALOADER.NUM_WORKERS = 2
cfg.SOLVER.IMS_PER_BATCH = 2 #2 ims per batch seems to be good for model generalization
cfg.SOLVER.BASE_LR = 0.00025 # low but reasonable learning rate given pre-training; \
# by default initializes with a 1000 iteration warmup
cfg.SOLVER.MAX_ITER = 2000 #train for 2000 iterations before 1st save
cfg.SOLVER.GAMMA = 0.5
#decay learning rate by factor of GAMMA every 1000 iterations after 2000 iterations \
# and until 10000 iterations This works well for current version of training \
# dataset but should be modified (probably a longer interval) if dataset is ever\
# extended.
cfg.SOLVER.STEPS = (1999, 2999, 3999, 4999, 5999, 6999, 7999, 8999, 9999)
cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 512 # use default ROI heads batch size
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1 # only class here is zircon
cfg.MODEL.RPN.NMS_THRESH = 0.1 #sets NMS threshold lower than default; should(?) eliminate overlapping regions
cfg.TEST.EVAL_PERIOD = 200 # validation eval every 200 iterations
os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)
trainer = ZirconTrainer(cfg) #our zircon trainer, w/ built-in augs and val loss eval
trainer.resume_or_load(resume=False)
trainer.train() #start training
# stop training and save for the 1st time after 2000 iterations
save_outputs_to_drive(model_save_name, 2000)
# Saves, cold restarts training from saved model weights every 1000 iterations \
# until final iteration. This should probably be done via hooks without stopping \
# training but *seems* to produce faster decrease in validation loss.
for each_iters in [iter*1000 for iter in list(range(3,
int(final_iteration/1000) + 1,
1))]:
#reload model with last iteration model weights
resume_model_path = os.path.join(cfg.OUTPUT_DIR, "model_final.pth")
cfg.MODEL.WEIGHTS = resume_model_path
cfg.SOLVER.MAX_ITER = each_iters #increase max iterations
trainer = ZirconTrainer(cfg)
trainer.resume_or_load(resume=True)
trainer.train() #restart training
#save again
save_outputs_to_drive(model_save_name, each_iters)
# open tensorboard training metrics curves (metrics.json):
%load_ext tensorboard
%tensorboard --logdir output
```
## Inference & evaluation with final trained model
Initialize model from saved weights:
```
cfg.MODEL.WEIGHTS = os.path.join(cfg.OUTPUT_DIR, "model_final.pth") # final model; modify path to other non-final model to view their segmentations
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5 # set a custom testing threshold
cfg.MODEL.RPN.NMS_THRESH = 0.1
predictor = DefaultPredictor(cfg)
```
View model segmentations for random sample of images from zircon validation dataset:
```
from detectron2.utils.visualizer import ColorMode
dataset_dicts = get_zircon_dicts(os.path.join(dataset_dir, 'val'))
for d in random.sample(dataset_dicts, 5):
im = cv2.imread(d["file_name"])
outputs = predictor(im) # format is documented at https://detectron2.readthedocs.io/tutorials/models.html#model-output-format
v = Visualizer(im[:, :, ::-1],
metadata=zircon_metadata,
scale=1.5,
instance_mode=ColorMode.IMAGE_BW # remove the colors of unsegmented pixels. This option is only available for segmentation models
)
out = v.draw_instance_predictions(outputs["instances"].to("cpu"))
cv2_imshow(out.get_image()[:, :, ::-1])
```
Validation eval with COCO API metric:
```
from detectron2.evaluation import COCOEvaluator, inference_on_dataset
from detectron2.data import build_detection_test_loader
evaluator = COCOEvaluator("zircon_val", ("bbox", "segm"), False, output_dir="./output/")
val_loader = build_detection_test_loader(cfg, "zircon_val")
print(inference_on_dataset(trainer.model, val_loader, evaluator))
```
## Final notes:
To use newly-trained models in colab_zirc_dims:
#### Option A:
Modify the cell that initializes model(s) in colab_zirc_dims processing notebooks:
```
cfg.merge_from_file(model_zoo.get_config_file(DETECTRON2 BASE CONFIG FILE LINK FOR YOUR MODEL HERE))
cfg.MODEL.RESNETS.DEPTH = RESNET DEPTH FOR YOUR MODEL (E.G., 101) HERE
cfg.MODEL.WEIGHTS = PATH TO YOUR MODEL IN YOUR GOOGLE DRIVE HERE
```
#### Option B (more complicated but potentially useful for many models):
The dynamic model selection tool in colab_zirc_dims is populated from a .json file model library dictionary, which is by default [the current version on the GitHub repo.](https://github.com/MCSitar/colab_zirc_dims/blob/main/czd_model_library.json) The 'url' key in the dict will work with either an AWS download link for the model or the path to model in your Google Drive.
To use a custom model library dictionary:
Modify a copy of the colab_zirc_dims [.json file model library dictionary](https://github.com/MCSitar/colab_zirc_dims/blob/main/czd_model_library.json) to include download link(s)/Drive path(s) and metadata (e.g., ResNet depth and config file) for your model(s). Upload this .json file to your Google Drive and change the 'model_lib_loc' variable in a processing notebook to the .json's path for dynamic download and loading of this and other models within the notebook. A hypothetical entry is sketched below.
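A sketch of writing such an entry from Python; apart from the `'url'` key mentioned above, the key names and overall structure here are assumptions and should be copied from the repo's own `czd_model_library.json`:
```
# Sketch: building a customized model library .json (key names besides 'url' are
# assumptions; mirror the structure of the original czd_model_library.json).
import json

my_entry = {
    'name': 'my_custom_zircon_model',  # hypothetical display name
    'url': '/content/drive/MyDrive/MODEL SAVING FOLDER/your_model_8.0k.pth',  # Drive path or AWS link
    'resnet_depth': 101,  # must match the trained model
    'config_file': 'COCO-InstanceSegmentation/mask_rcnn_R_101_FPN_3x.yaml',
}

with open('my_czd_model_library.json', 'w') as f:
    json.dump([my_entry], f, indent=2)
```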
```
import torch
import torch.nn as nn
import numpy as np
import matplotlib.pyplot as plt
```
# Pytorch: An automatic differentiation tool
With `Pytorch`, you can compute derivatives of complicated functions easily and efficiently!
When training complex deep neural networks with `Pytorch`, you can easily compute the partial derivatives of the loss function with respect to the parameters!
## A First Encounter with Pytorch
Suppose we are given a simple linear equation like the one below.
$$ y = wx $$
Then how can we compute $\frac{\partial y}{\partial w}$?
Differentiating by hand gives $\frac{\partial y}{\partial w} = x$, so let's see
how to compute this value with `pytorch` in a simple example!
```
# Create a rank-1 / size-1 pytorch tensor whose value is 1*2
x = torch.ones(1) * 2
# Create a rank-1 / size-1 pytorch tensor whose value is 1
w = torch.ones(1, requires_grad=True)
y = w * x
y
```
## Computing the Partial Derivatives!
In pytorch, calling `.backward()` on the tensor whose derivative you want triggers computation of the partial derivatives for every tensor in that computation that requires gradients. You can mark which tensors need gradients with `requires_grad=True`.
```
y.backward()
```
## Checking the Partial Derivative Values!
You can inspect a tensor's gradient via `tensor.grad`. Shall we use `w.grad` to check the partial derivative of `y` with respect to `w`?
```
w.grad
```
## And What About the Case requires_grad = False?
```
x.grad
```
## `torch.nn`, the Neural Network Package
`pytorch` already implements a wide range of neural network modules. Let's look at the simplest but most frequently used one, `nn.Linear`, and learn about `pytorch`'s `nn.Module` along the way.
## A Look at `nn.Linear`
`nn.Linear` holds the parameters $w$, $b$ corresponding to one layer of the linear regression and multi-layer perceptron models we studied earlier. As an example, let's create an `nn.Linear` module with input dimension 10 and output dimension 1!
```
lin = nn.Linear(in_features=10, out_features=1)
for p in lin.parameters():
print(p)
print(p.shape)
print('\n')
```
## Computing $y = Wx+b$ with the `Linear` Module
Just as in the linear regression model, remember that a single layer of a multi-layer perceptron computes the formula below:
$$y = Wx+b$$
Shall we compute that formula with `nn.Linear`?
To make checking the result easier, we set all values of W to 1.0 and b to 5.0.
```
lin.weight.data = torch.ones_like(lin.weight.data)
lin.bias.data = torch.ones_like(lin.bias.data) * 5.0
for p in lin.parameters():
print(p)
print(p.shape)
print('\n')
x = torch.ones(3, 10) # create a rank-2 tensor: mini-batch size = 3
y_hat = lin(x)
print(y_hat.shape)
print(y_hat)
```
## What Just Happened?
>Q1. Why do we use a rank-2 tensor as input? <br>
>A1. The classes defined in pytorch's `nn` interpret the first dimension of the input as the `batch size`.
>Q2. What exactly is lin(x)? <br>
>A2. If you are familiar with Python, you know that `object()` runs the function defined in `object.__call__()`. Pytorch's `nn.Module` __recommends__ implementing a `forward()` function that overrides what `__call__()` runs. In general, the actual layer computation using the parameters and the input is implemented inside `forward()`.
There are several reasons for this, but one is that pytorch performs additional work before and after `forward()` runs to provide a user-friendly environment. We'll explain this in more detail in the next practice session when we build a multi-layer perceptron model (a minimal sketch follows below).
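A minimal sketch of how `forward()` is used in a custom module; the module and variable names here are illustrative:
```
# Sketch: a custom nn.Module. Calling the instance (toy_net(x)) runs nn.Module.__call__,
# which in turn calls the forward() defined here.
class TwoLayerNet(nn.Module):
    def __init__(self, in_dim=10, hidden_dim=5, out_dim=1):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, out_dim)

    def forward(self, x):
        h = torch.relu(self.fc1(x))
        return self.fc2(h)

toy_net = TwoLayerNet()
toy_out = toy_net(torch.ones(3, 10))  # batch size 3 in the first dimension
print(toy_out.shape)
```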
## Implementing Linear Regression, Simply, with Pytorch
Shall we re-implement the linear regression model we built with numpy in the last practice session, this time with pytorch? <br>
It's simple enough to finish in just a few lines :)
```
def generate_samples(n_samples: int,
w: float = 1.0,
b: float = 0.5,
x_range=[-1.0,1.0]):
xs = np.random.uniform(low=x_range[0], high=x_range[1], size=n_samples)
ys = w * xs + b
xs = torch.tensor(xs).view(-1,1).float() # pytorch nn.Module expects the batch as the first dimension!
ys = torch.tensor(ys).view(-1,1).float()
return xs, ys
w = 1.0
b = 0.5
xs, ys = generate_samples(30, w=w, b=b)
lin_model = nn.Linear(in_features=1, out_features=1) # create lin_model
for p in lin_model.parameters():
print(p)
print(p.grad)
ys_hat = lin_model(xs) # predict with lin_model
```
## The Loss Function? MSE!
`pytorch` also ships implementations of the commonly used loss functions.
In this practice, shall we use the MSE from __building a linear regression model with numpy__ as our loss function?
```
criteria = nn.MSELoss()
loss = criteria(ys_hat, ys)
```
## Updating the Parameters with Gradient Descent!
`pytorch` implements a variety of optimizers for you. Let's start with the simplest one, stochastic gradient descent (SGD). Each optimizer accepts different arguments, but basically you just specify `params` and `lr`, and the remaining arguments default to values known to work well for that optimizer.
```
opt = torch.optim.SGD(params=lin_model.parameters(), lr=0.01)
```
## Don't Forget opt.zero_grad()!
Before computing partial derivatives with `pytorch`, it is recommended to always call `opt.zero_grad()` to reset the gradients of the tensors that require them.
```
opt.zero_grad()
for p in lin_model.parameters():
print(p)
print(p.grad)
loss.backward()
opt.step()
for p in lin_model.parameters():
print(p)
print(p.grad)
```
## Let's Find the Optimal Parameters with Gradient Descent!
```
def run_sgd(n_steps: int = 1000,
report_every: int = 100,
verbose=True):
lin_model = nn.Linear(in_features=1, out_features=1)
opt = torch.optim.SGD(params=lin_model.parameters(), lr=0.01)
sgd_losses = []
for i in range(n_steps):
ys_hat = lin_model(xs)
loss = criteria(ys_hat, ys)
opt.zero_grad()
loss.backward()
opt.step()
if i % report_every == 0:
if verbose:
print('\n')
print("{}th update: {}".format(i,loss))
for p in lin_model.parameters():
print(p)
sgd_losses.append(loss.log10().detach().numpy())
return sgd_losses
_ = run_sgd()
```
## Shall We Try Another Optimizer?
What result do we get if we optimize with Adam, which we learned about in class?
```
def run_adam(n_steps: int = 1000,
report_every: int = 100,
verbose=True):
lin_model = nn.Linear(in_features=1, out_features=1)
opt = torch.optim.Adam(params=lin_model.parameters(), lr=0.01)
adam_losses = []
for i in range(n_steps):
ys_hat = lin_model(xs)
loss = criteria(ys_hat, ys)
opt.zero_grad()
loss.backward()
opt.step()
if i % report_every == 0:
if verbose:
print('\n')
print("{}th update: {}".format(i,loss))
for p in lin_model.parameters():
print(p)
adam_losses.append(loss.log10().detach().numpy())
return adam_losses
_ = run_adam()
```
## Shall We Compare in More Detail?
In `pytorch`, unless you do something special, the parameters inside `nn.Linear` and many other modules
are initialized to random values, and initialized __well!__
> We did not cover what "well!" means in class, but it is definitely one of the important factors that make modern deep learning work. These techniques are called parameter initialization, and most `pytorch` modules are coded so that their parameters are initialized in a way that is generally known to work well for that particular module (a small sketch follows below).
So every time you create a module, its initial parameter values differ. For a fairer comparison, let's repeat the experiment above several times and check whether Adam is also better on average.
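For reference, a small sketch of re-initializing the parameters explicitly with `torch.nn.init`; the schemes chosen here are illustrative, not what `nn.Linear` uses by default:
```
# Sketch: explicitly re-initializing an nn.Linear with a chosen scheme
# (illustrative; nn.Linear already applies its own default initialization).
lin_demo = nn.Linear(in_features=1, out_features=1)
nn.init.xavier_uniform_(lin_demo.weight)
nn.init.zeros_(lin_demo.bias)
for p in lin_demo.parameters():
    print(p)
```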
```
sgd_losses = [run_sgd(verbose=False) for _ in range(50)]
sgd_losses = np.stack(sgd_losses)
sgd_loss_mean = np.mean(sgd_losses, axis=0)
sgd_loss_std = np.std(sgd_losses, axis=-0)
adam_losses = [run_adam(verbose=False) for _ in range(50)]
adam_losses = np.stack(adam_losses)
adam_loss_mean = np.mean(adam_losses, axis=0)
adam_loss_std = np.std(adam_losses, axis=-0)
fig, ax = plt.subplots(1,1, figsize=(10,5))
ax.grid()
ax.fill_between(x=range(sgd_loss_mean.shape[0]),
y1=sgd_loss_mean + sgd_loss_std,
y2=sgd_loss_mean - sgd_loss_std,
alpha=0.3)
ax.plot(sgd_loss_mean, label='SGD')
ax.fill_between(x=range(adam_loss_mean.shape[0]),
y1=adam_loss_mean + adam_loss_std,
y2=adam_loss_mean - adam_loss_std,
alpha=0.3)
ax.plot(adam_loss_mean, label='Adam')
ax.legend()
```
# Callbacks and Multiple inputs
```
import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
from sklearn.preprocessing import scale
from keras.optimizers import SGD
from keras.layers import Dense, Input, concatenate, BatchNormalization
from keras.callbacks import EarlyStopping, TensorBoard, ModelCheckpoint
from keras.models import Model
import keras.backend as K
df = pd.read_csv("../data/titanic-train.csv")
Y = df['Survived']
df.info()
df.head()
num_features = df[['Age', 'Fare', 'SibSp', 'Parch']].fillna(0)
num_features.head()
cat_features = pd.get_dummies(df[['Pclass', 'Sex', 'Embarked']].astype('str'))
cat_features.head()
X1 = scale(num_features.values)
X2 = cat_features.values
K.clear_session()
# Numerical features branch
inputs1 = Input(shape = (X1.shape[1],))
b1 = BatchNormalization()(inputs1)
b1 = Dense(3, kernel_initializer='normal', activation = 'tanh')(b1)
b1 = BatchNormalization()(b1)
# Categorical features branch
inputs2 = Input(shape = (X2.shape[1],))
b2 = Dense(8, kernel_initializer='normal', activation = 'relu')(inputs2)
b2 = BatchNormalization()(b2)
b2 = Dense(4, kernel_initializer='normal', activation = 'relu')(b2)
b2 = BatchNormalization()(b2)
b2 = Dense(2, kernel_initializer='normal', activation = 'relu')(b2)
b2 = BatchNormalization()(b2)
merged = concatenate([b1, b2])
preds = Dense(1, activation = 'sigmoid')(merged)
# final model
model = Model([inputs1, inputs2], preds)
model.compile(loss = 'binary_crossentropy',
optimizer = 'rmsprop',
metrics = ['accuracy'])
model.summary()
outpath='/tmp/tensorflow_logs/titanic/'
early_stopper = EarlyStopping(monitor='val_acc', patience=10)
tensorboard = TensorBoard(outpath+'tensorboard/', histogram_freq=1)
checkpointer = ModelCheckpoint(outpath+'weights_epoch_{epoch:02d}_val_acc_{val_acc:.2f}.hdf5',
monitor='val_acc')
# You may have to run this a couple of times if stuck on local minimum
np.random.seed(2017)
h = model.fit([X1, X2],
Y.values,
batch_size = 32,
epochs = 40,
verbose = 1,
validation_split=0.2,
callbacks=[early_stopper,
tensorboard,
checkpointer])
import os
sorted(os.listdir(outpath))
```
Now check the tensorboard.
- If using the provided AWS instance, just browse to: `http://<your-ip>:6006`
- If using local, open a terminal, activate the environment and run:
```
tensorboard --logdir=/tmp/tensorflow_logs/titanic/tensorboard/
```
then open a browser at `localhost:6006`
You should see something like this:

## Exercise 1
- try modifying the parameters of the 3 callbacks provided. What are they for? What do they do? A hedged example of one possible variation is sketched below.
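One possible variation, with illustrative parameter values rather than recommendations:
```
# Sketch: possible variations of the three callbacks (values are illustrative).
early_stopper = EarlyStopping(monitor='val_loss',  # watch validation loss instead of accuracy
                              patience=5)          # stop sooner after no improvement
tensorboard = TensorBoard(outpath + 'tensorboard/',
                          histogram_freq=0)        # skip histograms to speed up logging
checkpointer = ModelCheckpoint(outpath + 'best_weights.hdf5',
                               monitor='val_acc',
                               save_best_only=True)  # keep only the best checkpoint
```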
*Copyright © 2017 CATALIT LLC. All rights reserved.*
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/Image/extract_value_to_points.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/Image/extract_value_to_points.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://mybinder.org/v2/gh/giswqs/earthengine-py-notebooks/master?filepath=Image/extract_value_to_points.ipynb"><img width=58px src="https://mybinder.org/static/images/logo_social.png" />Run in binder</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/Image/extract_value_to_points.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API and geemap
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.
The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.
**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
```
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as emap
except:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
```
## Create an interactive map
The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py#L13) can be added using the `Map.add_basemap()` function.
```
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
```
## Add Earth Engine Python script
```
# Add Earth Engine dataset
# Input imagery is a cloud-free Landsat 8 composite.
l8 = ee.ImageCollection('LANDSAT/LC08/C01/T1')
image = ee.Algorithms.Landsat.simpleComposite(**{
'collection': l8.filterDate('2018-01-01', '2018-12-31'),
'asFloat': True
})
# Use these bands for prediction.
bands = ['B2', 'B3', 'B4', 'B5', 'B6', 'B7', 'B10', 'B11']
# Load training points. The numeric property 'class' stores known labels.
points = ee.FeatureCollection('GOOGLE/EE/DEMOS/demo_landcover_labels')
# This property of the table stores the land cover labels.
label = 'landcover'
# Overlay the points on the imagery to get training.
training = image.select(bands).sampleRegions(**{
'collection': points,
'properties': [label],
'scale': 30
})
# Define visualization parameters in an object literal.
vizParams = {'bands': ['B5', 'B4', 'B3'],
'min': 0, 'max': 1, 'gamma': 1.3}
Map.centerObject(points, 10)
Map.addLayer(image, vizParams, 'Image')
Map.addLayer(points, {'color': "yellow"}, 'Training points')
first = training.first()
print(first.getInfo())
```
## Display Earth Engine data layers
```
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
```
# AutoGluon Tabular with SageMaker
[AutoGluon](https://github.com/awslabs/autogluon) automates machine learning tasks enabling you to easily achieve strong predictive performance in your applications. With just a few lines of code, you can train and deploy high-accuracy deep learning models on tabular, image, and text data.
This notebook shows how to use AutoGluon-Tabular with Amazon SageMaker by creating custom containers.
## Prerequisites
If using a SageMaker hosted notebook, select kernel `conda_mxnet_p36`.
```
# Make sure docker compose is set up properly for local mode
!./setup.sh
# Imports
import os
import boto3
import sagemaker
from time import sleep
from collections import Counter
import numpy as np
import pandas as pd
from sagemaker import get_execution_role, local, Model, utils, fw_utils, s3
from sagemaker.estimator import Estimator
from sagemaker.predictor import RealTimePredictor, csv_serializer, StringDeserializer
from sklearn.metrics import accuracy_score, classification_report
from IPython.core.display import display, HTML
from IPython.core.interactiveshell import InteractiveShell
# Print settings
InteractiveShell.ast_node_interactivity = "all"
pd.set_option('display.max_columns', 500)
pd.set_option('display.max_rows', 10)
# Account/s3 setup
session = sagemaker.Session()
local_session = local.LocalSession()
bucket = session.default_bucket()
prefix = 'sagemaker/autogluon-tabular'
region = session.boto_region_name
role = get_execution_role()
client = session.boto_session.client(
"sts", region_name=region, endpoint_url=utils.sts_regional_endpoint(region)
)
account = client.get_caller_identity()['Account']
ecr_uri_prefix = utils.get_ecr_image_uri_prefix(account, region)
registry_id = fw_utils._registry_id(region, 'mxnet', 'py3', account, '1.6.0')
registry_uri = utils.get_ecr_image_uri_prefix(registry_id, region)
```
### Build docker images
First, build autogluon package to copy into docker image.
```
if not os.path.exists('package'):
!pip install PrettyTable -t package
!pip install --upgrade boto3 -t package
!pip install bokeh -t package
!pip install --upgrade matplotlib -t package
!pip install autogluon -t package
```
Now build the training/inference image and push to ECR
```
training_algorithm_name = 'autogluon-sagemaker-training'
inference_algorithm_name = 'autogluon-sagemaker-inference'
!./container-training/build_push_training.sh {account} {region} {training_algorithm_name} {ecr_uri_prefix} {registry_id} {registry_uri}
!./container-inference/build_push_inference.sh {account} {region} {inference_algorithm_name} {ecr_uri_prefix} {registry_id} {registry_uri}
```
### Get the data
In this example we'll use the direct-marketing dataset to build a binary classification model that predicts whether customers will accept or decline a marketing offer.
First we'll download the data and split it into train and test sets. AutoGluon does not require a separate validation set (it uses bagged k-fold cross-validation).
```
# Download and unzip the data
!aws s3 cp --region {region} s3://sagemaker-sample-data-{region}/autopilot/direct_marketing/bank-additional.zip .
!unzip -qq -o bank-additional.zip
!rm bank-additional.zip
local_data_path = './bank-additional/bank-additional-full.csv'
data = pd.read_csv(local_data_path)
# Split train/test data
train = data.sample(frac=0.7, random_state=42)
test = data.drop(train.index)
# Split test X/y
label = 'y'
y_test = test[label]
X_test = test.drop(columns=[label])
```
##### Check the data
```
train.head(3)
train.shape
test.head(3)
test.shape
X_test.head(3)
X_test.shape
```
Upload the data to s3
```
train_file = 'train.csv'
train.to_csv(train_file,index=False)
train_s3_path = session.upload_data(train_file, key_prefix='{}/data'.format(prefix))
test_file = 'test.csv'
test.to_csv(test_file,index=False)
test_s3_path = session.upload_data(test_file, key_prefix='{}/data'.format(prefix))
X_test_file = 'X_test.csv'
X_test.to_csv(X_test_file,index=False)
X_test_s3_path = session.upload_data(X_test_file, key_prefix='{}/data'.format(prefix))
```
## Hyperparameter Selection
The minimum required settings for training is just a target label, `fit_args['label']`.
Additional optional hyperparameters can be passed to the `autogluon.task.TabularPrediction.fit` function via `fit_args`.
Below is a more in-depth example of AutoGluon-Tabular hyperparameters from the example [Predicting Columns in a Table - In Depth](https://autogluon.mxnet.io/tutorials/tabular_prediction/tabular-indepth.html#model-ensembling-with-stacking-bagging). Please see [fit parameters](https://autogluon.mxnet.io/api/autogluon.task.html?highlight=eval_metric#autogluon.task.TabularPrediction.fit) for further information. Note that in order for hyperparameter ranges to work in SageMaker, values passed to `fit_args['hyperparameters']` must be represented as strings.
```python
nn_options = {
'num_epochs': "10",
'learning_rate': "ag.space.Real(1e-4, 1e-2, default=5e-4, log=True)",
'activation': "ag.space.Categorical('relu', 'softrelu', 'tanh')",
'layers': "ag.space.Categorical([100],[1000],[200,100],[300,200,100])",
'dropout_prob': "ag.space.Real(0.0, 0.5, default=0.1)"
}
gbm_options = {
'num_boost_round': "100",
'num_leaves': "ag.space.Int(lower=26, upper=66, default=36)"
}
model_hps = {'NN': nn_options, 'GBM': gbm_options}
fit_args = {
'label': 'y',
'presets': ['best_quality', 'optimize_for_deployment'],
'time_limits': 60*10,
'hyperparameters': model_hps,
'hyperparameter_tune': True,
'search_strategy': 'skopt'
}
hyperparameters = {
'fit_args': fit_args,
'feature_importance': True
}
```
**Note:** Your hyperparameter choices may affect the size of the model package, which could result in additional time taken to upload your model and complete training. Including `'optimize_for_deployment'` in the list of `fit_args['presets']` is recommended to greatly reduce upload times.
<br>
```
# Define required label and optional additional parameters
fit_args = {
'label': 'y',
# Adding 'best_quality' to presets list will result in better performance (but longer runtime)
'presets': ['optimize_for_deployment'],
}
# Pass fit_args to SageMaker estimator hyperparameters
hyperparameters = {
'fit_args': fit_args,
'feature_importance': True
}
```
## Train
For local training, set `train_instance_type` to `local`.
For non-local training the recommended instance type is `ml.m5.2xlarge`.
**Note:** Depending on how many underlying models are trained, `train_volume_size` may need to be increased so that they all fit on disk.
```
%%time
instance_type = 'ml.m5.2xlarge'
#instance_type = 'local'
ecr_image = f'{ecr_uri_prefix}/{training_algorithm_name}:latest'
estimator = Estimator(image_name=ecr_image,
role=role,
train_instance_count=1,
train_instance_type=instance_type,
hyperparameters=hyperparameters,
train_volume_size=100)
# Set inputs. Test data is optional, but requires a label column.
inputs = {'training': train_s3_path, 'testing': test_s3_path}
estimator.fit(inputs)
```
### Create Model
```
# Create predictor object
class AutoGluonTabularPredictor(RealTimePredictor):
def __init__(self, *args, **kwargs):
super().__init__(*args, content_type='text/csv',
serializer=csv_serializer,
deserializer=StringDeserializer(), **kwargs)
ecr_image = f'{ecr_uri_prefix}/{inference_algorithm_name}:latest'
if instance_type == 'local':
model = estimator.create_model(image=ecr_image, role=role)
else:
model_uri = os.path.join(estimator.output_path, estimator._current_job_name, "output", "model.tar.gz")
model = Model(model_uri, ecr_image, role=role, sagemaker_session=session, predictor_cls=AutoGluonTabularPredictor)
```
### Batch Transform
For local mode, either `s3://<bucket>/<prefix>/output/` or `file:///<absolute_local_path>` can be used as outputs.
By including the label column in the test data, you can also evaluate prediction performance (In this case, passing `test_s3_path` instead of `X_test_s3_path`).
```
output_path = f's3://{bucket}/{prefix}/output/'
# output_path = f'file://{os.getcwd()}'
transformer = model.transformer(instance_count=1,
instance_type=instance_type,
strategy='MultiRecord',
max_payload=6,
max_concurrent_transforms=1,
output_path=output_path)
transformer.transform(test_s3_path, content_type='text/csv', split_type='Line')
transformer.wait()
```
### Endpoint
##### Deploy remote or local endpoint
```
instance_type = 'ml.m5.2xlarge'
#instance_type = 'local'
predictor = model.deploy(initial_instance_count=1,
instance_type=instance_type)
```
##### Attach to endpoint (or reattach if kernel was restarted)
```
# Select standard or local session based on instance_type
if instance_type == 'local':
sess = local_session
else:
sess = session
# Attach to endpoint
predictor = AutoGluonTabularPredictor(predictor.endpoint, sagemaker_session=sess)
```
##### Predict on unlabeled test data
```
results = predictor.predict(X_test.to_csv(index=False)).splitlines()
# Check output
print(Counter(results))
```
##### Predict on data that includes label column
Prediction performance metrics will be printed to endpoint logs.
```
results = predictor.predict(test.to_csv(index=False)).splitlines()
# Check output
print(Counter(results))
```
##### Check that classification performance metrics match evaluation printed to endpoint logs as expected
```
y_results = np.array(results)
print("accuracy: {}".format(accuracy_score(y_true=y_test, y_pred=y_results)))
print(classification_report(y_true=y_test, y_pred=y_results, digits=6))
```
##### Clean up endpoint
```
predictor.delete_endpoint()
```
# Neural Networks
In the previous part of this exercise, you implemented multi-class logistic regression to recognize handwritten digits. However, logistic regression cannot form more complex hypotheses as it is only a linear classifier.<br><br>
In this part of the exercise, you will implement a neural network to recognize handwritten digits using the same training set as before. The <strong>neural network</strong> will be able to represent complex models that form <strong>non-linear hypotheses</strong>. For this week, you will be using parameters from <strong>a neural network that we have already trained</strong>. Your goal is to implement the <strong>feedforward propagation algorithm to use our weights for prediction</strong>. In next week’s exercise, you will write the backpropagation algorithm for learning the neural network parameters.<br><br>
The file <strong><em>ex3data1</em></strong> contains a training set.<br>
The structure of the dataset is described below:<br>
1. X array = <strong>400 columns describing the pixel values of 20*20 images in flattened format for 5000 samples</strong>
2. y array = <strong>Value of image (number between 0-9)</strong>
<br><br>
<strong>
Our assignment has these sections:
1. Visualizing the Data
1. Converting .mat to .csv
2. Loading Dataset and Trained Neural Network Weights
3. Ploting Data
2. Model Representation
3. Feedforward Propagation and Prediction
</strong>
A full description is provided in each section.
## 1. Visualizing the Dataset
Before starting on any task, it is often useful to understand the data by visualizing it.<br>
### 1.A Converting .mat to .csv
In this assignment, the instructor provided the training set and the trained neural network weights as .mat files, but we have to convert them to .csv to use them in Python.<br>
We are then ready to import the new .csv files into pandas DataFrames, preprocess them, and make them ready for the next steps.
```
# import libraries
import scipy.io
import numpy as np
data = scipy.io.loadmat("ex3data1")
weights = scipy.io.loadmat('ex3weights')
```
Now we extract the X and y variables from the .mat file and save them into .csv files for further usage. After running the code below <strong>you should see X.csv, y.csv, Theta1.csv and Theta2.csv files</strong> in your directory.
```
for i in data:
if '__' not in i and 'readme' not in i:
np.savetxt((i+".csv"),data[i],delimiter=',')
for i in weights:
if '__' not in i and 'readme' not in i:
np.savetxt((i+".csv"),weights[i],delimiter=',')
```
### 1.B Loading Dataset and Trained Neural Network Weights
First we import .csv files into pandas dataframes then save them into numpy arrays.<br><br>
There are <strong>5000 training examples</strong> in ex3data1.mat, where each training example is a <strong>20 pixel by 20 pixel <em>grayscale</em> image of the digit</strong>. Each pixel is represented by a floating point number indicating the <strong>grayscale intensity</strong> at that location. The 20 by 20 grid of pixels is <strong>"flatten" into a 400-dimensional vector</strong>. <strong>Each of these training examples becomes a single row in our data matrix X</strong>. This gives us a 5000 by 400 matrix X where every row is a training example for a handwritten digit image.<br><br>
The second part of the training set is a <strong>5000-dimensional vector y that contains labels</strong> for the training set.<br><br>
<strong>Notice: In dataset, the digit zero mapped to the value ten. Therefore, a "0" digit is labeled as "10", while the digits "1" to "9" are labeled as "1" to "9" in their natural order.<br></strong>
But this makes things harder, so we map the "10" back to its natural value of 0!
```
# import library
import pandas as pd
# saving .csv files to pandas dataframes
x_df = pd.read_csv('X.csv',names= np.arange(0,400))
y_df = pd.read_csv('y.csv',names=['label'])
# saving .csv files to pandas dataframes
Theta1_df = pd.read_csv('Theta1.csv',names = np.arange(0,401))
Theta2_df = pd.read_csv('Theta2.csv',names = np.arange(0,26))
# saving x_df and y_df into numpy arrays
x = x_df.iloc[:,:].values
y = y_df.iloc[:,:].values
m, n = x.shape
# bring back 0 to 0 !!!
y = y.reshape(m,)
y[y==10] = 0
y = y.reshape(m,1)
print('#{} Number of training samples, #{} features per sample'.format(m,n))
# saving Theta1_df and Theta2_df into numpy arrays
theta1 = Theta1_df.iloc[:,:].values
theta2 = Theta2_df.iloc[:,:].values
```
### 1.C Plotting Data
You will begin by visualizing a subset of the training set. The code below <strong>randomly selects 100 rows from X</strong>, maps each row to a 20 pixel by 20 pixel grayscale image, and displays the images together.<br>
After plotting, you should see an image like this:<img src='img/plot.jpg'>
```
import numpy as np
import matplotlib.pyplot as plt
import random
amount = 100
lines = 10
columns = 10
image = np.zeros((amount, 20, 20))
number = np.zeros(amount)
for i in range(amount):
rnd = random.randint(0,4999)
image[i] = x[rnd].reshape(20, 20)
y_temp = y.reshape(m,)
number[i] = y_temp[rnd]
fig = plt.figure(figsize=(8,8))
for i in range(amount):
ax = fig.add_subplot(lines, columns, 1 + i)
# Turn off tick labels
ax.set_yticklabels([])
ax.set_xticklabels([])
plt.imshow(image[i], cmap='binary')
plt.show()
print(number)
```
# 2. Model Representation
Our neural network is shown in below figure. It has <strong>3 layers an input layer, a hidden layer and an output layer</strong>. Recall that our <strong>inputs are pixel</strong> values of digit images. Since the images are of <strong>size 20×20</strong>, this gives us <strong>400 input layer units</strong> (excluding the extra bias unit which always outputs +1).<br><br><img src='img/nn.jpg'><br>
You have been provided with a set of <strong>network parameters (Θ<sup>(1)</sup>; Θ<sup>(2)</sup>)</strong> already trained by instructor.<br><br>
<strong>Theta1 and Theta2 The parameters have dimensions that are sized for a neural network with 25 units in the second layer and 10 output units (corresponding to the 10 digit classes).</strong>
```
print('theta1 shape = {}, theta2 shape = {}'.format(theta1.shape,theta2.shape))
```
The provided weights appear to be stored transposed, so we transpose them to match the layout of our neural network.
```
theta1 = theta1.transpose()
theta2 = theta2.transpose()
print('theta1 shape = {}, theta2 shape = {}'.format(theta1.shape,theta2.shape))
```
# 3. Feedforward Propagation and Prediction
Now you will implement feedforward propagation for the neural network.<br>
You should implement the <strong>feedforward computation</strong> that computes <strong>h<sub>θ</sub>(x<sup>(i)</sup>)</strong> for every example i and returns the associated predictions. Similar to the one-vs-all classification strategy, the prediction from the neural network will be the <strong>label</strong> that has the <strong>largest output <strong>h<sub>θ</sub>(x)<sub>k</sub></strong></strong>.
<strong>Implementation Note:</strong> The matrix X contains the examples in rows. When you complete the code, <strong>you will need to add the column of 1’s</strong> to the matrix. The matrices <strong>Theta1 and Theta2 contain the parameters for each unit in rows.</strong> Specifically, the first row of Theta1 corresponds to the first hidden unit in the second layer. <br>
You must get <strong>a<sup>(l)</sup></strong> as a column vector.<br><br>
You should see that the <strong>accuracy is about 97.5%</strong>.
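Before filling in the code, it helps to keep the matrix shapes in mind. The sketch below (comments only, assuming the transposed weights from the previous section and the bias column added in the next cell) traces one full forward pass:
```
# Shape bookkeeping for the feedforward pass (sketch):
# x (with bias column) : (5000, 401)
# theta1 (transposed)  : (401, 25)  ->  a2 = sigmoid(x @ theta1)   : (5000, 25)
# a2 (with bias column): (5000, 26)
# theta2 (transposed)  : (26, 10)   ->  a3 = sigmoid(a2 @ theta2)  : (5000, 10)
# prediction = argmax over the 10 class probabilities in each row
```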
```
# adding column of 1's to x
x = np.append(np.ones(shape=(m,1)),x,axis = 1)
```
<strong>h = hypothesis(x,theta)</strong> will compute the <strong>sigmoid</strong> function of <strong>θ<sup>T</sup>X</strong> and return a number <strong>h</strong> with <strong>0<=h<=1</strong>.<br>
You can use <a href='https://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.special.expit.html'>this</a> library for calculating sigmoid.
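As a quick optional check, SciPy's `expit` gives the same values as the manual formula defined below:
```
# Optional sanity check: scipy.special.expit is a vectorized sigmoid
from scipy.special import expit
import numpy as np
z = np.array([-2.0, 0.0, 2.0])
print(np.allclose(expit(z), 1/(1 + np.exp(-z))))  # True
```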
```
def sigmoid(z):
return 1/(1+np.exp(-z))
def lr_hypothesis(x,theta):
return np.dot(x,theta)
```
<strong>predict(theta1, theta2, x):</strong> outputs the predicted label of x given the trained weights of a neural network (theta1, theta2).
```
layers = 3
num_labels = 10
```
<strong>Because the original dataset mapped the digit 0 to the label "10", the trained weights follow that ordering, so we rotate the output columns one step to the right to predict the correct values.<br>
Recall that we remapped "10" back to 0 in our labels, but we cannot apply that remapping inside the network weights, so we perform this rotation on the final output probabilities.</strong>
```
def rotate_column(array):
array_ = np.zeros(shape=(m,num_labels))
temp = np.zeros(num_labels,)
temp= array[:,9]
array_[:,1:10] = array[:,0:9]
array_[:,0] = temp
return array_
def predict(theta1,theta2,x):
z2 = np.dot(x,theta1) # hidden layer
a2 = sigmoid(z2) # hidden layer
# adding column of 1's to a2
a2 = np.append(np.ones(shape=(m,1)),a2,axis = 1)
z3 = np.dot(a2,theta2)
a3 = sigmoid(z3)
    # mapping problem: rotate the output columns right one step
y_prob = rotate_column(a3)
# prediction on activation a2
y_pred = np.argmax(y_prob, axis=1).reshape(-1,1)
return y_pred
y_pred = predict(theta1,theta2,x)
y_pred.shape
```
Now we will compare our predicted results to the true ones with the <a href='http://scikit-learn.org/stable/modules/generated/sklearn.metrics.confusion_matrix.html'>confusion_matrix</a> function of the scikit-learn library.
```
from sklearn.metrics import confusion_matrix
# Function for accuracy
def acc(confusion_matrix):
t = 0
for i in range(num_labels):
t += confusion_matrix[i][i]
f = m-t
ac = t/(m)
return (t,f,ac)
#import library
from sklearn.metrics import confusion_matrix
cm_train = confusion_matrix(y.reshape(m,),y_pred.reshape(m,))
t,f,ac = acc(cm_train)
print('With #{} correct, #{} wrong ==========> accuracy = {}%'
.format(t,f,ac*100))
cm_train
```
```
# This cell is added by sphinx-gallery
!pip install mrsimulator --quiet
%matplotlib inline
import mrsimulator
print(f'You are using mrsimulator v{mrsimulator.__version__}')
```
# ²⁹Si 1D MAS spinning sideband (CSA)
After acquiring an NMR spectrum, we often require a least-squares analysis to
determine site populations and nuclear spin interaction parameters. Generally, this
comprises two steps:
- create a fitting model, and
- determine the model parameters that give the best fit to the spectrum.
Here, we will use the mrsimulator objects to create a fitting model, and use the
`LMFIT <https://lmfit.github.io/lmfit-py/>`_ library for performing the least-squares
fitting optimization.
In this example, we use a synthetic $^{29}\text{Si}$ NMR spectrum of cuspidine,
generated from the tensor parameters reported by Hansen `et al.` [#f1]_, to
demonstrate a simple fitting procedure.
We will begin by importing relevant modules and establishing figure size.
```
import csdmpy as cp
import matplotlib.pyplot as plt
from lmfit import Minimizer, Parameters
from mrsimulator import Simulator, SpinSystem, Site
from mrsimulator.methods import BlochDecaySpectrum
from mrsimulator import signal_processing as sp
from mrsimulator.utils import spectral_fitting as sf
```
## Import the dataset
Use the `csdmpy <https://csdmpy.readthedocs.io/en/stable/index.html>`_
module to load the synthetic dataset as a CSDM object.
```
file_ = "https://sandbox.zenodo.org/record/835664/files/synthetic_cuspidine_test.csdf?"
synthetic_experiment = cp.load(file_).real
# standard deviation of noise from the dataset
sigma = 0.03383338
# convert the dimension coordinates from Hz to ppm
synthetic_experiment.x[0].to("ppm", "nmr_frequency_ratio")
# Plot of the synthetic dataset.
plt.figure(figsize=(4.25, 3.0))
ax = plt.subplot(projection="csdm")
ax.plot(synthetic_experiment, "k", alpha=0.5)
ax.set_xlim(50, -200)
plt.grid()
plt.tight_layout()
plt.show()
```
## Create a fitting model
Before you can fit a simulation to an experiment, in this case, the synthetic dataset,
you will first need to create a fitting model. We will use the ``mrsimulator`` objects
as tools in creating a model for the least-squares fitting.
**Step 1:** Create initial guess sites and spin systems.
The initial guess is often based on some prior knowledge about the system under
investigation. For the current example, we know that Cuspidine is a crystalline silica
polymorph with one crystallographic Si site. Therefore, our initial guess model is a
single $^{29}\text{Si}$ site spin system. For non-linear fitting algorithms, as
a general recommendation, the initial guess model parameters should be a good starting
point for the algorithms to converge.
```
# the guess model comprising of a single site spin system
site = Site(
isotope="29Si",
isotropic_chemical_shift=-82.0, # in ppm,
shielding_symmetric={"zeta": -63, "eta": 0.4}, # zeta in ppm
)
spin_system = SpinSystem(
name="Si Site",
description="A 29Si site in cuspidine",
sites=[site], # from the above code
abundance=100,
)
```
**Step 2:** Create the method object.
The method should be the same as the one used
in the measurement. In this example, we use the `BlochDecaySpectrum` method. Note,
when creating the method object, the value of the method parameters must match the
respective values used in the experiment.
```
MAS = BlochDecaySpectrum(
channels=["29Si"],
magnetic_flux_density=7.1, # in T
rotor_frequency=780, # in Hz
spectral_dimensions=[
{
"count": 2048,
"spectral_width": 25000, # in Hz
"reference_offset": -5000, # in Hz
}
],
experiment=synthetic_experiment, # add the measurement to the method.
)
```
**Step 3:** Create the Simulator object, add the method and spin system objects, and
run the simulation.
```
sim = Simulator(spin_systems=[spin_system], methods=[MAS])
sim.run()
```
**Step 4:** Create a SignalProcessor class and apply post simulation processing.
```
processor = sp.SignalProcessor(
operations=[
sp.IFFT(), # inverse FFT to convert frequency based spectrum to time domain.
sp.apodization.Exponential(FWHM="200 Hz"), # apodization of time domain signal.
sp.FFT(), # forward FFT to convert time domain signal to frequency spectrum.
sp.Scale(factor=3), # scale the frequency spectrum.
]
)
processed_data = processor.apply_operations(data=sim.methods[0].simulation).real
```
**Step 5:** Plot the spectrum. We also plot the synthetic dataset for comparison.
```
plt.figure(figsize=(4.25, 3.0))
ax = plt.subplot(projection="csdm")
ax.plot(synthetic_experiment, "k", linewidth=1, label="Experiment")
ax.plot(processed_data, "r", alpha=0.75, linewidth=1, label="guess spectrum")
ax.set_xlim(50, -200)
plt.legend()
plt.grid()
plt.tight_layout()
plt.show()
```
## Setup a Least-squares minimization
Now that our model is ready, the next step is to set up a least-squares minimization.
You may use any optimization package of choice, here we show an application using
LMFIT. You may read more on the LMFIT
`documentation page <https://lmfit.github.io/lmfit-py/index.html>`_.
### Create fitting parameters
Next, you will need a list of parameters that will be used in the fit. The *LMFIT*
library provides a `Parameters <https://lmfit.github.io/lmfit-py/parameters.html>`_
class to create a list of parameters.
```
site1 = spin_system.sites[0]
params = Parameters()
params.add(name="iso", value=site1.isotropic_chemical_shift)
params.add(name="eta", value=site1.shielding_symmetric.eta, min=0, max=1)
params.add(name="zeta", value=site1.shielding_symmetric.zeta)
params.add(name="FWHM", value=processor.operations[1].FWHM)
params.add(name="factor", value=processor.operations[3].factor)
```
### Create a minimization function
Note, the above set of parameters does not know about the model. You will need to
set up a function that will
- update the parameters of the `Simulator` and `SignalProcessor` object based on the
LMFIT parameter updates,
- re-simulate the spectrum based on the updated values, and
- return the difference between the experiment and simulation.
```
def minimization_function(params, sim, processor, sigma=1):
values = params.valuesdict()
# the experiment data as a Numpy array
intensity = sim.methods[0].experiment.y[0].components[0].real
# Here, we update simulation parameters iso, eta, and zeta for the site object
site = sim.spin_systems[0].sites[0]
site.isotropic_chemical_shift = values["iso"]
site.shielding_symmetric.eta = values["eta"]
site.shielding_symmetric.zeta = values["zeta"]
# run the simulation
sim.run()
# update the SignalProcessor parameter and apply line broadening.
# update the scaling factor parameter at index 3 of operations list.
processor.operations[3].factor = values["factor"]
# update the exponential apodization FWHM parameter at index 1 of operations list.
processor.operations[1].FWHM = values["FWHM"]
# apply signal processing
processed_data = processor.apply_operations(sim.methods[0].simulation)
# return the difference vector.
diff = intensity - processed_data.y[0].components[0].real
return diff / sigma
```
<div class="alert alert-info"><h4>Note</h4><p>To automate the fitting process, we provide a function to parse the
``Simulator`` and ``SignalProcessor`` objects for parameters and construct an
*LMFIT* ``Parameters`` object. Similarly, a minimization function, analogous to
the above `minimization_function`, is also included in the *mrsimulator*
library. See the next example for usage instructions.</p></div>
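For reference, a sketch of that automated path using the `spectral_fitting` utilities imported above as `sf` is shown below; the helper names (`make_LMFIT_params`, `LMFIT_min_function`) are assumptions here, so check them against the version of mrsimulator you have installed:
```
# Sketch only (kept commented out so it does not interfere with the manual fit below):
# params_auto = sf.make_LMFIT_params(sim, processor)  # assumed helper: builds the LMFIT Parameters
# minner_auto = Minimizer(sf.LMFIT_min_function, params_auto, fcn_args=(sim, processor, sigma))
# result_auto = minner_auto.minimize()
```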
### Perform the least-squares minimization
With the synthetic dataset, simulation, and the initial guess parameters, we are ready
to perform the fit. To fit, we use the *LMFIT*
`Minimizer <https://lmfit.github.io/lmfit-py/fitting.html>`_ class.
```
minner = Minimizer(minimization_function, params, fcn_args=(sim, processor, sigma))
result = minner.minimize()
result
```
The plot of the fit, measurement and the residuals is shown below.
```
best_fit = sf.bestfit(sim, processor)[0]
residuals = sf.residuals(sim, processor)[0]
plt.figure(figsize=(4.25, 3.0))
ax = plt.subplot(projection="csdm")
ax.plot(synthetic_experiment, "k", linewidth=1, label="Experiment")
ax.plot(best_fit, "r", alpha=0.75, linewidth=1, label="Best Fit")
ax.plot(residuals, alpha=0.75, linewidth=1, label="Residuals")
ax.set_xlabel("Frequency / Hz")
ax.set_xlim(50, -200)
plt.legend()
plt.grid()
plt.tight_layout()
plt.show()
```
.. [#f1] Hansen, M. R., Jakobsen, H. J., Skibsted, J., $^{29}\text{Si}$
Chemical Shift Anisotropies in Calcium Silicates from High-Field
$^{29}\text{Si}$ MAS NMR Spectroscopy, Inorg. Chem. 2003,
**42**, *7*, 2368-2377.
`DOI: 10.1021/ic020647f <https://doi.org/10.1021/ic020647f>`_
```
import torch
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from statsmodels.discrete.discrete_model import Probit
import patsy
import matplotlib.pylab as plt
import tqdm
import itertools
ax = np.newaxis
```
Make sure you have installed the pygrpfe package. You can simply call `pip install pygrpfe` in the terminal or call the magic command `!pip install pygrpfe` from within the notebook. If you are using the binder link, then `pygrpfe` is already installed. You can import the package directly.
```
import pygrpfe as gfe
```
# A simple model of wage and participation
\begin{align*}
Y^*_{it} & = \alpha_i + \epsilon_{it} \\
D_{it} &= 1\big[ u(\alpha_i) \geq c(D_{it-1}) + V_{it} \big] \\
Y_{it} &= D_{it} Y^*_{it} \\
\end{align*}
where we use
$$u(\alpha) = \frac{e^{(1-\gamma) \alpha } -1}{1-\gamma}$$
and use as initial conditions $D_{i1} = 1\big[ u(\alpha_i) \geq c(1) + V_{i1} \big]$.
```
def dgp_simulate(ni,nt,gamma=2.0,eps_sd=1.0):
""" simulates according to the model """
alpha = np.random.normal(size=(ni))
eps = np.random.normal(size=(ni,nt))
v = np.random.normal(size=(ni,nt))
# non-censored outcome
W = alpha[:,ax] + eps*eps_sd
# utility
U = (np.exp( alpha * (1-gamma)) - 1)/(1-gamma)
U = U - U.mean()
# costs
C1 = -1; C0=0;
# binary decision
Y = np.ones((ni,nt))
Y[:,0] = U.squeeze() > C1 + v[:,0]
for t in range(1,nt):
Y[:,t] = U > C1*Y[:,t-1] + C0*(1-Y[:,t-1]) + v[:,t]
W = W * Y
return(W,Y)
```
# Estimating the model
We show the steps to estimating the model. Later on, we will run a Monte-Carlo Simulation.
We simulate from the DGP we have defined.
```
ni = 1000
nt = 50
Y,D = dgp_simulate(ni,nt,2.0)
```
## Step 1: grouping observations
We group individuals based on their outcomes. We consider as moments the average value of $Y$ and the average value of $D$. We give our gfe function the $t$-specific values so that it can compute the within-individual variation. This is a measure used to pick the number of groups.
The `group` function chooses the number of groups based on the rule described in the paper.
```
# we create the moments
# this has dimension ni x nt x nm
M_itm = np.stack([Y,D],axis=2)
# we use our sugar function to get the groups
G_i,_ = gfe.group(M_itm)
print("Number of groups = {:d}".format(G_i.max()))
```
We can plot the grouping:
```
dd = pd.DataFrame({'Y':Y.mean(1),'G':G_i,'D':D.mean(1)})
plt.scatter(dd.Y,dd.D,c=dd.G*1.0)
plt.show()
```
## Step 2: Estimate the likelihood model with group specific parameters
In the model we proposed, this second step is a probit, so we can directly use the Python probit routine with group dummies.
```
ni,nt = D.shape
# next we minimize using groups as FE
dd = pd.DataFrame({
'd': D[:,range(1,nt)].flatten(),
'dl':D[:,range(nt-1)].flatten(),
'gi':np.broadcast_to(G_i[:,ax], (ni,nt-1)).flatten()})
yv,Xv = patsy.dmatrices("d ~ 0 + dl + C(gi)", dd, return_type='matrix')
mod = Probit(dd['d'], Xv)
res = mod.fit(maxiter=2000,method='bfgs')
print("Estimated cost parameters = {:.3f}".format(res.params[-1]))
```
## Step 2 (alternative implementation): Pytorch and auto-diff
We next write down a likelihood that we want to optimize. Instead of using the Python routine for the Probit, we make use of automatic differentiation from PyTorch. This makes it easy to modify the estimating model to accommodate less standard likelihoods!
We create a class which initializes the parameters in the `__init__` method and computes the loss in the `loss` method. We will see later how we can use this to define a fixed effect estimator.
```
class GrpProbit:
# initialize parameters and data
def __init__(self,D,G_i):
# define parameters and tell PyTorch to keep track of gradients
self.alpha = torch.tensor( np.ones(G_i.max()+1), requires_grad=True)
self.cost = torch.tensor( np.random.normal(1), requires_grad=True)
self.params = [self.alpha,self.cost]
# predefine some components
ni,nt = D.shape
self.ni = ni
self.G_i = G_i
self.Dlag = torch.tensor(D[:,range(0,nt-1)])
self.Dout = torch.tensor(D[:,range(1,nt)])
self.N = torch.distributions.normal.Normal(0,1)
# define our loss function
def loss(self):
Id = self.alpha[self.G_i].reshape(self.ni,1) + self.cost * self.Dlag
lik_it = self.Dout * torch.log( torch.clamp( self.N.cdf( Id ), min=1e-7)) + \
(1-self.Dout)*torch.log( torch.clamp( self.N.cdf( -Id ), min=1e-7) )
return(- lik_it.mean())
# initialize the model with groups and estimate it
model = GrpProbit(D,G_i)
gfe.train(model)
print("Estimated cost parameters = {:.3f}".format(model.params[1]))
```
## Use PyTorch to estimate Fixed Effect version
Since PyTorch makes use of efficient automatic differentiation, we can use it with many variables. This allows us to give each individual their own group, effectively estimating a fixed-effect model.
```
model_fe = GrpProbit(D,np.arange(ni))
gfe.train(model_fe)
print("Estimated cost parameters FE = {:.3f}".format(model_fe.params[1]))
```
# Monte-Carlo
We finish by running a short Monte-Carlo exercise.
```
all = []
import itertools
ll = list(itertools.product(range(50), [10,20,30,40]))
for r, nt in tqdm.tqdm(ll):
ni = 1000
gamma =2.0
Y,D = dgp_simulate(ni,nt,gamma)
M_itm = np.stack([Y,D],axis=2)
    G_i,_ = gfe.group(M_itm,scale=True)
model_fe = GrpProbit(D,np.arange(ni))
gfe.train(model_fe)
model_gfe = GrpProbit(D,G_i)
gfe.train(model_gfe)
all.append({
'c_fe' : model_fe.params[1].item(),
'c_gfe': model_gfe.params[1].item(),
'ni':ni,
'nt':nt,
'gamma':gamma,
'ng':G_i.max()+1})
df = pd.DataFrame(all)
df2 = df.groupby(['ni','nt','gamma']).mean().reset_index()
plt.plot(df2['nt'],df2['c_gfe'],label="gfe",color="orange")
plt.plot(df2['nt'],df2['c_fe'],label="fe",color="red")
plt.axhline(1.0,label="true",color="black",linestyle=":")
plt.xlabel("T")
plt.legend()
plt.show()
df.groupby(['ni','nt','gamma']).mean()
```
# GDP and life expectancy
Richer countries can afford to invest more on healthcare, on work and road safety, and other measures that reduce mortality. On the other hand, richer countries may have less healthy lifestyles. Is there any relation between the wealth of a country and the life expectancy of its inhabitants?
The following analysis checks whether there is any correlation between the total gross domestic product (GDP) of a country in 2013 and the life expectancy of people born in that country in 2013.
## Getting the data
Two datasets of the World Bank are considered. One dataset, available at http://data.worldbank.org/indicator/NY.GDP.MKTP.CD, lists the GDP of the world's countries in current US dollars, for various years. The use of a common currency allows us to compare GDP values across countries. The other dataset, available at http://data.worldbank.org/indicator/SP.DYN.LE00.IN, lists the life expectancy of the world's countries. The datasets were downloaded as CSV files in March 2016.
```
import warnings
warnings.simplefilter('ignore', FutureWarning)
import pandas as pd
YEAR = 2018
GDP_INDICATOR = 'NY.GDP.MKTP.CD'
gdpReset = pd.read_csv('WB 2018 GDP.csv')
LIFE_INDICATOR = 'SP.DYN.LE00.IN_'
lifeReset = pd.read_csv('WB 2018 LE.csv')
lifeReset.head()
```
## Cleaning the data
Inspecting the data with `head()` and `tail()` shows that:
1. the first 34 rows are aggregated data, for the Arab World, the Caribbean small states, and other country groups used by the World Bank;
- GDP and life expectancy values are missing for some countries.
The data is therefore cleaned by:
1. removing the first 34 rows;
- removing rows with unavailable values.
```
gdpCountries = gdpReset.dropna()
lifeCountries = lifeReset.dropna()
```
## Transforming the data
The World Bank reports GDP in US dollars and cents. To make the data easier to read, the GDP is converted to millions of British pounds (the author's local currency) with the following auxiliary functions, using the average 2013 dollar-to-pound conversion rate provided by <http://www.ukforex.co.uk/forex-tools/historical-rate-tools/yearly-average-rates>.
```
def roundToMillions (value):
return round(value / 1000000)
def usdToGBP (usd):
return usd / 1.334801
GDP = 'GDP (£m)'
gdpCountries[GDP] = gdpCountries[GDP_INDICATOR].apply(usdToGBP).apply(roundToMillions)
gdpCountries.head()
COUNTRY = 'Country Name'
headings = [COUNTRY, GDP]
gdpClean = gdpCountries[headings]
gdpClean.head()
LIFE = 'Life expectancy (years)'
lifeCountries[LIFE] = lifeCountries[LIFE_INDICATOR].apply(round)
headings = [COUNTRY, LIFE]
lifeClean = lifeCountries[headings]
lifeClean.head()
gdpVsLife = pd.merge(gdpClean, lifeClean, on=COUNTRY, how='inner')
gdpVsLife.head()
```
## Calculating the correlation
To measure if the life expectancy and the GDP grow together, the Spearman rank correlation coefficient is used. It is a number from -1 (perfect inverse rank correlation: if one indicator increases, the other decreases) to 1 (perfect direct rank correlation: if one indicator increases, so does the other), with 0 meaning there is no rank correlation. A perfect correlation doesn't imply any cause-effect relation between the two indicators. A p-value below 0.05 means the correlation is statistically significant.
```
from scipy.stats import spearmanr
gdpColumn = gdpVsLife[GDP]
lifeColumn = gdpVsLife[LIFE]
(correlation, pValue) = spearmanr(gdpColumn, lifeColumn)
print('The correlation is', correlation)
if pValue < 0.05:
print('It is statistically significant.')
else:
print('It is not statistically significant.')
```
The value shows a direct correlation, i.e. richer countries tend to have longer life expectancy.
## Showing the data
Measures of correlation can be misleading, so it is best to see the overall picture with a scatterplot. The GDP axis uses a logarithmic scale to better display the vast range of GDP values, from a few million to several billion (million of million) pounds.
```
%matplotlib inline
gdpVsLife.plot(x=GDP, y=LIFE, kind='scatter', grid=True, logx=True, figsize=(10, 4))
```
The plot shows there is no clear correlation: there are rich countries with low life expectancy, poor countries with high expectancy, and countries with around 10 thousand (10⁴) million pounds GDP have almost the full range of values, from below 50 to over 80 years. Towards the lower and higher end of GDP, the variation diminishes. Above 40 thousand million pounds of GDP (3rd tick mark to the right of 10⁴), most countries have an expectancy of 70 years or more, whilst below that threshold most countries' life expectancy is below 70 years.
Comparing the 10 poorest countries and the 10 countries with the lowest life expectancy shows that total GDP is a rather crude measure. The population size should be taken into account for a more precise definition of what 'poor' and 'rich' means. Furthermore, looking at the countries below, droughts and internal conflicts may also play a role in life expectancy.
```
# the 10 countries with lowest GDP
gdpVsLife.sort_values(GDP).head(10)
# the 10 countries with lowest life expectancy
gdpVsLife.sort_values(LIFE).head(10)
```
## Conclusions
To sum up, there is no strong correlation between a country's wealth and the life expectancy of its inhabitants: there is often a wide variation of life expectancy for countries with similar GDP, countries with the lowest life expectancy are not the poorest countries, and countries with the highest expectancy are not the richest countries. Nevertheless there is some relationship, because the vast majority of countries with a life expectancy below 70 years is on the left half of the scatterplot.
# American Gut Project example
This notebook was created from a question we recieved from a user of MGnify.
The question was:
```
I am attempting to retrieve some of the MGnify results from samples that are part of the American Gut Project based on sample location.
However latitude and longitude do not appear to be searchable fields.
Is it possible to query these fields myself or to work with someone to retrieve a list of samples from a specific geographic range? I am interested in samples from people in Hawaii, so 20.5 - 20.7 and -154.0 - -161.2.
```
Let's decompose the question:
- project "American Gut Project"
- Metadata filtration using the geographic location of a sample.
- Get samples for Hawaii: 20.5 - 20.7 ; -154.0 - -161.2
Each sample in MGnify is obtained from [ENA](https://www.ebi.ac.uk/ena).
## Get samples
The first step is to obtain the samples using [ENA advanced search API](https://www.ebi.ac.uk/ena/browser/advanced-search).
```
from pandas import DataFrame
import requests
base_url = 'https://www.ebi.ac.uk/ena/portal/api/search'
# parameters
params = {
'result': 'sample',
'query': ' AND '.join([
'geo_box1(16.9175,-158.4687,21.6593,-152.7969)',
'description="*American Gut Project*"'
]),
'fields': ','.join(['secondary_sample_accession', 'lat', 'lon']),
'format': 'json',
}
response = requests.post(base_url, data=params)
agp_samples = response.json()
df = DataFrame(columns=('secondary_sample_accession', 'lat', 'lon'))
df.index.name = 'accession'
for s in agp_samples:
df.loc[s.get('accession')] = [
s.get('secondary_sample_accession'),
s.get('lat'),
s.get('lon')
]
df
```
Now we can use the MGnify (EMG) API to get the information.
```
#!/bin/usr/env python
import requests
import sys
def get_links(data):
return data["links"]["related"]
if __name__ == "__main__":
samples_url = "https://www.ebi.ac.uk/metagenomics/api/v1/samples/"
tsv = sys.argv[1] if len(sys.argv) == 2 else None
if not tsv:
print("The first arg is the tsv file")
exit(1)
tsv_fh = open(tsv, "r")
# header
next(tsv_fh)
for record in tsv_fh:
# get the runs first
# mgnify references the secondary accession
_, sec_acc, *_ = record.split("\t")
samples_res = requests.get(samples_url + sec_acc)
if samples_res.status_code == 404:
print(sec_acc + " not found in MGnify")
continue
# then the analysis for that run
runs_url = get_links(samples_res.json()["data"]["relationships"]["runs"])
if not runs_url:
print("No runs for sample " + sec_acc)
continue
print("Getting the runs: " + runs_url)
run_res = requests.get(runs_url)
if run_res.status_code != 200:
            print(runs_url + " failed", file=sys.stderr)
continue
# iterate over the sample runs
run_data = run_res.json()
# this script doesn't consider pagination, it's just an example
# there could be more that one page of runs
# use links -> next to get the next page
for run in run_data["data"]:
analyses_url = get_links(run["relationships"]["analyses"])
if not analyses_url:
print("No analyses for run " + run)
continue
analyses_res = requests.get(analyses_url)
if analyses_res.status_code != 200:
print(analyses_url + " failed", file=sys.stderr)
continue
# dump
print("Raw analyses data")
print(analyses_res.json())
print("=" * 30)
tsv_fh.close()
```
# LassoLars Regression with Robust Scaler
This code template is for regression analysis using LassoLars regression, a lasso model implemented with the LARS algorithm, combined with feature scaling using Robust Scaler in a pipeline.
### Required Packages
```
import warnings
import numpy as np
import pandas as pd
import seaborn as se
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import RobustScaler
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error
from sklearn.linear_model import LassoLars
warnings.filterwarnings('ignore')
```
### Initialization
Filepath of CSV file
```
#filepath
file_path= ""
```
List of features required for model training.
```
#x_values
features=[]
```
Target feature for prediction.
```
#y_value
target=''
```
### Data Fetching
Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.
We will use the pandas library to read the CSV file from its storage path, and the head function to display the first few rows.
```
df=pd.read_csv(file_path)
df.head()
```
### Feature Selections
It is the process of reducing the number of input variables when developing a predictive model, both to lower the computational cost of modelling and, in some cases, to improve the performance of the model.
We will assign all the required input features to X and target/outcome to Y.
```
X=df[features]
Y=df[target]
```
### Data Preprocessing
Since most of the machine learning models in the scikit-learn library don't handle string categorical data or null values, we have to explicitly remove or replace them. The snippet below defines functions that fill null values where they exist and encode string categorical columns as dummy (indicator) variables.
```
def NullClearner(df):
if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
df.fillna(df.mean(),inplace=True)
return df
elif(isinstance(df, pd.Series)):
df.fillna(df.mode()[0],inplace=True)
return df
else:return df
def EncodeX(df):
return pd.get_dummies(df)
```
Calling preprocessing functions on the feature and target set.
```
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=NullClearner(Y)
X.head()
```
#### Correlation Map
In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
```
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
```
### Data Splitting
The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
```
x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123)
```
### Model
LassoLars is a lasso model implemented using the LARS algorithm, and unlike the implementation based on coordinate descent, this yields the exact solution, which is piecewise linear as a function of the norm of its coefficients.
### Tuning parameters
> **fit_intercept** -> whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations
> **alpha** -> Constant that multiplies the penalty term. Defaults to 1.0. alpha = 0 is equivalent to an ordinary least square, solved by LinearRegression. For numerical reasons, using alpha = 0 with the LassoLars object is not advised and you should prefer the LinearRegression object.
> **eps** -> The machine-precision regularization in the computation of the Cholesky diagonal factors. Increase this for very ill-conditioned systems. Unlike the tol parameter in some iterative optimization-based algorithms, this parameter does not control the tolerance of the optimization.
> **max_iter** -> Maximum number of iterations to perform.
> **positive** -> Restrict coefficients to be >= 0. Be aware that you might want to remove fit_intercept which is set True by default. Under the positive restriction the model coefficients will not converge to the ordinary-least-squares solution for small values of alpha. Only coefficients up to the smallest alpha value (alphas_[alphas_ > 0.].min() when fit_path=True) reached by the stepwise Lars-Lasso algorithm are typically in congruence with the solution of the coordinate descent Lasso estimator.
> **precompute** -> Whether to use a precomputed Gram matrix to speed up calculations.
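For illustration, a hedged sketch of how these parameters could be passed explicitly inside the same scaler-plus-model pipeline built below (the values are arbitrary placeholders, not tuned for this dataset):
```
# Example only: LassoLars with explicit (untuned) hyperparameters in the pipeline
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import RobustScaler
from sklearn.linear_model import LassoLars
example_model = make_pipeline(RobustScaler(), LassoLars(alpha=0.1, max_iter=500, fit_intercept=True))
```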
### Feature Scaling
Robust Scaler scale features using statistics that are robust to outliers.
This Scaler removes the median and scales the data according to the quantile range (defaults to IQR: Interquartile Range). The IQR is the range between the 1st quartile (25th quantile) and the 3rd quartile (75th quantile).<br>
For more information... [click here](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.RobustScaler.html)
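As a small illustration of what this scaler does (a toy example, independent of the dataset used here):
```
# Toy example: RobustScaler subtracts the median and divides by the IQR, column by column
import numpy as np
from sklearn.preprocessing import RobustScaler
toy = np.array([[1.0], [2.0], [3.0], [4.0], [100.0]])   # 100 is an outlier
print(RobustScaler().fit_transform(toy).ravel())
# median = 3, IQR = 4 - 2 = 2, so values become (x - 3) / 2; the outlier no longer drives the scale
```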
```
model=make_pipeline(RobustScaler(),LassoLars())
model.fit(x_train,y_train)
```
#### Model Accuracy
We will use the trained model to make predictions on the test set, then use the predicted values to measure the accuracy of our model.
score: The score function returns the coefficient of determination R2 of the prediction.
```
print("Accuracy score {:.2f} %\n".format(model.score(x_test,y_test)*100))
```
> **r2_score**: The **r2_score** function computes the proportion of the variability in the target that is explained by our model.
> **mae**: The **mean absolute error** function calculates the average absolute distance between the real and the predicted values.
> **mse**: The **mean squared error** function averages the squared errors, penalizing the model more heavily for large errors.
```
y_pred=model.predict(x_test)
print("R2 Score: {:.2f} %".format(r2_score(y_test,y_pred)*100))
print("Mean Absolute Error {:.2f}".format(mean_absolute_error(y_test,y_pred)))
print("Mean Squared Error {:.2f}".format(mean_squared_error(y_test,y_pred)))
```
#### Prediction Plot
First, we plot the actual target values for a sample of test records, with the record number on the x-axis and the target value on the y-axis.
We then overlay the model's predictions for the same records so the two curves can be compared directly.
```
plt.figure(figsize=(14,10))
plt.plot(range(20),y_test[0:20], color = "green")
plt.plot(range(20),model.predict(x_test[0:20]), color = "red")
plt.legend(["Actual","prediction"])
plt.title("Predicted vs True Value")
plt.xlabel("Record number")
plt.ylabel(target)
plt.show()
```
#### Creator: Anu Rithiga , Github: [Profile](https://github.com/iamgrootsh7)
# Chapter 2: Differentiation and Integration
## 2.1 Functions
```
# Import the required libraries
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
# For PDF output
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('png', 'pdf')
def f(x):
return x**2 +1
f(1)
f(2)
```
### Figure 2-2 Plot of the points (x, f(x)) and the graph of y=f(x)
```
x = np.linspace(-3, 3, 601)
y = f(x)
x1 = np.linspace(-3, 3, 7)
y1 = f(x1)
plt.figure(figsize=(6,6))
plt.ylim(-2,10)
plt.plot([-3,3],[0,0],c='k')
plt.plot([0,0],[-2,10],c='k')
plt.scatter(x1,y1,c='k',s=50)
plt.grid()
plt.xlabel('x',fontsize=14)
plt.ylabel('y',fontsize=14)
plt.show()
x2 = np.linspace(-3, 3, 31)
y2 = f(x2)
plt.figure(figsize=(6,6))
plt.ylim(-2,10)
plt.plot([-3,3],[0,0],c='k')
plt.plot([0,0],[-2,10],c='k')
plt.scatter(x2,y2,c='k',s=50)
plt.grid()
plt.xlabel('x',fontsize=14)
plt.ylabel('y',fontsize=14)
plt.show()
plt.figure(figsize=(6,6))
plt.plot(x,y,c='k')
plt.ylim(-2,10)
plt.plot([-3,3],[0,0],c='k')
plt.plot([0,0],[-2,10],c='k')
plt.scatter([1,2],[2,5],c='k',s=50)
plt.grid()
plt.xlabel('x',fontsize=14)
plt.ylabel('y',fontsize=14)
plt.show()
```
## 2.2 Composite Functions and Inverse Functions
### Figure 2-6 Graph of an inverse function
```
def f(x):
return(x**2 + 1)
def g(x):
return(np.sqrt(x - 1))
xx1 = np.linspace(0.0, 4.0, 200)
xx2 = np.linspace(1.0, 4.0, 200)
yy1 = f(xx1)
yy2 = g(xx2)
plt.figure(figsize=(6,6))
plt.xlabel('$x$',fontsize=14)
plt.ylabel('$y$',fontsize=14)
plt.ylim(-2.0, 4.0)
plt.xlim(-2.0, 4.0)
plt.grid()
plt.plot(xx1,yy1, linestyle='-', c='k', label='$y=x^2+1$')
plt.plot(xx2,yy2, linestyle='-.', c='k', label='$y=\sqrt{x-1}$')
plt.plot([-2,4],[-2,4], color='black')
plt.plot([-2,4],[0,0], color='black')
plt.plot([0,0],[-2,4],color='black')
plt.legend(fontsize=14)
plt.show()
```
## 2.3 Differentiation and Limits
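The figures in this section zoom in on a graph to show that, near a point, a differentiable function looks like a straight line. The slope of that line is the derivative, defined by the limit
$$f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}$$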
### Figure 2-7 Zooming in on the graph of a function
```
from matplotlib import pyplot as plt
import numpy as np
def f(x):
return(x**3 - x)
delta = 2.0
x = np.linspace(0.5-delta, 0.5+delta, 200)
y = f(x)
fig = plt.figure(figsize=(6,6))
plt.ylim(-3.0/8.0-delta, -3.0/8.0+delta)
plt.xlim(0.5-delta, 0.5+delta)
plt.plot(x, y, 'b-', lw=1, c='k')
plt.scatter([0.5], [-3.0/8.0])
plt.xlabel('x',fontsize=14)
plt.ylabel('y',fontsize=14)
plt.grid()
plt.title('delta = %.4f' % delta, fontsize=14)
plt.show()
delta = 0.2
x = np.linspace(0.5-delta, 0.5+delta, 200)
y = f(x)
fig = plt.figure(figsize=(6,6))
plt.ylim(-3.0/8.0-delta, -3.0/8.0+delta)
plt.xlim(0.5-delta, 0.5+delta)
plt.plot(x, y, 'b-', lw=1, c='k')
plt.scatter([0.5], [-3.0/8.0])
plt.xlabel('x',fontsize=14)
plt.ylabel('y',fontsize=14)
plt.grid()
plt.title('delta = %.4f' % delta, fontsize=14)
plt.show()
delta = 0.01
x = np.linspace(0.5-delta, 0.5+delta, 200)
y = f(x)
fig = plt.figure(figsize=(6,6))
plt.ylim(-3.0/8.0-delta, -3.0/8.0+delta)
plt.xlim(0.5-delta, 0.5+delta)
plt.plot(x, y, 'b-', lw=1, c='k')
plt.scatter(0.5, -3.0/8.0)
plt.xlabel('x',fontsize=14)
plt.ylabel('y',fontsize=14)
plt.grid()
plt.title('delta = %.4f' % delta, fontsize=14)
plt.show()
```
### Figure 2-8 Slope of the line connecting two points on the graph of a function
```
delta = 2.0
x = np.linspace(0.5-delta, 0.5+delta, 200)
x1 = 0.6
x2 = 1.0
y = f(x)
fig = plt.figure(figsize=(6,6))
plt.ylim(-1, 0.5)
plt.xlim(0, 1.5)
plt.plot(x, y, 'b-', lw=1, c='k')
plt.scatter([x1, x2], [f(x1), f(x2)], c='k', lw=1)
plt.plot([x1, x2], [f(x1), f(x2)], c='k', lw=1)
plt.plot([x1, x2, x2], [f(x1), f(x1), f(x2)], c='k', lw=1)
plt.tick_params(labelbottom=False, labelleft=False, labelright=False, labeltop=False)
plt.tick_params(color='white')
plt.show()
```
### Figure 2-10 Equation of the tangent line
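As a quick check of the tangent line used in the code below, for $f(x) = x^2 - 4x$ at $x = 1$:
$$f'(x) = 2x - 4, \qquad f'(1) = -2, \qquad y = f(1) + f'(1)(x - 1) = -3 - 2(x - 1) = -2x - 1,$$
which is exactly the line $g(x) = -2x - 1$ drawn in the plot.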
```
def f(x):
return(x**2 - 4*x)
def g(x):
return(-2*x -1)
x = np.linspace(-2, 6, 500)
fig = plt.figure(figsize=(6,6))
plt.scatter([1],[-3],c='k')
plt.plot(x, f(x), 'b-', lw=1, c='k')
plt.plot(x, g(x), 'b-', lw=1, c='b')
plt.plot([x.min(), x.max()], [0, 0], lw=2, c='k')
plt.plot([0, 0], [g(x).min(), f(x).max()], lw=2, c='k')
plt.grid(lw=2)
plt.tick_params(labelbottom=False, labelleft=False, labelright=False, labeltop=False)
plt.tick_params(color='white')
plt.xlabel('X')
plt.show()
```
## 2.4 Local Maxima and Minima
### Figure 2-11 Graph of $y = x^3 - 3x$ and its local maximum and minimum
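A quick check of where the extrema in this figure occur: setting the derivative of $y = x^3 - 3x$ to zero gives
$$y' = 3x^2 - 3 = 0 \;\Rightarrow\; x = \pm 1,$$
with a local maximum $y = 2$ at $x = -1$ and a local minimum $y = -2$ at $x = 1$.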
```
def f1(x):
return(x**3 - 3*x)
x = np.linspace(-3, 3, 500)
y = f1(x)
fig = plt.figure(figsize=(6,6))
plt.ylim(-4, 4)
plt.xlim(-3, 3)
plt.plot(x, y, 'b-', lw=1, c='k')
plt.plot([0,0],[-4,4],c='k')
plt.plot([-3,3],[0,0],c='k')
plt.grid()
plt.show()
```
### Figure 2-12 An example that is neither a local maximum nor a local minimum (graph of $y = x^3$)
```
def f2(x):
return(x**3)
x = np.linspace(-3, 3, 500)
y = f2(x)
fig = plt.figure(figsize=(6,6))
plt.ylim(-4, 4)
plt.xlim(-3, 3)
plt.plot(x, y, 'b-', lw=1, c='k')
plt.plot([0,0],[-4,4],c='k')
plt.plot([-3,3],[0,0],c='k')
plt.grid()
plt.show()
```
## 2.7 Differentiation of Composite Functions
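For reference, the two differentiation rules this section relies on are the chain rule for composite functions and the rule for inverse functions:
$$\{f(g(x))\}' = f'(g(x))\,g'(x), \qquad \frac{dx}{dy} = \frac{1}{dy/dx}$$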
### Figure 2-14 Derivative of an inverse function
```
# Derivative of an inverse function
def f(x):
return(x**2 + 1)
def g(x):
return(np.sqrt(x - 1))
xx1 = np.linspace(0.0, 4.0, 200)
xx2 = np.linspace(1.0, 4.0, 200)
yy1 = f(xx1)
yy2 = g(xx2)
plt.figure(figsize=(6,6))
plt.xlabel('$x$',fontsize=14)
plt.ylabel('$y$',fontsize=14)
plt.ylim(-2.0, 4.0)
plt.xlim(-2.0, 4.0)
plt.grid()
plt.plot(xx1,yy1, linestyle='-', color='blue')
plt.plot(xx2,yy2, linestyle='-', color='blue')
plt.plot([-2,4],[-2,4], color='black')
plt.plot([-2,4],[0,0], color='black')
plt.plot([0,0],[-2,4],color='black')
plt.show()
```
## 2.9 Integration
### Figure 2-15 Relationship between the area function S(x) and f(x)
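The relationship illustrated here is that if $S(x)$ denotes the area under $y = f(t)$ from a fixed point $a$ up to $x$, then differentiating the area function recovers the original function:
$$S(x) = \int_a^x f(t)\,dt \quad\Rightarrow\quad S'(x) = f(x)$$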
```
def f(x) :
return x**2 + 1
xx = np.linspace(-4.0, 4.0, 200)
yy = f(xx)
plt.figure(figsize=(6,6))
plt.xlim(-2,2)
plt.ylim(-1,4)
plt.plot(xx, yy)
plt.plot([-2,2],[0,0],c='k',lw=1)
plt.plot([0,0],[-1,4],c='k',lw=1)
plt.plot([0,0],[0,f(0)],c='b')
plt.plot([1,1],[0,f(1)],c='b')
plt.plot([1.5,1.5],[0,f(1.5)],c='b')
plt.plot([1,1.5],[f(1),f(1)],c='b')
plt.tick_params(labelbottom=False, labelleft=False, labelright=False, labeltop=False)
plt.tick_params(color='white')
plt.show()
```
### Figure 2-16 Area under a graph and the definite integral
```
plt.figure(figsize=(6,6))
plt.xlim(-2,2)
plt.ylim(-1,4)
plt.plot(xx, yy)
plt.plot([-2,2],[0,0],c='k',lw=1)
plt.plot([0,0],[-1,4],c='k',lw=1)
plt.plot([0,0],[0,f(0)],c='b')
plt.plot([1,1],[0,f(1)],c='b')
plt.plot([1.5,1.5],[0,f(1.5)],c='b')
plt.tick_params(labelbottom=False, labelleft=False, labelright=False, labeltop=False)
plt.tick_params(color='white')
plt.show()
```
### Figure 2-17 Relationship between integration and area
```
def f(x) :
return x**2 + 1
x = np.linspace(-1.0, 2.0, 200)
y = f(x)
N = 10
xx = np.linspace(0.5, 1.5, N+1)
yy = f(xx)
print(xx)
plt.figure(figsize=(6,6))
plt.xlim(-1,2)
plt.ylim(-1,4)
plt.plot(x, y)
plt.plot([-1,2],[0,0],c='k',lw=2)
plt.plot([0,0],[-1,4],c='k',lw=2)
plt.plot([0.5,0.5],[0,f(0.5)],c='b')
plt.plot([1.5,1.5],[0,f(1.5)],c='b')
plt.bar(xx[:-1], yy[:-1], align='edge', width=1/N*0.9)
plt.tick_params(labelbottom=False, labelleft=False, labelright=False, labeltop=False)
plt.tick_params(color='white')
plt.grid()
plt.show()
```
# ORF recognition by CNN
Compare to ORF_CNN_101.
Use 2-layer CNN.
Run on Mac.
```
PC_SEQUENCES=20000 # how many protein-coding sequences
NC_SEQUENCES=20000 # how many non-coding sequences
PC_TESTS=1000
NC_TESTS=1000
BASES=1000 # how long is each sequence
ALPHABET=4 # how many different letters are possible
INPUT_SHAPE_2D = (BASES,ALPHABET,1) # Conv2D needs 3D inputs
INPUT_SHAPE = (BASES,ALPHABET) # Conv1D needs 2D inputs
FILTERS = 32 # how many different patterns the model looks for
NEURONS = 16
WIDTH = 3 # how wide each pattern is, in bases
STRIDE_2D = (1,1) # For Conv2D how far in each direction
STRIDE = 1 # For Conv1D, how far between pattern matches, in bases
EPOCHS=10 # how many times to train on all the data
SPLITS=5 # SPLITS=3 means train on 2/3 and validate on 1/3
FOLDS=5 # train the model this many times (range 1 to SPLITS)
import sys
try:
from google.colab import drive
IN_COLAB = True
print("On Google CoLab, mount cloud-local file, get our code from GitHub.")
PATH='/content/drive/'
#drive.mount(PATH,force_remount=True) # hardly ever need this
#drive.mount(PATH) # Google will require login credentials
DATAPATH=PATH+'My Drive/data/' # must end in "/"
import requests
r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/RNA_gen.py')
with open('RNA_gen.py', 'w') as f:
f.write(r.text)
from RNA_gen import *
r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/RNA_describe.py')
with open('RNA_describe.py', 'w') as f:
f.write(r.text)
from RNA_describe import *
r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/RNA_prep.py')
with open('RNA_prep.py', 'w') as f:
f.write(r.text)
from RNA_prep import *
except:
print("CoLab not working. On my PC, use relative paths.")
IN_COLAB = False
DATAPATH='data/' # must end in "/"
sys.path.append("..") # append parent dir in order to use sibling dirs
from SimTools.RNA_gen import *
from SimTools.RNA_describe import *
from SimTools.RNA_prep import *
MODELPATH="BestModel" # saved on cloud instance and lost after logout
#MODELPATH=DATAPATH+MODELPATH # saved on Google Drive but requires login
if not assert_imported_RNA_gen():
print("ERROR: Cannot use RNA_gen.")
if not assert_imported_RNA_prep():
print("ERROR: Cannot use RNA_prep.")
from os import listdir
import time # datetime
import csv
from zipfile import ZipFile
import numpy as np
import pandas as pd
from scipy import stats # mode
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from keras.models import Sequential
from keras.layers import Dense,Embedding
from keras.layers import Conv1D,Conv2D
from keras.layers import Flatten,MaxPooling1D,MaxPooling2D
from keras.losses import BinaryCrossentropy
# tf.keras.losses.BinaryCrossentropy
import matplotlib.pyplot as plt
from matplotlib import colors
mycmap = colors.ListedColormap(['red','blue']) # list color for label 0 then 1
np.set_printoptions(precision=2)
t = time.time()
time.strftime('%Y-%m-%d %H:%M:%S %Z', time.localtime(t))
# Use code from our SimTools library.
def make_generators(seq_len):
pcgen = Collection_Generator()
pcgen.get_len_oracle().set_mean(seq_len)
pcgen.set_seq_oracle(Transcript_Oracle())
ncgen = Collection_Generator()
ncgen.get_len_oracle().set_mean(seq_len)
return pcgen,ncgen
pc_sim,nc_sim = make_generators(BASES)
pc_train = pc_sim.get_sequences(PC_SEQUENCES)
nc_train = nc_sim.get_sequences(NC_SEQUENCES)
print("Train on",len(pc_train),"PC seqs")
print("Train on",len(nc_train),"NC seqs")
# Use code from our LearnTools library.
X,y = prepare_inputs_len_x_alphabet(pc_train,nc_train,ALPHABET) # shuffles
print("Data ready.")
def make_DNN():
print("make_DNN")
print("input shape:",INPUT_SHAPE)
dnn = Sequential()
#dnn.add(Embedding(input_dim=INPUT_SHAPE,output_dim=INPUT_SHAPE))
dnn.add(Conv1D(filters=FILTERS,kernel_size=WIDTH,strides=STRIDE,padding="same",
input_shape=INPUT_SHAPE))
dnn.add(Conv1D(filters=FILTERS,kernel_size=WIDTH,strides=STRIDE,padding="same"))
dnn.add(MaxPooling1D())
dnn.add(Conv1D(filters=FILTERS,kernel_size=WIDTH,strides=STRIDE,padding="same"))
dnn.add(Conv1D(filters=FILTERS,kernel_size=WIDTH,strides=STRIDE,padding="same"))
dnn.add(MaxPooling1D())
dnn.add(Flatten())
dnn.add(Dense(NEURONS,activation="sigmoid",dtype=np.float32))
dnn.add(Dense(1,activation="sigmoid",dtype=np.float32))
dnn.compile(optimizer='adam',
loss=BinaryCrossentropy(from_logits=False),
metrics=['accuracy']) # add to default metrics=loss
dnn.build(input_shape=INPUT_SHAPE)
#ln_rate = tf.keras.optimizers.Adam(learning_rate = LN_RATE)
#bc=tf.keras.losses.BinaryCrossentropy(from_logits=False)
#model.compile(loss=bc, optimizer=ln_rate, metrics=["accuracy"])
return dnn
model = make_DNN()
print(model.summary())
from keras.callbacks import ModelCheckpoint
def do_cross_validation(X,y):
cv_scores = []
fold=0
mycallbacks = [ModelCheckpoint(
filepath=MODELPATH, save_best_only=True,
monitor='val_accuracy', mode='max')]
splitter = KFold(n_splits=SPLITS) # this does not shuffle
for train_index,valid_index in splitter.split(X):
if fold < FOLDS:
fold += 1
X_train=X[train_index] # inputs for training
y_train=y[train_index] # labels for training
X_valid=X[valid_index] # inputs for validation
y_valid=y[valid_index] # labels for validation
print("MODEL")
# Call constructor on each CV. Else, continually improves the same model.
            model = make_DNN()
print("FIT") # model.fit() implements learning
start_time=time.time()
history=model.fit(X_train, y_train,
epochs=EPOCHS,
verbose=1, # ascii art while learning
callbacks=mycallbacks, # called at end of each epoch
validation_data=(X_valid,y_valid))
end_time=time.time()
elapsed_time=(end_time-start_time)
print("Fold %d, %d epochs, %d sec"%(fold,EPOCHS,elapsed_time))
# print(history.history.keys()) # all these keys will be shown in figure
pd.DataFrame(history.history).plot(figsize=(8,5))
plt.grid(True)
plt.gca().set_ylim(0,1) # any losses > 1 will be off the scale
plt.show()
do_cross_validation(X,y)
from keras.models import load_model
pc_test = pc_sim.get_sequences(PC_TESTS)
nc_test = nc_sim.get_sequences(NC_TESTS)
X,y = prepare_inputs_len_x_alphabet(pc_test,nc_test,ALPHABET)
best_model=load_model(MODELPATH)
scores = best_model.evaluate(X, y, verbose=0)
print("The best model parameters were saved during cross-validation.")
print("Best was defined as maximum validation accuracy at end of any epoch.")
print("Now re-load the best model and test it on previously unseen data.")
print("Test on",len(pc_test),"PC seqs")
print("Test on",len(nc_test),"NC seqs")
print("%s: %.2f%%" % (best_model.metrics_names[1], scores[1]*100))
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
ns_probs = [0 for _ in range(len(y))]
bm_probs = best_model.predict(X)
ns_auc = roc_auc_score(y, ns_probs)
bm_auc = roc_auc_score(y, bm_probs)
ns_fpr, ns_tpr, _ = roc_curve(y, ns_probs)
bm_fpr, bm_tpr, _ = roc_curve(y, bm_probs)
plt.plot(ns_fpr, ns_tpr, linestyle='--', label='Guess, auc=%.4f'%ns_auc)
plt.plot(bm_fpr, bm_tpr, marker='.', label='Model, auc=%.4f'%bm_auc)
plt.title('ROC')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend()
plt.show()
print("%s: %.2f%%" %('AUC',bm_auc))
```
| true | code | 0.580293 | null | null | null | null |
|
# Use BlackJAX with Numpyro
BlackJAX can take any log-probability function as long as it is compatible with JAX's JIT. In this notebook we show how we can use Numpyro as a modeling language and BlackJAX as an inference library.
We reproduce the Eight Schools example from the [Numpyro documentation](https://github.com/pyro-ppl/numpyro) (all credit for the model goes to the Numpyro team). For this notebook to run you will need to install Numpyro:
```bash
pip install numpyro
```
```
import jax
import numpy as np
import numpyro
import numpyro.distributions as dist
from numpyro.infer.reparam import TransformReparam
from numpyro.infer.util import initialize_model
import blackjax
num_warmup = 1000
# We can use this notebook for simple benchmarking by setting
# below to True and run from Terminal.
# $ipython examples/use_with_numpyro.ipynb
RUN_BENCHMARK = False
if RUN_BENCHMARK:
num_sample = 5_000_000
print(f"Benchmark with {num_warmup} warmup steps and {num_sample} sampling steps.")
else:
num_sample = 10_000
```
## Data
```
# Data of the Eight Schools Model
J = 8
y = np.array([28.0, 8.0, -3.0, 7.0, -1.0, 1.0, 18.0, 12.0])
sigma = np.array([15.0, 10.0, 16.0, 11.0, 9.0, 11.0, 10.0, 18.0])
```
## Model
We use the non-centered version of the model described towards the end of the README on Numpyro's repository:
```
# Eight Schools example - Non-centered Reparametrization
def eight_schools_noncentered(J, sigma, y=None):
mu = numpyro.sample("mu", dist.Normal(0, 5))
tau = numpyro.sample("tau", dist.HalfCauchy(5))
with numpyro.plate("J", J):
with numpyro.handlers.reparam(config={"theta": TransformReparam()}):
theta = numpyro.sample(
"theta",
dist.TransformedDistribution(
dist.Normal(0.0, 1.0), dist.transforms.AffineTransform(mu, tau)
),
)
numpyro.sample("obs", dist.Normal(theta, sigma), obs=y)
```
We need to translate the model into a log-probability function that will be used by BlackJAX to perform inference. For that we use the `initialize_model` function in Numpyro's internals. We will also use the initial position it returns:
```
rng_key = jax.random.PRNGKey(0)
init_params, potential_fn_gen, *_ = initialize_model(
rng_key,
eight_schools_noncentered,
model_args=(J, sigma, y),
dynamic_args=True,
)
```
Now we create the potential using the `potential_fn_gen` provided by Numpyro and initialize the NUTS state with BlackJAX:
```
if RUN_BENCHMARK:
print("\nBlackjax:")
print("-> Running warmup.")
```
We now run the window adaptation in BlackJAX:
```
%%time
initial_position = init_params.z
logprob = lambda position: -potential_fn_gen(J, sigma, y)(position)
adapt = blackjax.window_adaptation(
blackjax.nuts, logprob, num_warmup, target_acceptance_rate=0.8
)
last_state, kernel, _ = adapt.run(rng_key, initial_position)
```
Let us now perform inference using the previously computed step size and inverse mass matrix. We also time the sampling to give you an idea of how fast BlackJAX can be on simple models:
```
if RUN_BENCHMARK:
print("-> Running sampling.")
%%time
def inference_loop(rng_key, kernel, initial_state, num_samples):
@jax.jit
def one_step(state, rng_key):
state, info = kernel(rng_key, state)
return state, (state, info)
keys = jax.random.split(rng_key, num_samples)
_, (states, infos) = jax.lax.scan(one_step, initial_state, keys)
return states, (
infos.acceptance_probability,
infos.is_divergent,
infos.integration_steps,
)
# Sample from the posterior distribution
states, infos = inference_loop(rng_key, kernel, last_state, num_sample)
_ = states.position["mu"].block_until_ready()
```
Let us compute the average acceptance probability and check the number of divergences (to make sure that the model sampled correctly, and that the sampling time is not a result of a majority of divergent transitions):
```
acceptance_rate = np.mean(infos[0])
num_divergent = np.mean(infos[1])
print(f"\nAcceptance rate: {acceptance_rate:.2f}")
print(f"{100*num_divergent:.2f}% divergent transitions")
```
Let us now plot the distribution of the parameters. Note that since we use a transformed variable, Numpyro does not output the school treatment effect directly:
```
if not RUN_BENCHMARK:
import seaborn as sns
from matplotlib import pyplot as plt
samples = states.position
fig, axes = plt.subplots(ncols=2)
fig.set_size_inches(12, 5)
sns.kdeplot(samples["mu"], ax=axes[0])
sns.kdeplot(samples["tau"], ax=axes[1])
axes[0].set_xlabel("mu")
axes[1].set_xlabel("tau")
fig.tight_layout()
if not RUN_BENCHMARK:
fig, axes = plt.subplots(8, 2, sharex="col", sharey="col")
fig.set_size_inches(12, 10)
for i in range(J):
axes[i][0].plot(samples["theta_base"][:, i])
axes[i][0].title.set_text(f"School {i} relative treatment effect chain")
sns.kdeplot(samples["theta_base"][:, i], ax=axes[i][1], shade=True)
axes[i][1].title.set_text(f"School {i} relative treatment effect distribution")
axes[J - 1][0].set_xlabel("Iteration")
axes[J - 1][1].set_xlabel("School effect")
fig.tight_layout()
plt.show()
if not RUN_BENCHMARK:
for i in range(J):
print(
f"Relative treatment effect for school {i}: {np.mean(samples['theta_base'][:, i]):.2f}"
)
```
## Compare sampling time with Numpyro
We compare the time it took BlackJAX to do the warmup for 1,000 iterations and then taking 100,000 samples with Numpyro's:
```
from numpyro.infer import MCMC, NUTS
if RUN_BENCHMARK:
print("\nNumpyro:")
print("-> Running warmup+sampling.")
%%time
nuts_kernel = NUTS(eight_schools_noncentered, target_accept_prob=0.8)
mcmc = MCMC(
nuts_kernel, num_warmup=num_warmup, num_samples=num_sample, progress_bar=False
)
rng_key = jax.random.PRNGKey(0)
mcmc.run(rng_key, J, sigma, y=y, extra_fields=("num_steps", "accept_prob"))
samples = mcmc.get_samples()
_ = samples["mu"].block_until_ready()
print(f"\nAcceptance rate: {mcmc.get_extra_fields()['accept_prob'].mean():.2f}")
print(f"{100*mcmc.get_extra_fields()['diverging'].mean():.2f}% divergent transitions")
print(f"\nBlackjax average {infos[2].mean():.2f} leapfrog per iteration.")
print(
f"Numpyro average {mcmc.get_extra_fields()['num_steps'].mean():.2f} leapfrog per iteration."
)
```
| true | code | 0.7347 | null | null | null | null |
|
# Machine Translation English-German Example Using SageMaker Seq2Seq
1. [Introduction](#Introduction)
2. [Setup](#Setup)
3. [Download dataset and preprocess](#Download-dataset-and-preprocess)
3. [Training the Machine Translation model](#Training-the-Machine-Translation-model)
4. [Inference](#Inference)
## Introduction
Welcome to our Machine Translation end-to-end example! In this demo, we will train a English-German translation model and will test the predictions on a few examples.
SageMaker Seq2Seq algorithm is built on top of [Sockeye](https://github.com/awslabs/sockeye), a sequence-to-sequence framework for Neural Machine Translation based on MXNet. SageMaker Seq2Seq implements state-of-the-art encoder-decoder architectures which can also be used for tasks like Abstractive Summarization in addition to Machine Translation.
To get started, we need to set up the environment with a few prerequisite steps, for permissions, configurations, and so on.
## Setup
Let's start by specifying:
- The S3 bucket and prefix that you want to use for training and model data. **This should be within the same region as the Notebook Instance, training, and hosting.**
- The IAM role arn used to give training and hosting access to your data. See the documentation for how to create these. Note, if more than one role is required for notebook instances, training, and/or hosting, please replace the boto regexp in the cell below with a the appropriate full IAM role arn string(s).
```
# S3 bucket and prefix
bucket = '<your_s3_bucket_name_here>'
prefix = 'sagemaker/<your_s3_prefix_here>' # E.g.'sagemaker/seq2seq/eng-german'
import boto3
import re
from sagemaker import get_execution_role
role = get_execution_role()
```
Next, we'll import the Python libraries we'll need for the remainder of the exercise.
```
from time import gmtime, strftime
import time
import numpy as np
import os
import json
# For plotting attention matrix later on
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
```
## Download dataset and preprocess
In this notebook, we will train a English to German translation model on a dataset from the
[Conference on Machine Translation (WMT) 2017](http://www.statmt.org/wmt17/).
```
%%bash
wget http://data.statmt.org/wmt17/translation-task/preprocessed/de-en/corpus.tc.de.gz & \
wget http://data.statmt.org/wmt17/translation-task/preprocessed/de-en/corpus.tc.en.gz & wait
gunzip corpus.tc.de.gz & \
gunzip corpus.tc.en.gz & wait
mkdir validation
curl http://data.statmt.org/wmt17/translation-task/preprocessed/de-en/dev.tgz | tar xvzf - -C validation
```
Please note that it is a common practise to split words into subwords using Byte Pair Encoding (BPE). Please refer to [this](https://github.com/awslabs/sockeye/tree/master/tutorials/wmt) tutorial if you are interested in performing BPE.
Since training on the whole dataset might take several hours/days, for this demo, let us train on the **first 10,000 lines only**. Don't run the next cell if you want to train on the complete dataset.
```
!head -n 10000 corpus.tc.en > corpus.tc.en.small
!head -n 10000 corpus.tc.de > corpus.tc.de.small
```
Now, let's use the preprocessing script `create_vocab_proto.py` (provided with this notebook) to create vocabulary mappings (strings to integers) and convert these files to x-recordio-protobuf as required for training by SageMaker Seq2Seq.
Uncomment the cell below and run to see check the arguments this script expects.
```
%%bash
# python3 create_vocab_proto.py -h
```
The cell below does the preprocessing. If you are using the complete dataset, the script might take around 10-15 min on an m4.xlarge notebook instance. Remove ".small" from the file names for training on full datasets.
```
%%time
%%bash
python3 create_vocab_proto.py \
--train-source corpus.tc.en.small \
--train-target corpus.tc.de.small \
--val-source validation/newstest2014.tc.en \
--val-target validation/newstest2014.tc.de
```
The script will output 4 files, namely:
- train.rec : Contains source and target sentences for training in protobuf format
- val.rec : Contains source and target sentences for validation in protobuf format
- vocab.src.json : Vocabulary mapping (string to int) for source language (English in this example)
- vocab.trg.json : Vocabulary mapping (string to int) for target language (German in this example)
Let's upload the pre-processed dataset and vocabularies to S3
```
def upload_to_s3(bucket, prefix, channel, file):
s3 = boto3.resource('s3')
data = open(file, "rb")
key = prefix + "/" + channel + '/' + file
s3.Bucket(bucket).put_object(Key=key, Body=data)
upload_to_s3(bucket, prefix, 'train', 'train.rec')
upload_to_s3(bucket, prefix, 'validation', 'val.rec')
upload_to_s3(bucket, prefix, 'vocab', 'vocab.src.json')
upload_to_s3(bucket, prefix, 'vocab', 'vocab.trg.json')
region_name = boto3.Session().region_name
containers = {'us-west-2': '433757028032.dkr.ecr.us-west-2.amazonaws.com/seq2seq:latest',
'us-east-1': '811284229777.dkr.ecr.us-east-1.amazonaws.com/seq2seq:latest',
'us-east-2': '825641698319.dkr.ecr.us-east-2.amazonaws.com/seq2seq:latest',
'eu-west-1': '685385470294.dkr.ecr.eu-west-1.amazonaws.com/seq2seq:latest'}
container = containers[region_name]
print('Using SageMaker Seq2Seq container: {} ({})'.format(container, region_name))
```
## Training the Machine Translation model
```
job_name = 'seq2seq-en-de-p2-xlarge-' + strftime("%Y-%m-%d-%H", gmtime())
print("Training job", job_name)
create_training_params = \
{
"AlgorithmSpecification": {
"TrainingImage": container,
"TrainingInputMode": "File"
},
"RoleArn": role,
"OutputDataConfig": {
"S3OutputPath": "s3://{}/{}/".format(bucket, prefix)
},
"ResourceConfig": {
# Seq2Seq does not support multiple machines. Currently, it only supports single machine, multiple GPUs
"InstanceCount": 1,
"InstanceType": "ml.p2.xlarge", # We suggest one of ["ml.p2.16xlarge", "ml.p2.8xlarge", "ml.p2.xlarge"]
"VolumeSizeInGB": 50
},
"TrainingJobName": job_name,
"HyperParameters": {
# Please refer to the documentation for complete list of parameters
"max_seq_len_source": "60",
"max_seq_len_target": "60",
"optimized_metric": "bleu",
"batch_size": "64", # Please use a larger batch size (256 or 512) if using ml.p2.8xlarge or ml.p2.16xlarge
"checkpoint_frequency_num_batches": "1000",
"rnn_num_hidden": "512",
"num_layers_encoder": "1",
"num_layers_decoder": "1",
"num_embed_source": "512",
"num_embed_target": "512",
"checkpoint_threshold": "3",
"max_num_batches": "2100"
# Training will stop after 2100 iterations/batches.
# This is just for demo purposes. Remove the above parameter if you want a better model.
},
"StoppingCondition": {
"MaxRuntimeInSeconds": 48 * 3600
},
"InputDataConfig": [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": "s3://{}/{}/train/".format(bucket, prefix),
"S3DataDistributionType": "FullyReplicated"
}
},
},
{
"ChannelName": "vocab",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": "s3://{}/{}/vocab/".format(bucket, prefix),
"S3DataDistributionType": "FullyReplicated"
}
},
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": "s3://{}/{}/validation/".format(bucket, prefix),
"S3DataDistributionType": "FullyReplicated"
}
},
}
]
}
sagemaker_client = boto3.Session().client(service_name='sagemaker')
sagemaker_client.create_training_job(**create_training_params)
status = sagemaker_client.describe_training_job(TrainingJobName=job_name)['TrainingJobStatus']
print(status)
status = sagemaker_client.describe_training_job(TrainingJobName=job_name)['TrainingJobStatus']
print(status)
# if the job failed, determine why
if status == 'Failed':
message = sage.describe_training_job(TrainingJobName=job_name)['FailureReason']
print('Training failed with the following error: {}'.format(message))
raise Exception('Training job failed')
```
> Now wait for the training job to complete and proceed to the next step after you see model artifacts in your S3 bucket.
You can jump to [Use a pretrained model](#Use-a-pretrained-model) as training might take some time.
## Inference
A trained model does nothing on its own. We now want to use the model to perform inference. For this example, that means translating sentence(s) from English to German.
This section involves several steps,
- Create model - Create a model using the artifact (model.tar.gz) produced by training
- Create Endpoint Configuration - Create a configuration defining an endpoint, using the above model
- Create Endpoint - Use the configuration to create an inference endpoint.
- Perform Inference - Perform inference on some input data using the endpoint.
### Create model
We now create a SageMaker Model from the training output. Using the model, we can then create an Endpoint Configuration.
```
use_pretrained_model = False
```
### Use a pretrained model
#### Please uncomment and run the cell below if you want to use a pretrained model, as training might take several hours/days to complete.
```
# use_pretrained_model = True
# model_name = "pretrained-en-de-model"
# !curl https://s3-us-west-2.amazonaws.com/gsaur-seq2seq-data/seq2seq/eng-german/full-nb-translation-eng-german-p2-16x-2017-11-24-22-25-53/output/model.tar.gz > model.tar.gz
# !curl https://s3-us-west-2.amazonaws.com/gsaur-seq2seq-data/seq2seq/eng-german/full-nb-translation-eng-german-p2-16x-2017-11-24-22-25-53/output/vocab.src.json > vocab.src.json
# !curl https://s3-us-west-2.amazonaws.com/gsaur-seq2seq-data/seq2seq/eng-german/full-nb-translation-eng-german-p2-16x-2017-11-24-22-25-53/output/vocab.trg.json > vocab.trg.json
# upload_to_s3(bucket, prefix, 'pretrained_model', 'model.tar.gz')
# model_data = "s3://{}/{}/pretrained_model/model.tar.gz".format(bucket, prefix)
%%time
sage = boto3.client('sagemaker')
if not use_pretrained_model:
info = sage.describe_training_job(TrainingJobName=job_name)
model_name=job_name
model_data = info['ModelArtifacts']['S3ModelArtifacts']
print(model_name)
print(model_data)
primary_container = {
'Image': container,
'ModelDataUrl': model_data
}
create_model_response = sage.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
print(create_model_response['ModelArn'])
```
### Create endpoint configuration
Use the model to create an endpoint configuration. The endpoint configuration also contains information about the type and number of EC2 instances to use when hosting the model.
Since SageMaker Seq2Seq is based on Neural Nets, we could use an ml.p2.xlarge (GPU) instance, but for this example we will use a free tier eligible ml.m4.xlarge.
```
from time import gmtime, strftime
endpoint_config_name = 'Seq2SeqEndpointConfig-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print(endpoint_config_name)
create_endpoint_config_response = sage.create_endpoint_config(
EndpointConfigName = endpoint_config_name,
ProductionVariants=[{
'InstanceType':'ml.m4.xlarge',
'InitialInstanceCount':1,
'ModelName':model_name,
'VariantName':'AllTraffic'}])
print("Endpoint Config Arn: " + create_endpoint_config_response['EndpointConfigArn'])
```
### Create endpoint
Lastly, we create the endpoint that serves up model, through specifying the name and configuration defined above. The end result is an endpoint that can be validated and incorporated into production applications. This takes 10-15 minutes to complete.
```
%%time
import time
endpoint_name = 'Seq2SeqEndpoint-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print(endpoint_name)
create_endpoint_response = sage.create_endpoint(
EndpointName=endpoint_name,
EndpointConfigName=endpoint_config_name)
print(create_endpoint_response['EndpointArn'])
resp = sage.describe_endpoint(EndpointName=endpoint_name)
status = resp['EndpointStatus']
print("Status: " + status)
# wait until the status has changed
sage.get_waiter('endpoint_in_service').wait(EndpointName=endpoint_name)
# print the status of the endpoint
endpoint_response = sage.describe_endpoint(EndpointName=endpoint_name)
status = endpoint_response['EndpointStatus']
print('Endpoint creation ended with EndpointStatus = {}'.format(status))
if status != 'InService':
raise Exception('Endpoint creation failed.')
```
If you see the message,
> Endpoint creation ended with EndpointStatus = InService
then congratulations! You now have a functioning inference endpoint. You can confirm the endpoint configuration and status by navigating to the "Endpoints" tab in the AWS SageMaker console.
We will finally create a runtime object from which we can invoke the endpoint.
```
runtime = boto3.client(service_name='runtime.sagemaker')
```
# Perform Inference
### Using JSON format for inference (Suggested for a single or small number of data instances)
#### Note that you don't have to convert string to text using the vocabulary mapping for inference using JSON mode
```
sentences = ["you are so good !",
"can you drive a car ?",
"i want to watch a movie ."
]
payload = {"instances" : []}
for sent in sentences:
payload["instances"].append({"data" : sent})
response = runtime.invoke_endpoint(EndpointName=endpoint_name,
ContentType='application/json',
Body=json.dumps(payload))
response = response["Body"].read().decode("utf-8")
response = json.loads(response)
print(response)
```
### Retrieving the Attention Matrix
Passing `"attention_matrix":"true"` in `configuration` of the data instance will return the attention matrix.
```
sentence = 'can you drive a car ?'
payload = {"instances" : [{
"data" : sentence,
"configuration" : {"attention_matrix":"true"}
}
]}
response = runtime.invoke_endpoint(EndpointName=endpoint_name,
ContentType='application/json',
Body=json.dumps(payload))
response = response["Body"].read().decode("utf-8")
response = json.loads(response)['predictions'][0]
source = sentence
target = response["target"]
attention_matrix = np.array(response["matrix"])
print("Source: %s \nTarget: %s" % (source, target))
# Define a function for plotting the attentioan matrix
def plot_matrix(attention_matrix, target, source):
source_tokens = source.split()
target_tokens = target.split()
assert attention_matrix.shape[0] == len(target_tokens)
plt.imshow(attention_matrix.transpose(), interpolation="nearest", cmap="Greys")
plt.xlabel("target")
plt.ylabel("source")
plt.gca().set_xticks([i for i in range(0, len(target_tokens))])
plt.gca().set_yticks([i for i in range(0, len(source_tokens))])
plt.gca().set_xticklabels(target_tokens)
plt.gca().set_yticklabels(source_tokens)
plt.tight_layout()
plot_matrix(attention_matrix, target, source)
```
### Using Protobuf format for inference (Suggested for efficient bulk inference)
Reading the vocabulary mappings as this mode of inference accepts list of integers and returns list of integers.
```
import io
import tempfile
from record_pb2 import Record
from create_vocab_proto import vocab_from_json, reverse_vocab, write_recordio, list_to_record_bytes, read_next
source = vocab_from_json("vocab.src.json")
target = vocab_from_json("vocab.trg.json")
source_rev = reverse_vocab(source)
target_rev = reverse_vocab(target)
sentences = ["this is so cool",
"i am having dinner .",
"i am sitting in an aeroplane .",
"come let us go for a long drive ."]
```
Converting the string to integers, followed by protobuf encoding:
```
# Convert strings to integers using source vocab mapping. Out-of-vocabulary strings are mapped to 1 - the mapping for <unk>
sentences = [[source.get(token, 1) for token in sentence.split()] for sentence in sentences]
f = io.BytesIO()
for sentence in sentences:
record = list_to_record_bytes(sentence, [])
write_recordio(f, record)
response = runtime.invoke_endpoint(EndpointName=endpoint_name,
ContentType='application/x-recordio-protobuf',
Body=f.getvalue())
response = response["Body"].read()
```
Now, parse the protobuf response and convert list of integers back to strings
```
def _parse_proto_response(received_bytes):
output_file = tempfile.NamedTemporaryFile()
output_file.write(received_bytes)
output_file.flush()
target_sentences = []
with open(output_file.name, 'rb') as datum:
next_record = True
while next_record:
next_record = read_next(datum)
if next_record:
rec = Record()
rec.ParseFromString(next_record)
target = list(rec.features["target"].int32_tensor.values)
target_sentences.append(target)
else:
break
return target_sentences
targets = _parse_proto_response(response)
resp = [" ".join([target_rev.get(token, "<unk>") for token in sentence]) for
sentence in targets]
print(resp)
```
# Stop / Close the Endpoint (Optional)
Finally, we should delete the endpoint before we close the notebook.
```
sage.delete_endpoint(EndpointName=endpoint_name)
```
| true | code | 0.428413 | null | null | null | null |
|
# Let's Grow your Own Inner Core!
### Choose a model in the list:
- geodyn_trg.TranslationGrowthRotation()
- geodyn_static.Hemispheres()
### Choose a proxy type:
- age
- position
- phi
- theta
- growth rate
### set the parameters for the model : geodynModel.set_parameters(parameters)
### set the units : geodynModel.define_units()
### Choose a data set:
- data.SeismicFromFile(filename) # Lauren's data set
- data.RandomData(numbers_of_points)
- data.PerfectSamplingEquator(numbers_of_points)
organized on a cartesian grid. numbers_of_points is the number of points along the x or y axis. The total number of points is numbers_of_points**2*pi/4
- as a special plot function to show streamlines: plot_c_vec(self,modelgeodyn)
- data.PerfectSamplingEquatorRadial(Nr, Ntheta)
same than below, but organized on a polar grid, not a cartesian grid.
### Extract the info:
- calculate the proxy value for all points of the data set: geodyn.evaluate_proxy(data_set, geodynModel)
- extract the positions as numpy arrays: extract_rtp or extract_xyz
- calculate other variables: positions.angular_distance_to_point(t,p, t_point, p_point)
```
%matplotlib inline
# import statements
import numpy as np
import matplotlib.pyplot as plt #for figures
from mpl_toolkits.basemap import Basemap #to render maps
import math
import json #to write dict with parameters
from GrowYourIC import positions, geodyn, geodyn_trg, geodyn_static, plot_data, data
plt.rcParams['figure.figsize'] = (8.0, 3.0) #size of figures
cm = plt.cm.get_cmap('viridis')
cm2 = plt.cm.get_cmap('winter')
```
## Define the geodynamical model
Un-comment one of the model
```
## un-comment one of them
geodynModel = geodyn_trg.TranslationGrowthRotation() #can do all the models presented in the paper
# geodynModel = geodyn_static.Hemispheres() #this is a static model, only hemispheres.
```
Change the values of the parameters to get the model you want (here, parameters for .TranslationGrowthRotation())
```
age_ic_dim = 1e9 #in years
rICB_dim = 1221. #in km
v_g_dim = rICB_dim/age_ic_dim # in km/years #growth rate
print("Growth rate is {:.2e} km/years".format(v_g_dim))
v_g_dim_seconds = v_g_dim*1e3/(np.pi*1e7)
translation_velocity_dim = 0.8*v_g_dim_seconds#4e-10 #0.8*v_g_dim_seconds#4e-10 #m.s, value for today's Earth with Q_cmb = 10TW (see Alboussiere et al. 2010)
time_translation = rICB_dim*1e3/translation_velocity_dim /(np.pi*1e7)
maxAge = 2.*time_translation/1e6
print("The translation recycles the inner core material in {0:.2e} million years".format(maxAge))
print("Translation velocity is {0:.2e} km/years".format(translation_velocity_dim*np.pi*1e7/1e3))
units = None #we give them already dimensionless parameters.
rICB = 1.
age_ic = 1.
omega = 0.#0.5*np.pi/200e6*age_ic_dim#0.5*np.pi #0. #0.5*np.pi/200e6*age_ic_dim# 0.#0.5*np.pi#0.#0.5*np.pi/200e6*age_ic_dim #0. #-0.5*np.pi # Rotation rates has to be in ]-np.pi, np.pi[
print("Rotation rate is {:.2e}".format(omega))
velocity_amplitude = translation_velocity_dim*age_ic_dim*np.pi*1e7/rICB_dim/1e3
velocity_center = [0., 100.]#center of the eastern hemisphere
velocity = geodyn_trg.translation_velocity(velocity_center, velocity_amplitude)
exponent_growth = 1.#0.1#1
print(v_g_dim, velocity_amplitude, omega/age_ic_dim*180/np.pi*1e6)
```
Define a proxy type, and a proxy name (to be used in the figures to annotate the axes)
You can re-define it later if you want (or define another proxy_type2 if needed)
```
proxy_type = "age"#"growth rate"
proxy_name = "age (Myears)" #growth rate (km/Myears)"
proxy_lim = [0, maxAge] #or None
#proxy_lim = None
fig_name = "figures/test_" #to name the figures
print(rICB, age_ic, velocity_amplitude, omega, exponent_growth, proxy_type)
print(velocity)
```
### Parameters for the geodynamical model
This will input the different parameters in the model.
```
parameters = dict({'units': units,
'rICB': rICB,
'tau_ic':age_ic,
'vt': velocity,
'exponent_growth': exponent_growth,
'omega': omega,
'proxy_type': proxy_type})
geodynModel.set_parameters(parameters)
geodynModel.define_units()
param = parameters
param['vt'] = parameters['vt'].tolist() #for json serialization
# write file with parameters, readable with json, byt also human-readable
with open(fig_name+'parameters.json', 'w') as f:
json.dump(param, f)
print(parameters)
```
## Different data set and visualisations
### Perfect sampling at the equator (to visualise the flow lines)
You can add more points to get a better precision.
```
npoints = 10 #number of points in the x direction for the data set.
data_set = data.PerfectSamplingEquator(npoints, rICB = 1.)
data_set.method = "bt_point"
proxy = geodyn.evaluate_proxy(data_set, geodynModel, proxy_type="age", verbose = False)
data_set.plot_c_vec(geodynModel, proxy=proxy, cm=cm, nameproxy="age (Myears)")
plt.savefig(fig_name+"equatorial_plot.pdf", bbox_inches='tight')
```
### Perfect sampling in the first 100km (to visualise the depth evolution)
```
data_meshgrid = data.Equator_upperpart(10,10)
data_meshgrid.method = "bt_point"
proxy_meshgrid = geodyn.evaluate_proxy(data_meshgrid, geodynModel, proxy_type=proxy_type, verbose = False)
#r, t, p = data_meshgrid.extract_rtp("bottom_turning_point")
fig3, ax3 = plt.subplots(figsize=(8, 2))
X, Y, Z = data_meshgrid.mesh_RPProxy(proxy_meshgrid)
sc = ax3.contourf(Y, rICB_dim*(1.-X), Z, 100, cmap=cm)
sc2 = ax3.contour(sc, levels=sc.levels[::15], colors = "k")
ax3.set_ylim(-0, 120)
fig3.gca().invert_yaxis()
ax3.set_xlim(-180,180)
cbar = fig3.colorbar(sc)
#cbar.set_clim(0, maxAge)
cbar.set_label(proxy_name)
ax3.set_xlabel("longitude")
ax3.set_ylabel("depth below ICB (km)")
plt.savefig(fig_name+"meshgrid.pdf", bbox_inches='tight')
npoints = 20 #number of points in the x direction for the data set.
data_set = data.PerfectSamplingSurface(npoints, rICB = 1., depth=0.01)
data_set.method = "bt_point"
proxy_surface = geodyn.evaluate_proxy(data_set, geodynModel, proxy_type=proxy_type, verbose = False)
#r, t, p = data_set.extract_rtp("bottom_turning_point")
X, Y, Z = data_set.mesh_TPProxy(proxy_surface)
## map
m, fig = plot_data.setting_map()
y, x = m(Y, X)
sc = m.contourf(y, x, Z, 30, cmap=cm, zorder=2, edgecolors='none')
plt.title("Dataset: {},\n geodynamic model: {}".format(data_set.name, geodynModel.name))
cbar = plt.colorbar(sc)
cbar.set_label(proxy_name)
fig.savefig(fig_name+"map_surface.pdf", bbox_inches='tight')
```
### Random data set, in the first 100km - bottom turning point only
#### Calculate the data
```
# random data set
data_set_random = data.RandomData(300)
data_set_random.method = "bt_point"
proxy_random = geodyn.evaluate_proxy(data_set_random, geodynModel, proxy_type=proxy_type, verbose=False)
data_path = "../GrowYourIC/data/"
geodynModel.data_path = data_path
if proxy_type == "age":
# ## domain size and Vp
proxy_random_size = geodyn.evaluate_proxy(data_set_random, geodynModel, proxy_type="domain_size", verbose=False)
proxy_random_dV = geodyn.evaluate_proxy(data_set_random, geodynModel, proxy_type="dV_V", verbose=False)
r, t, p = data_set_random.extract_rtp("bottom_turning_point")
dist = positions.angular_distance_to_point(t, p, *velocity_center)
## map
m, fig = plot_data.setting_map()
x, y = m(p, t)
sc = m.scatter(x, y, c=proxy_random,s=8, zorder=10, cmap=cm, edgecolors='none')
plt.title("Dataset: {},\n geodynamic model: {}".format(data_set_random.name, geodynModel.name))
cbar = plt.colorbar(sc)
cbar.set_label(proxy_name)
fig.savefig(fig_name+data_set_random.shortname+"_map.pdf", bbox_inches='tight')
## phi and distance plots
fig, ax = plt.subplots(2,2, figsize=(8.0, 5.0))
sc1 = ax[0,0].scatter(p, proxy_random, c=abs(t),s=3, cmap=cm2, vmin =-0, vmax =90, linewidth=0)
phi = np.linspace(-180,180, 50)
#analytic_equator = np.maximum(2*np.sin((phi-10)*np.pi/180.)*rICB_dim*1e3/translation_velocity_dim /(np.pi*1e7)/1e6,0.)
#ax[0,0].plot(phi,analytic_equator, 'r', linewidth=2)
ax[0,0].set_xlabel("longitude")
ax[0,0].set_ylabel(proxy_name)
if proxy_lim is not None:
ax[0,0].set_ylim(proxy_lim)
sc2 = ax[0,1].scatter(dist, proxy_random, c=abs(t), cmap=cm2, vmin=-0, vmax =90, s=3, linewidth=0)
ax[0,1].set_xlabel("angular distance to ({}, {})".format(*velocity_center))
phi = np.linspace(-90,90, 100)
if proxy_type == "age":
analytic_equator = np.maximum(2*np.sin((phi-10)*np.pi/180.)*rICB_dim*1e3/translation_velocity_dim /(np.pi*1e7)/1e6,0.)
ax[0,0].plot(phi,analytic_equator, 'r', linewidth=2)
analytic_equator = np.maximum(2*np.sin((-phi)*np.pi/180.)*rICB_dim*1e3/translation_velocity_dim /(np.pi*1e7)/1e6,0.)
ax[0,1].plot(phi+90,analytic_equator, 'r', linewidth=2)
ax[0,1].set_xlim([0,180])
ax[0,0].set_xlim([-180,180])
cbar = fig.colorbar(sc1)
cbar.set_label("longitude: abs(theta)")
if proxy_lim is not None:
ax[0,1].set_ylim(proxy_lim)
## figure with domain size and Vp
if proxy_type == "age":
sc3 = ax[1,0].scatter(dist, proxy_random_size, c=abs(t), cmap=cm2, vmin =-0, vmax =90, s=3, linewidth=0)
ax[1,0].set_xlabel("angular distance to ({}, {})".format(*velocity_center))
ax[1,0].set_ylabel("domain size (m)")
ax[1,0].set_xlim([0,180])
ax[1,0].set_ylim([0, 2500.000])
sc4 = ax[1,1].scatter(dist, proxy_random_dV, c=abs(t), cmap=cm2, vmin=-0, vmax =90, s=3, linewidth=0)
ax[1,1].set_xlabel("angular distance to ({}, {})".format(*velocity_center))
ax[1,1].set_ylabel("dV/V")
ax[1,1].set_xlim([0,180])
ax[1,1].set_ylim([-0.017, -0.002])
fig.savefig(fig_name +data_set_random.shortname+ '_long_dist.pdf', bbox_inches='tight')
fig, ax = plt.subplots(figsize=(8, 2))
sc=ax.scatter(p,rICB_dim*(1.-r), c=proxy_random, s=10,cmap=cm, linewidth=0)
ax.set_ylim(-0,120)
fig.gca().invert_yaxis()
ax.set_xlim(-180,180)
cbar = fig.colorbar(sc)
if proxy_lim is not None:
cbar.set_clim(0, maxAge)
ax.set_xlabel("longitude")
ax.set_ylabel("depth below ICB (km)")
cbar.set_label(proxy_name)
fig.savefig(fig_name+data_set_random.shortname+"_depth.pdf", bbox_inches='tight')
```
### Real Data set from Waszek paper
```
## real data set
data_set = data.SeismicFromFile("../GrowYourIC/data/WD11.dat")
data_set.method = "bt_point"
proxy2 = geodyn.evaluate_proxy(data_set, geodynModel, proxy_type=proxy_type, verbose=False)
if proxy_type == "age":
## domain size and DV/V
proxy_size = geodyn.evaluate_proxy(data_set, geodynModel, proxy_type="domain_size", verbose=False)
proxy_dV = geodyn.evaluate_proxy(data_set, geodynModel, proxy_type="dV_V", verbose=False)
r, t, p = data_set.extract_rtp("bottom_turning_point")
dist = positions.angular_distance_to_point(t, p, *velocity_center)
## map
m, fig = plot_data.setting_map()
x, y = m(p, t)
sc = m.scatter(x, y, c=proxy2,s=8, zorder=10, cmap=cm, edgecolors='none')
plt.title("Dataset: {},\n geodynamic model: {}".format(data_set.name, geodynModel.name))
cbar = plt.colorbar(sc)
cbar.set_label(proxy_name)
fig.savefig(fig_name+data_set.shortname+"_map.pdf", bbox_inches='tight')
## phi and distance plots
fig, ax = plt.subplots(2,2, figsize=(8.0, 5.0))
sc1 = ax[0,0].scatter(p, proxy2, c=abs(t),s=3, cmap=cm2, vmin =-0, vmax =90, linewidth=0)
phi = np.linspace(-180,180, 50)
#analytic_equator = np.maximum(2*np.sin((phi-10)*np.pi/180.)*rICB_dim*1e3/translation_velocity_dim /(np.pi*1e7)/1e6,0.)
#ax[0,0].plot(phi,analytic_equator, 'r', linewidth=2)
ax[0,0].set_xlabel("longitude")
ax[0,0].set_ylabel(proxy_name)
if proxy_lim is not None:
ax[0,0].set_ylim(proxy_lim)
sc2 = ax[0,1].scatter(dist, proxy2, c=abs(t), cmap=cm2, vmin=-0, vmax =90, s=3, linewidth=0)
ax[0,1].set_xlabel("angular distance to ({}, {})".format(*velocity_center))
phi = np.linspace(-90,90, 100)
if proxy_type == "age":
analytic_equator = np.maximum(2*np.sin((-phi)*np.pi/180.)*rICB_dim*1e3/translation_velocity_dim /(np.pi*1e7)/1e6,0.)
ax[0,1].plot(phi+90,analytic_equator, 'r', linewidth=2)
analytic_equator = np.maximum(2*np.sin((phi-10)*np.pi/180.)*rICB_dim*1e3/translation_velocity_dim /(np.pi*1e7)/1e6,0.)
ax[0,0].plot(phi,analytic_equator, 'r', linewidth=2)
ax[0,1].set_xlim([0,180])
ax[0,0].set_xlim([-180,180])
cbar = fig.colorbar(sc1)
cbar.set_label("longitude: abs(theta)")
if proxy_lim is not None:
ax[0,1].set_ylim(proxy_lim)
## figure with domain size and Vp
if proxy_type == "age":
sc3 = ax[1,0].scatter(dist, proxy_size, c=abs(t), cmap=cm2, vmin =-0, vmax =90, s=3, linewidth=0)
ax[1,0].set_xlabel("angular distance to ({}, {})".format(*velocity_center))
ax[1,0].set_ylabel("domain size (m)")
ax[1,0].set_xlim([0,180])
ax[1,0].set_ylim([0, 2500.000])
sc4 = ax[1,1].scatter(dist, proxy_dV, c=abs(t), cmap=cm2, vmin=-0, vmax =90, s=3, linewidth=0)
ax[1,1].set_xlabel("angular distance to ({}, {})".format(*velocity_center))
ax[1,1].set_ylabel("dV/V")
ax[1,1].set_xlim([0,180])
ax[1,1].set_ylim([-0.017, -0.002])
fig.savefig(fig_name + data_set.shortname+'_long_dist.pdf', bbox_inches='tight')
fig, ax = plt.subplots(figsize=(8, 2))
sc=ax.scatter(p,rICB_dim*(1.-r), c=proxy2, s=10,cmap=cm, linewidth=0)
ax.set_ylim(-0,120)
fig.gca().invert_yaxis()
ax.set_xlim(-180,180)
cbar = fig.colorbar(sc)
if proxy_lim is not None:
cbar.set_clim(0, maxAge)
ax.set_xlabel("longitude")
ax.set_ylabel("depth below ICB (km)")
cbar.set_label(proxy_name)
fig.savefig(fig_name+data_set.shortname+"_depth.pdf", bbox_inches='tight')
```
| true | code | 0.62681 | null | null | null | null |
|
<a href="https://colab.research.google.com/github/danzerzine/seospider-colab/blob/main/Running_screamingfrog_SEO_spider_in_Colab_notebook.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Running the Screaming Frog SEO Spider bot in the cloud via Google Colab
-------------
> *Protip: for a large site, the High-RAM (25 GB) instances without GPU/TPU, available with a Colab PRO subscription, are the best fit for this task*
### Cosmetic improvement: wrap long single-line commands in the output
```
from IPython.display import HTML, display
def set_css():
display(HTML('''
<style>
pre {
white-space: pre-wrap;
}
</style>
'''))
get_ipython().events.register('pre_run_cell', set_css)
```
### Mount the Google Drive that stores the bot's configs and will receive the crawl results
```
from google.colab import drive
drive.mount('/content/drive')
```
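The form fields further down (settings directory, crawl config, output folder) all expect paths on this mounted drive. Below is a minimal sketch of the kind of values you might type into them; the folder layout is purely a hypothetical example, so adjust it to wherever you actually keep your Screaming Frog files.
```
# Hypothetical GDrive layout -- adjust to your own folder structure
settings_path = "/content/drive/MyDrive/screamingfrog/settings"             # copy of ~/.ScreamingFrogSEOSpider from the desktop
config_path = "/content/drive/MyDrive/screamingfrog/crawl.seospiderconfig"  # a saved crawl configuration file (example name)
output_folder = "/content/drive/MyDrive/screamingfrog/crawls"               # timestamped crawl folders will be written here
print(settings_path, config_path, output_folder, sep="\n")
```
Keeping everything under a single GDrive folder makes it easy to re-run the notebook on a fresh instance later.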
### Find out the instance's external IP
so we can manually add it to the Cloudflare firewall exceptions -- otherwise we will hit the rate limit very quickly and start getting served the human-verification challenge page
```
!wget -qO- http://ipecho.net/plain | xargs echo && wget -qO - icanhazip.com
```
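If you would rather not click through the Cloudflare dashboard every time the instance changes, the same allow rule can be created through Cloudflare's IP Access Rules API. This is only a sketch: the zone ID and API token are placeholders, and the endpoint and payload should be double-checked against the current Cloudflare API documentation.
```
import requests

# Placeholders -- substitute your own zone ID and an API token with firewall edit permissions
CF_ZONE_ID = "your_zone_id_here"
CF_API_TOKEN = "your_api_token_here"

colab_ip = requests.get("https://icanhazip.com").text.strip()  # same IP as printed above

resp = requests.post(
    f"https://api.cloudflare.com/client/v4/zones/{CF_ZONE_ID}/firewall/access_rules/rules",
    headers={"Authorization": f"Bearer {CF_API_TOKEN}"},
    json={
        "mode": "whitelist",  # allow this IP through the firewall
        "configuration": {"target": "ip", "value": colab_ip},
        "notes": "Google Colab Screaming Frog instance",
    },
)
print(resp.status_code, resp.json())
```
Remember to remove the rule once the crawl is finished, since Colab recycles external IPs between users.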
### Install the latest version of the SEO spider and take care of some housekeeping
* Update the installed Linux packages
* Copy the settings from the desktop version of the SEO spider into the instance's local folder (this is needed to carry over the authorization tokens for Google Search Console, GA, and so on)
```
#@title Settings directory on GDrive { vertical-output: true, display-mode: "both" }
settings_path = "" #@param {type:"string"}
!wget https://download.screamingfrog.co.uk/products/seo-spider/screamingfrogseospider_16.3_all.deb
# The "./" prefix makes apt treat this as a local .deb file rather than a repository package name
!apt-get install -y ./screamingfrogseospider_16.3_all.deb
!sudo apt-get update && sudo apt-get upgrade -y
!mkdir -p ~/.ScreamingFrogSEOSpider
!cp -r $settings_path/* ~/.ScreamingFrogSEOSpider
```
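A quick sanity check that the package registered with dpkg and that the settings (licence, GSC/GA auth tokens and so on) actually landed in the hidden folder:
```
# Verify the install and the copied settings
!dpkg -l | grep -i screamingfrog
!ls -la ~/.ScreamingFrogSEOSpider | head -n 20
```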
### Run a bash script to finish configuring the instance and the bot
It adds a virtual display for the Java output, switches the bot to storing crawl results on disk instead of in RAM, and so on.
```
!wget https://raw.githubusercontent.com/fili/screaming-frog-on-google-compute-engine/master/gce-sf.sh -O install.sh && chmod +x install.sh && source ./install.sh
```
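For reference, the core of what that script automates looks roughly like the sketch below: a virtual framebuffer so the Java-based spider has a display to render to, plus the switch from RAM to database (on-disk) storage. This is an illustration only -- the `storage.mode=DB` key is an assumption about the `spider.config` option, and the real script does more tuning than this, so prefer running `install.sh` itself.
```
# Roughly what install.sh sets up (illustrative only -- run install.sh instead)
!sudo apt-get install -y xvfb                                   # virtual framebuffer for the Java UI
!nohup Xvfb :99 -screen 0 1280x1024x24 > /dev/null 2>&1 &       # start a virtual display in the background
import os
os.environ["DISPLAY"] = ":99"                                   # point the spider's Java process at it
# Assumed config key for database storage mode; verify against your own spider.config
!echo "storage.mode=DB" >> ~/.ScreamingFrogSEOSpider/spider.config
```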
### Symlink the hidden folder holding the bot's temporary files and settings
in case we need to edit or pull something out of it on the fly -- otherwise it is not visible in the file browser on the left
```
!ln -s ~/.ScreamingFrogSEOSpider ~/ScreamingFrogSEOSpider
```
### Launch the bot in headless mode
passing all the flags we need for exports, settings, reports, bulk exports, and so on
```
#@title Crawl settings { vertical-output: true }
url_start = "" #@param {type:"string"}
use_gcs = "" #@param ["", "--use-google-search-console \"account \""] {allow-input: true}
config_path = "" #@param {type:"string"}
output_folder = "" #@param {type:"string"}
!screamingfrogseospider --crawl "$url_start" $use_gcs --headless --config "$config_path" --output-folder "$output_folder" --timestamped-output --save-crawl --export-tabs "Internal:All,Response Codes:All,Response Codes:Blocked by Robots.txt,Response Codes:Blocked Resource,Response Codes:No Response,Response Codes:Redirection (3xx),Response Codes:Redirection (JavaScript),Response Codes:Redirection (Meta Refresh),Response Codes:Client Error (4xx),Response Codes:Server Error (5xx),Page Titles:All,Page Titles:Missing,Page Titles:Duplicate,Page Titles:Over X Characters,Page Titles:Below X Characters,Page Titles:Over X Pixels,Page Titles:Below X Pixels,Page Titles:Same as H1,Page Titles:Multiple,Meta Description:All,Meta Description:Missing,Meta Description:Duplicate,Meta Description:Over X Characters,Meta Description:Below X Characters,Meta Description:Over X Pixels,Meta Description:Below X Pixels,Meta Description:Multiple,Meta Keywords:All,Meta Keywords:Missing,Meta Keywords:Duplicate,Meta Keywords:Multiple,Canonicals:All,Canonicals:Contains Canonical,Canonicals:Self Referencing,Canonicals:Canonicalised,Canonicals:Missing,Canonicals:Multiple,Canonicals:Non-Indexable Canonical,Directives:All,Directives:Index,Directives:Noindex,Directives:Follow,Directives:Nofollow,Directives:None,Directives:NoArchive,Directives:NoSnippet,Directives:Max-Snippet,Directives:Max-Image-Preview,Directives:Max-Video-Preview,Directives:NoODP,Directives:NoYDIR,Directives:NoImageIndex,Directives:NoTranslate,Directives:Unavailable_After,Directives:Refresh,AMP:All,AMP:Non-200 Response,AMP:Missing Non-AMP Return Link,AMP:Missing Canonical to Non-AMP,AMP:Non-Indexable Canonical,AMP:Indexable,AMP:Non-Indexable,AMP:Missing <html amp> Tag,AMP:Missing/Invalid <!doctype html> Tag,AMP:Missing <head> Tag,AMP:Missing <body> Tag,AMP:Missing Canonical,AMP:Missing/Invalid <meta charset> Tag,AMP:Missing/Invalid <meta viewport> Tag,AMP:Missing/Invalid AMP Script,AMP:Missing/Invalid AMP Boilerplate,AMP:Contains Disallowed HTML,AMP:Other Validation Errors,Structured Data:All,Structured Data:Contains Structured Data,Structured Data:Missing,Structured Data:Validation Errors,Structured Data:Validation Warnings,Structured Data:Parse Errors,Structured Data:Microdata URLs,Structured Data:JSON-LD URLs,Structured Data:RDFa URLs,Sitemaps:All,Sitemaps:URLs in Sitemap,Sitemaps:URLs not in Sitemap,Sitemaps:Orphan URLs,Sitemaps:Non-Indexable URLs in Sitemap,Sitemaps:URLs in Multiple Sitemaps,Sitemaps:XML Sitemap with over 50k URLs,Sitemaps:XML Sitemap over 50MB" --bulk-export "Canonicals:Contains Canonical Inlinks,Canonicals:Self Referencing Inlinks,Canonicals:Canonicalised Inlinks,Canonicals:Missing Inlinks,Canonicals:Multiple Inlinks,Canonicals:Non-Indexable Canonical Inlinks,AMP:All Inlinks,AMP:Non-200 Response Inlinks,AMP:Missing Non-AMP Return Link Inlinks,AMP:Missing Canonical to Non-AMP Inlinks,AMP:Non-Indexable Canonical Inlinks,AMP:Indexable Inlinks,AMP:Non-Indexable Inlinks,Structured Data:Contains Structured Data,Structured Data:Validation Errors,Structured Data:Validation Warnings,Structured Data:JSON-LD URLs,Structured Data:Microdata URLs,Structured Data:RDFa URLs,Sitemaps:URLs in Sitemap Inlinks,Sitemaps:Orphan URLs Inlinks,Sitemaps:Non-Indexable URLs in Sitemap Inlinks,Sitemaps:URLs in Multiple Sitemaps Inlinks" --save-report "Crawl Overview,Redirects:All Redirects,Redirects:Redirect Chains,Redirects:Redirect & Canonical Chains,Canonicals:Canonical Chains,Canonicals:Non-Indexable Canonicals,Pagination:Non-200 Pagination URLs,Pagination:Unlinked Pagination URLs,Hreflang:All hreflang URLs,Hreflang:Non-200 hreflang URLs,Hreflang:Unlinked hreflang URLs,Hreflang:Missing Return Links,Hreflang:Inconsistent Language & Region Return Links,Hreflang:Non Canonical Return Links,Hreflang:Noindex Return Links,Insecure Content,SERP Summary,Orphan Pages,Structured Data:Validation Errors & Warnings Summary,Structured Data:Validation Errors & Warnings,Structured Data:Google Rich Results Features Summary,Structured Data:Google Rich Results Features,HTTP Headers:HTTP Header Summary,Cookies:Cookie Summary" --export-format xlsx --export-custom-summary "Site Crawled,Date,Time,Total URLs Encountered,Total URLs Crawled,Total Internal blocked by robots.txt,Total External blocked by robots.txt,URLs Displayed,Total Internal URLs,Total External URLs,Total Internal Indexable URLs,Total Internal Non-Indexable URLs,JavaScript:All,JavaScript:Uses Old AJAX Crawling Scheme URLs,JavaScript:Uses Old AJAX Crawling Scheme Meta Fragment Tag,JavaScript:Page Title Only in Rendered HTML,JavaScript:Page Title Updated by JavaScript,JavaScript:H1 Only in Rendered HTML,JavaScript:H1 Updated by JavaScript,JavaScript:Meta Description Only in Rendered HTML,JavaScript:Meta Description Updated by JavaScript,JavaScript:Canonical Only in Rendered HTML,JavaScript:Canonical Mismatch,JavaScript:Noindex Only in Original HTML,JavaScript:Nofollow Only in Original HTML,JavaScript:Contains JavaScript Links,JavaScript:Contains JavaScript Content,JavaScript:Pages with Blocked Resources,H1:All,H1:Missing,H1:Duplicate,H1:Over X Characters,H1:Multiple,H2:All,H2:Missing,H2:Duplicate,H2:Over X Characters,H2:Multiple,Internal:All,Internal:HTML,Internal:JavaScript,Internal:CSS,Internal:Images,Internal:PDF,Internal:Flash,Internal:Other,Internal:Unknown,External:All,External:HTML,External:JavaScript,External:CSS,External:Images,External:PDF,External:Flash,External:Other,External:Unknown,AMP:All,AMP:Non-200 Response,AMP:Missing Non-AMP Return Link,AMP:Missing Canonical to Non-AMP,AMP:Non-Indexable Canonical,AMP:Indexable,AMP:Non-Indexable,AMP:Missing <html amp> Tag,AMP:Missing/Invalid <!doctype html> Tag,AMP:Missing <head> Tag,AMP:Missing <body> Tag,AMP:Missing Canonical,AMP:Missing/Invalid <meta charset> Tag,AMP:Missing/Invalid <meta viewport> Tag,AMP:Missing/Invalid AMP Script,AMP:Missing/Invalid AMP Boilerplate,AMP:Contains Disallowed HTML,AMP:Other Validation Errors,Canonicals:All,Canonicals:Contains Canonical,Canonicals:Self Referencing,Canonicals:Canonicalised,Canonicals:Missing,Canonicals:Multiple,Canonicals:Non-Indexable Canonical,Content:All,Content:Spelling Errors,Content:Grammar Errors,Content:Near Duplicates,Content:Exact Duplicates,Content:Low Content Pages,Custom Extraction:All,Custom Search:All,Directives:All,Directives:Index,Directives:Noindex,Directives:Follow,Directives:Nofollow,Directives:None,Directives:NoArchive,Directives:NoSnippet,Directives:Max-Snippet,Directives:Max-Image-Preview,Directives:Max-Video-Preview,Directives:NoODP,Directives:NoYDIR,Directives:NoImageIndex,Directives:NoTranslate,Directives:Unavailable_After,Directives:Refresh,Analytics:All,Analytics:Sessions Above 0,Analytics:Bounce Rate Above 70%,Analytics:No GA Data,Analytics:Non-Indexable with GA Data,Analytics:Orphan URLs,Search Console:All,Search Console:Clicks Above 0,Search Console:No GSC Data,Search Console:Non-Indexable with GSC Data,Search Console:Orphan URLs,Hreflang:All,Hreflang:Contains hreflang,Hreflang:Non-200 hreflang URLs,Hreflang:Unlinked hreflang URLs,Hreflang:Missing Return Links,Hreflang:Inconsistent Language & Region Return Links,Hreflang:Non-Canonical Return Links,Hreflang:Noindex Return Links,Hreflang:Incorrect Language & Region Codes,Hreflang:Multiple Entries,Hreflang:Missing Self Reference,Hreflang:Not Using Canonical,Hreflang:Missing X-Default,Hreflang:Missing,Images:All,Images:Over X KB,Images:Missing Alt Text,Images:Missing Alt Attribute,Images:Alt Text Over X Characters,Link Metrics:All,Meta Description:All,Meta Description:Missing,Meta Description:Duplicate,Meta Description:Over X Characters,Meta Description:Below X Characters,Meta Description:Over X Pixels,Meta Description:Below X Pixels,Meta Description:Multiple,Meta Keywords:All,Meta Keywords:Missing,Meta Keywords:Duplicate,Meta Keywords:Multiple,PageSpeed:All,PageSpeed:Eliminate Render-Blocking Resources,PageSpeed:Defer Offscreen Images,PageSpeed:Efficiently Encode Images,PageSpeed:Properly Size Images,PageSpeed:Minify CSS,PageSpeed:Minify JavaScript,PageSpeed:Reduce Unused CSS,PageSpeed:Reduce Unused JavaScript,PageSpeed:Serve Images in Next-Gen Formats,PageSpeed:Enable Text Compression,PageSpeed:Preconnect to Required Origins,PageSpeed:Reduce Server Response Times (TTFB),PageSpeed:Avoid Multiple Page Redirects,PageSpeed:Preload Key Requests,PageSpeed:Use Video Formats for Animated Content,PageSpeed:Avoid Excessive DOM Size,PageSpeed:Reduce JavaScript Execution Time,PageSpeed:Serve Static Assets with an Efficient Cache Policy,PageSpeed:Minimize Main-Thread Work,PageSpeed:Ensure Text Remains Visible During Webfont Load,PageSpeed:Image Elements Do Not Have Explicit Width & Height,PageSpeed:Avoid Large Layout Shifts,PageSpeed:Avoid Serving Legacy JavaScript to Modern Browsers,PageSpeed:Request Errors,Pagination:All,Pagination:Contains Pagination,Pagination:First Page,Pagination:Paginated 2+ Pages,Pagination:Pagination URL Not in Anchor Tag,Pagination:Non-200 Pagination URLs,Pagination:Unlinked Pagination URLs,Pagination:Non-Indexable,Pagination:Multiple Pagination URLs,Pagination:Pagination Loop,Pagination:Sequence Error,Response Codes:All,Response Codes:Blocked by Robots.txt,Response Codes:Blocked Resource,Response Codes:No Response,Response Codes:Success (2xx),Response Codes:Redirection (3xx),Response Codes:Redirection (JavaScript),Response Codes:Redirection (Meta Refresh),Response Codes:Client Error (4xx),Response Codes:Server Error (5xx),Security:All,Security:HTTP URLs,Security:HTTPS URLs,Security:Mixed Content,Security:Form URL Insecure,Security:Form on HTTP URL,Security:Unsafe Cross-Origin Links,Security:Missing HSTS Header,Security:Bad Content Type,Security:Missing X-Content-Type-Options Header,Security:Missing X-Frame-Options Header,Security:Protocol-Relative Resource Links,Security:Missing Content-Security-Policy Header,Security:Missing Secure Referrer-Policy Header,Sitemaps:All,Sitemaps:URLs in Sitemap,Sitemaps:URLs not in Sitemap,Sitemaps:Orphan URLs,Sitemaps:Non-Indexable URLs in Sitemap,Sitemaps:URLs in Multiple Sitemaps,Sitemaps:XML Sitemap with over 50k URLs,Sitemaps:XML Sitemap over 50MB,Structured Data:All,Structured Data:Contains Structured Data,Structured Data:Missing,Structured Data:Validation Errors,Structured Data:Validation Warnings,Structured Data:Parse Errors,Structured Data:Microdata URLs,Structured Data:JSON-LD URLs,Structured Data:RDFa URLs,Page Titles:All,Page Titles:Missing,Page Titles:Duplicate,Page Titles:Over X Characters,Page Titles:Below X Characters,Page Titles:Over X Pixels,Page Titles:Below X Pixels,Page Titles:Same as H1,Page Titles:Multiple,URL:All,URL:Non ASCII Characters,URL:Underscores,URL:Uppercase,URL:Parameters,URL:Over X Characters,URL:Multiple Slashes,URL:Repetitive Path,URL:Contains Space,URL:Broken Bookmark,URL:Internal Search,Depth 1,Depth 2,Depth 3,Depth 4,Depth 5,Depth 6,Depth 7,Depth 8,Depth 9,Depth 10+,Top Inlinks 1 URL,Top Inlinks 1 Number of Inlinks,Top Inlinks 2 URL,Top Inlinks 2 Number of Inlinks,Top Inlinks 3 URL,Top Inlinks 3 Number of Inlinks,Top Inlinks 4 URL,Top Inlinks 4 Number of Inlinks,Top Inlinks 5 URL,Top Inlinks 5 Number of Inlinks,Top Inlinks 6 URL,Top Inlinks 6 Number of Inlinks,Top Inlinks 7 URL,Top Inlinks 7 Number of Inlinks,Top Inlinks 8 URL,Top Inlinks 8 Number of Inlinks,Top Inlinks 9 URL,Top Inlinks 9 Number of Inlinks,Top Inlinks 10 URL,Top Inlinks 10 Number of Inlinks,Top Inlinks 11 URL,Top Inlinks 11 Number of Inlinks,Top Inlinks 12 URL,Top Inlinks 12 Number of Inlinks,Top Inlinks 13 URL,Top Inlinks 13 Number of Inlinks,Top Inlinks 14 URL,Top Inlinks 14 Number of Inlinks,Top Inlinks 15 URL,Top Inlinks 15 Number of Inlinks,Top Inlinks 16 URL,Top Inlinks 16 Number of Inlinks,Top Inlinks 17 URL,Top Inlinks 17 Number of Inlinks,Top Inlinks 18 URL,Top Inlinks 18 Number of Inlinks,Top Inlinks 19 URL,Top Inlinks 19 Number of Inlinks,Top Inlinks 20 URL,Top Inlinks 20 Number of Inlinks,Response Times 0s to 1s,Response Times 1s to 2s,Response Times 2s to 3s,Response Times 3s to 4s,Response Times 4s to 5s,Response Times 5s to 6s,Response Times 6s to 7s,Response Times 7s to 8s,Response Times 8s to 9s,Response Times 10s or more"
```
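If the full export list above is overkill for a quick test run, a stripped-down call with only the core flags from the same CLI looks like this (a minimal sketch -- the URL is a placeholder, and the config and output paths still come from the form fields above):
```
# Minimal headless crawl: same CLI, only the essential flags
!screamingfrogseospider --crawl "https://example.com" --headless --config "$config_path" --output-folder "$output_folder" --timestamped-output --save-crawl --export-tabs "Internal:All,Response Codes:Client Error (4xx)" --export-format xlsx
```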
# ✦ *Colab Still Alive Console Script:*
<p><font size=2px ><font color="red"> Tip - Set a JavaScript interval to click the connect button every 60 seconds. Open the developer tools in your web browser with Ctrl+Shift+I (on Mac press Option+Command+I), switch to the Console tab and paste the script at the console prompt.</font></p><b>Copy the script from the hidden cell and paste it into your browser console !!! KEEP YOUR BROWSER OPEN, OTHERWISE THE SCRIPT WILL STOP RUNNING</b>
<code>function ClickConnect(){
console.log("Working");
document.querySelector("colab-connect-button").click()
}setInterval(ClickConnect,60000)</code>
# *What you get in the end*
Ideally, the output is a folder stamped with the crawl date
containing the following Excel exports
**Tabs**:
```
Internal:All
Response Codes:All
Response Codes:Blocked by Robots.txt
Response Codes:Blocked Resource
Response Codes:No Response
Response Codes:Redirection (3xx)
Response Codes:Redirection (JavaScript)
Response Codes:Redirection (Meta Refresh)
Response Codes:Client Error (4xx)
Response Codes:Server Error (5xx)
Page Titles:All
Page Titles:Missing
Page Titles:Duplicate
Page Titles:Over X Characters
Page Titles:Below X Characters
Page Titles:Over X Pixels
Page Titles:Below X Pixels
Page Titles:Same as H1
Page Titles:Multiple
Meta Description:All
Meta Description:Missing
Meta Description:Duplicate
Meta Description:Over X Characters
Meta Description:Below X Characters
Meta Description:Over X Pixels
Meta Description:Below X Pixels
Meta Description:Multiple
Meta Keywords:All
Meta Keywords:Missing
Meta Keywords:Duplicate
Meta Keywords:Multiple
Canonicals:All
Canonicals:Contains Canonical
Canonicals:Self Referencing
Canonicals:Canonicalised
Canonicals:Missing
Canonicals:Multiple
Canonicals:Non-Indexable Canonical
Directives:All
Directives:Index
Directives:Noindex
Directives:Follow
Directives:Nofollow
Directives:None
Directives:NoArchive
Directives:NoSnippet
Directives:Max-Snippet
Directives:Max-Image-Preview
Directives:Max-Video-Preview
Directives:NoODP
Directives:NoYDIR
Directives:NoImageIndex
Directives:NoTranslate
Directives:Unavailable_After
Directives:Refresh
AMP:All
AMP:Non-200 Response
AMP:Missing Non-AMP Return Link
AMP:Missing Canonical to Non-AMP
AMP:Non-Indexable Canonical
AMP:Indexable
AMP:Non-Indexable
AMP:Missing <html amp> Tag
AMP:Missing/Invalid <!doctype html> Tag
AMP:Missing <head> Tag
AMP:Missing <body> Tag
AMP:Missing Canonical
AMP:Missing/Invalid <meta charset> Tag
AMP:Missing/Invalid <meta viewport> Tag
AMP:Missing/Invalid AMP Script
AMP:Missing/Invalid AMP Boilerplate
AMP:Contains Disallowed HTML
AMP:Other Validation Errors
Structured Data:All
Structured Data:Contains Structured Data
Structured Data:Missing
Structured Data:Validation Errors
Structured Data:Validation Warnings
Structured Data:Parse Errors
Structured Data:Microdata URLs
Structured Data:JSON-LD URLs
Structured Data:RDFa URLs
Sitemaps:All
Sitemaps:URLs in Sitemap
Sitemaps:URLs not in Sitemap
Sitemaps:Orphan URLs
Sitemaps:Non-Indexable URLs in Sitemap
Sitemaps:URLs in Multiple Sitemaps
Sitemaps:XML Sitemap with over 50k URLs
Sitemaps:XML Sitemap over 50MB
```
**Bulk exports**:
```
Canonicals:Contains Canonical Inlinks
Canonicals:Self Referencing Inlinks
Canonicals:Canonicalised Inlinks
Canonicals:Missing Inlinks
Canonicals:Multiple Inlinks
Canonicals:Non-Indexable Canonical Inlinks
AMP:All Inlinks
AMP:Non-200 Response Inlinks
AMP:Missing Non-AMP Return Link Inlinks
AMP:Missing Canonical to Non-AMP Inlinks
AMP:Non-Indexable Canonical Inlinks
AMP:Indexable Inlinks
AMP:Non-Indexable Inlinks
Structured Data:Contains Structured Data
Structured Data:Validation Errors
Structured Data:Validation Warnings
Structured Data:JSON-LD URLs
Structured Data:Microdata URLs
Structured Data:RDFa URLs
Sitemaps:URLs in Sitemap Inlinks
Sitemaps:Orphan URLs Inlinks
Sitemaps:Non-Indexable URLs in Sitemap Inlinks
Sitemaps:URLs in Multiple Sitemaps Inlinks
```
**Reports**:
```
Crawl Overview
Redirects:All Redirects
Redirects:Redirect Chains
Redirects:Redirect & Canonical Chains
Canonicals:Canonical Chains
Canonicals:Non-Indexable Canonicals
Pagination:Non-200 Pagination URLs
Pagination:Unlinked Pagination URLs
Hreflang:All hreflang URLs
Hreflang:Non-200 hreflang URLs
Hreflang:Unlinked hreflang URLs
Hreflang:Missing Return Links
Hreflang:Inconsistent Language & Region Return Links
Hreflang:Non Canonical Return Links
Hreflang:Noindex Return Links
Insecure Content
SERP Summary
Orphan Pages
Structured Data:Validation Errors & Warnings Summary
Structured Data:Validation Errors & Warnings
Structured Data:Google Rich Results Features Summary
Structured Data:Google Rich Results Features
HTTP Headers:HTTP Header Summary
Cookies:Cookie Summary
```
**Summary**:
```
Site Crawled
Date
Time
Total URLs Encountered
Total URLs Crawled
Total Internal blocked by robots.txt
Total External blocked by robots.txt
URLs Displayed
Total Internal URLs
Total External URLs
Total Internal Indexable URLs
Total Internal Non-Indexable URLs
JavaScript:All
JavaScript:Uses Old AJAX Crawling Scheme URLs
JavaScript:Uses Old AJAX Crawling Scheme Meta Fragment Tag
JavaScript:Page Title Only in Rendered HTML
JavaScript:Page Title Updated by JavaScript
JavaScript:H1 Only in Rendered HTML
JavaScript:H1 Updated by JavaScript
JavaScript:Meta Description Only in Rendered HTML
JavaScript:Meta Description Updated by JavaScript
JavaScript:Canonical Only in Rendered HTML
JavaScript:Canonical Mismatch
JavaScript:Noindex Only in Original HTML
JavaScript:Nofollow Only in Original HTML
JavaScript:Contains JavaScript Links
JavaScript:Contains JavaScript Content
JavaScript:Pages with Blocked Resources
H1:All
H1:Missing
H1:Duplicate
H1:Over X Characters
H1:Multiple
H2:All
H2:Missing
H2:Duplicate
H2:Over X Characters
H2:Multiple
Internal:All
Internal:HTML
Internal:JavaScript
Internal:CSS
Internal:Images
Internal:PDF
Internal:Flash
Internal:Other
Internal:Unknown
External:All
External:HTML
External:JavaScript
External:CSS
External:Images
External:PDF
External:Flash
External:Other
External:Unknown
AMP:All
AMP:Non-200 Response
AMP:Missing Non-AMP Return Link
AMP:Missing Canonical to Non-AMP
AMP:Non-Indexable Canonical
AMP:Indexable
AMP:Non-Indexable
AMP:Missing <html amp> Tag
AMP:Missing/Invalid <!doctype html> Tag
AMP:Missing <head> Tag
AMP:Missing <body> Tag
AMP:Missing Canonical
AMP:Missing/Invalid <meta charset> Tag
AMP:Missing/Invalid <meta viewport> Tag
AMP:Missing/Invalid AMP Script
AMP:Missing/Invalid AMP Boilerplate
AMP:Contains Disallowed HTML
AMP:Other Validation Errors
Canonicals:All
Canonicals:Contains Canonical
Canonicals:Self Referencing
Canonicals:Canonicalised
Canonicals:Missing
Canonicals:Multiple
Canonicals:Non-Indexable Canonical
Content:All
Content:Spelling Errors
Content:Grammar Errors
Content:Near Duplicates
Content:Exact Duplicates
Content:Low Content Pages
Custom Extraction:All
Custom Search:All
Directives:All
Directives:Index
Directives:Noindex
Directives:Follow
Directives:Nofollow
Directives:None
Directives:NoArchive
Directives:NoSnippet
Directives:Max-Snippet
Directives:Max-Image-Preview
Directives:Max-Video-Preview
Directives:NoODP
Directives:NoYDIR
Directives:NoImageIndex
Directives:NoTranslate
Directives:Unavailable_After
Directives:Refresh
Analytics:All
Analytics:Sessions Above 0
Analytics:Bounce Rate Above 70%
Analytics:No GA Data
Analytics:Non-Indexable with GA Data
Analytics:Orphan URLs
Search Console:All
Search Console:Clicks Above 0
Search Console:No GSC Data
Search Console:Non-Indexable with GSC Data
Search Console:Orphan URLs
Hreflang:All
Hreflang:Contains hreflang
Hreflang:Non-200 hreflang URLs
Hreflang:Unlinked hreflang URLs
Hreflang:Missing Return Links
Hreflang:Inconsistent Language & Region Return Links
Hreflang:Non-Canonical Return Links
Hreflang:Noindex Return Links
Hreflang:Incorrect Language & Region Codes
Hreflang:Multiple Entries
Hreflang:Missing Self Reference
Hreflang:Not Using Canonical
Hreflang:Missing X-Default
Hreflang:Missing
Images:All
Images:Over X KB
Images:Missing Alt Text
Images:Missing Alt Attribute
Images:Alt Text Over X Characters
Link Metrics:All
Meta Description:All
Meta Description:Missing
Meta Description:Duplicate
Meta Description:Over X Characters
Meta Description:Below X Characters
Meta Description:Over X Pixels
Meta Description:Below X Pixels
Meta Description:Multiple
Meta Keywords:All
Meta Keywords:Missing
Meta Keywords:Duplicate
Meta Keywords:Multiple
PageSpeed:All
PageSpeed:Eliminate Render-Blocking Resources
PageSpeed:Defer Offscreen Images
PageSpeed:Efficiently Encode Images
PageSpeed:Properly Size Images
PageSpeed:Minify CSS
PageSpeed:Minify JavaScript
PageSpeed:Reduce Unused CSS
PageSpeed:Reduce Unused JavaScript
PageSpeed:Serve Images in Next-Gen Formats
PageSpeed:Enable Text Compression
PageSpeed:Preconnect to Required Origins
PageSpeed:Reduce Server Response Times (TTFB)
PageSpeed:Avoid Multiple Page Redirects
PageSpeed:Preload Key Requests
PageSpeed:Use Video Formats for Animated Content
PageSpeed:Avoid Excessive DOM Size
PageSpeed:Reduce JavaScript Execution Time
PageSpeed:Serve Static Assets with an Efficient Cache Policy
PageSpeed:Minimize Main-Thread Work
PageSpeed:Ensure Text Remains Visible During Webfont Load
PageSpeed:Image Elements Do Not Have Explicit Width & Height
PageSpeed:Avoid Large Layout Shifts
PageSpeed:Avoid Serving Legacy JavaScript to Modern Browsers
PageSpeed:Request Errors
Pagination:All
Pagination:Contains Pagination
Pagination:First Page
Pagination:Paginated 2+ Pages
Pagination:Pagination URL Not in Anchor Tag
Pagination:Non-200 Pagination URLs
Pagination:Unlinked Pagination URLs
Pagination:Non-Indexable
Pagination:Multiple Pagination URLs
Pagination:Pagination Loop
Pagination:Sequence Error
Response Codes:All
Response Codes:Blocked by Robots.txt
Response Codes:Blocked Resource
Response Codes:No Response
Response Codes:Success (2xx)
Response Codes:Redirection (3xx)
Response Codes:Redirection (JavaScript)
Response Codes:Redirection (Meta Refresh)
Response Codes:Client Error (4xx)
Response Codes:Server Error (5xx)
Security:All
Security:HTTP URLs
Security:HTTPS URLs
Security:Mixed Content
Security:Form URL Insecure
Security:Form on HTTP URL
Security:Unsafe Cross-Origin Links
Security:Missing HSTS Header
Security:Bad Content Type
Security:Missing X-Content-Type-Options Header
Security:Missing X-Frame-Options Header
Security:Protocol-Relative Resource Links
Security:Missing Content-Security-Policy Header
Security:Missing Secure Referrer-Policy Header
Sitemaps:All
Sitemaps:URLs in Sitemap
Sitemaps:URLs not in Sitemap
Sitemaps:Orphan URLs
Sitemaps:Non-Indexable URLs in Sitemap
Sitemaps:URLs in Multiple Sitemaps
Sitemaps:XML Sitemap with over 50k URLs
Sitemaps:XML Sitemap over 50MB
Structured Data:All
Structured Data:Contains Structured Data
Structured Data:Missing
Structured Data:Validation Errors
Structured Data:Validation Warnings
Structured Data:Parse Errors
Structured Data:Microdata URLs
Structured Data:JSON-LD URLs
Structured Data:RDFa URLs
Page Titles:All
Page Titles:Missing
Page Titles:Duplicate
Page Titles:Over X Characters
Page Titles:Below X Characters
Page Titles:Over X Pixels
Page Titles:Below X Pixels
Page Titles:Same as H1
Page Titles:Multiple
URL:All
URL:Non ASCII Characters
URL:Underscores
URL:Uppercase
URL:Parameters
URL:Over X Characters
URL:Multiple Slashes
URL:Repetitive Path
URL:Contains Space
URL:Broken Bookmark
URL:Internal Search
Depth 1
Depth 2
Depth 3
Depth 4
Depth 5
Depth 6
Depth 7
Depth 8
Depth 9
Depth 10+
Top Inlinks 1 URL
Top Inlinks 1 Number of Inlinks
Top Inlinks 2 URL
Top Inlinks 2 Number of Inlinks
Top Inlinks 3 URL
Top Inlinks 3 Number of Inlinks
Top Inlinks 4 URL
Top Inlinks 4 Number of Inlinks
Top Inlinks 5 URL
Top Inlinks 5 Number of Inlinks
Top Inlinks 6 URL
Top Inlinks 6 Number of Inlinks
Top Inlinks 7 URL
Top Inlinks 7 Number of Inlinks
Top Inlinks 8 URL
Top Inlinks 8 Number of Inlinks
Top Inlinks 9 URL
Top Inlinks 9 Number of Inlinks
Top Inlinks 10 URL
Top Inlinks 10 Number of Inlinks
Top Inlinks 11 URL
Top Inlinks 11 Number of Inlinks
Top Inlinks 12 URL
Top Inlinks 12 Number of Inlinks
Top Inlinks 13 URL
Top Inlinks 13 Number of Inlinks
Top Inlinks 14 URL
Top Inlinks 14 Number of Inlinks
Top Inlinks 15 URL
Top Inlinks 15 Number of Inlinks
Top Inlinks 16 URL
Top Inlinks 16 Number of Inlinks
Top Inlinks 17 URL
Top Inlinks 17 Number of Inlinks
Top Inlinks 18 URL
Top Inlinks 18 Number of Inlinks
Top Inlinks 19 URL
Top Inlinks 19 Number of Inlinks
Top Inlinks 20 URL
Top Inlinks 20 Number of Inlinks
Response Times 0s to 1s
Response Times 1s to 2s
Response Times 2s to 3s
Response Times 3s to 4s
Response Times 4s to 5s
Response Times 5s to 6s
Response Times 6s to 7s
Response Times 7s to 8s
Response Times 8s to 9s
Response Times 10s or more" ```
## _*Using Qiskit Aqua for clique problems*_
This Qiskit Aqua Optimization notebook demonstrates how to use the VQE quantum algorithm to compute the clique of a given graph.
The problem is defined as follows. A clique in a graph $G$ is a complete subgraph of $G$. That is, it is a subset $K$ of the vertices such that every two vertices in $K$ are the two endpoints of an edge in $G$. A maximal clique is a clique to which no more vertices can be added. A maximum clique is a clique that includes the largest possible number of vertices.
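As a small, self-contained illustration of these definitions (an aside only — it assumes the `networkx` package, which is not used anywhere else in this notebook), the sketch below lists the maximal cliques of a tiny hand-built graph and picks out a maximum one:
```
import networkx as nx

# vertices 0, 1, 2 form a triangle; vertex 3 is attached only to vertex 2
G = nx.Graph([(0, 1), (0, 2), (1, 2), (2, 3)])

maximal_cliques = list(nx.find_cliques(G))      # all maximal cliques
maximum_clique = max(maximal_cliques, key=len)  # a clique of the largest possible size

print("maximal cliques:", maximal_cliques)      # [[0, 1, 2], [2, 3]] (order may vary)
print("a maximum clique:", maximum_clique)      # [0, 1, 2]
```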
We will go through three examples to show (1) how to run the optimization in the non-programming way, (2) how to run the optimization in the programming way, (3) how to run the optimization with the VQE.
We will omit the details for the support of CPLEX, which are explained in other notebooks such as maxcut.
Note that the solution may not be unique.
### The problem and a brute-force method.
```
import numpy as np
from qiskit import Aer
from qiskit_aqua import run_algorithm
from qiskit_aqua.input import EnergyInput
from qiskit_aqua.translators.ising import clique
from qiskit_aqua.algorithms import ExactEigensolver
```
First, let us have a look at the graph, which is given in adjacency-matrix form.
```
K = 3 # K means the size of the clique
np.random.seed(100)
num_nodes = 5
w = clique.random_graph(num_nodes, edge_prob=0.8, weight_range=10)
print(w)
```
Let us try a brute-force method. Basically, we exhaustively try all the binary assignments. In each binary assignment, the entry of a vertex is either 0 (meaning the vertex is not in the clique) or 1 (meaning the vertex is in the clique). We print the binary assignment that satisfies the definition of the clique (Note the size is specified as K).
```
def brute_force():
# brute-force way: try every possible assignment!
def bitfield(n, L):
result = np.binary_repr(n, L)
return [int(digit) for digit in result]
L = num_nodes # length of the bitstring that represents the assignment
max = 2**L
has_sol = False
for i in range(max):
cur = bitfield(i, L)
cur_v = clique.satisfy_or_not(np.array(cur), w, K)
if cur_v:
has_sol = True
break
return has_sol, cur
has_sol, sol = brute_force()
if has_sol:
print("solution is ", sol)
else:
print("no solution found for K=", K)
```
### Part I: run the optimization in the non-programming way
```
qubit_op, offset = clique.get_clique_qubitops(w, K)
algo_input = EnergyInput(qubit_op)
params = {
'problem': {'name': 'ising'},
'algorithm': {'name': 'ExactEigensolver'}
}
result = run_algorithm(params, algo_input)
x = clique.sample_most_likely(len(w), result['eigvecs'][0])
ising_sol = clique.get_graph_solution(x)
if clique.satisfy_or_not(ising_sol, w, K):
print("solution is", ising_sol)
else:
print("no solution found for K=", K)
```
### Part II: run the optimization in the programming way
```
algo = ExactEigensolver(algo_input.qubit_op, k=1, aux_operators=[])
result = algo.run()
x = clique.sample_most_likely(len(w), result['eigvecs'][0])
ising_sol = clique.get_graph_solution(x)
if clique.satisfy_or_not(ising_sol, w, K):
print("solution is", ising_sol)
else:
print("no solution found for K=", K)
```
### Part III: run the optimization with the VQE
```
algorithm_cfg = {
'name': 'VQE',
'operator_mode': 'matrix'
}
optimizer_cfg = {
'name': 'COBYLA'
}
var_form_cfg = {
'name': 'RY',
'depth': 5,
'entanglement': 'linear'
}
params = {
'problem': {'name': 'ising', 'random_seed': 10598},
'algorithm': algorithm_cfg,
'optimizer': optimizer_cfg,
'variational_form': var_form_cfg
}
backend = Aer.get_backend('statevector_simulator')
result = run_algorithm(params, algo_input, backend=backend)
x = clique.sample_most_likely(len(w), result['eigvecs'][0])
ising_sol = clique.get_graph_solution(x)
if clique.satisfy_or_not(ising_sol, w, K):
print("solution is", ising_sol)
else:
print("no solution found for K=", K)
```
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
```
# 1. Decision Trees for Classification (continued)
In the previous session we went over the idea behind Decision Trees:

Now let's work out **how the split in each node is chosen**, i.e. how the **model training** stage works. There are at least two reasons to understand this: first, it lets us solve classification problems with 3 or more classes; second, it gives us a way to compute the *importance* of features in a trained model.
To start, let's look at what kinds of decision trees there are.
----
Generally speaking, a decision tree **does not have to be binary**; in practice, however, binary trees are the ones used, because for any non-binary decision tree **an equivalent binary one can be built** (at the cost of increasing the depth of the tree).
### 1. Decision trees use a simple one-dimensional predicate to split the objects
This means that in each node the objects are split (and two new child nodes are created) based on **a single** feature:
*All objects whose value of some feature is below a threshold go to one node, and those above it go to the other:*
$$
[x_j < t]
$$
Generally speaking, this is not required at all: for example, one could fit an arbitrary model (say, logistic regression or KNN) in each individual node, considering several features at once.
### 2. Split quality criteria
We previously discussed a simple criterion for the quality of a split (i.e. **for choosing the threshold**): the number of errors (1 - accuracy).
In practice two criteria are used: the Gini impurity index and information gain.
**Gini index**
$$
I_{Gini} = 1 - \sum_i^K p_i^2
$$
where $K$ is the number of classes and $p_i = \frac{|n_i|}{n}$ is the fraction of objects of the $i$-th class in the given node.
**Entropy**
$$
H(p) = - \sum_i^K p_i\log(p_i)
$$
**Information gain**
$$
IG(p) = H(\text{parent}) - H(\text{child})
$$
#### The split is made on the threshold and the feature for which the weighted average of the quality criterion over the child nodes is smallest.
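To make this rule concrete, here is a minimal sketch (for illustration only — it is not how scikit-learn is implemented internally) of choosing the best threshold for a single feature by minimizing the weighted Gini impurity of the two child nodes:
```
import numpy as np

def gini(y):
    # Gini impurity of a vector of class labels
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1 - (p**2).sum()

def best_split_1d(x, y):
    # scan the midpoints between consecutive unique values of a single feature x
    # and return the threshold with the smallest weighted child impurity
    values = np.unique(x)
    thresholds = (values[:-1] + values[1:]) / 2
    best_t, best_score = None, np.inf
    for t in thresholds:
        left, right = y[x < t], y[x >= t]
        score = (len(left)*gini(left) + len(right)*gini(right)) / len(y)
        if score < best_score:
            best_t, best_score = t, score
    return best_t, best_score

x = np.array([1.0, 2.0, 3.0, 10.0, 11.0, 12.0])
y = np.array([0, 0, 0, 1, 1, 1])
best_split_1d(x, y)  # threshold 6.5 separates the classes perfectly (weighted impurity 0)
```
A full tree learner simply repeats this search over every feature in every node and keeps the (feature, threshold) pair with the lowest score.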
### 3. Stopping criteria
We have already discussed such Decision Tree parameters as the minimum number of objects in a leaf
and the minimum number of objects a node must contain in order to be split in two. Another criterion is
the depth of the tree. Others are possible as well.
* A limit on the number of objects in a leaf
* A limit on the number of objects a node must contain in order to be split
* A limit on the depth of the tree
* A minimum required gain in entropy or in the information criterion for a split to be made
* Stopping when all objects in a leaf belong to the same class
In the previous lecture we discussed a technique called **pruning**. It is an alternative to stopping criteria: first an overfitted tree is built, and then it is simplified in some way. In practice, for a number of reasons, stopping criteria are used more often than pruning.
For more details see https://github.com/esokolov/ml-course-hse/blob/master/2018-fall/lecture-notes/lecture07-trees.pdf
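In scikit-learn these stopping criteria map onto hyperparameters of `DecisionTreeClassifier`. A minimal sketch (the particular values below are arbitrary and chosen only for illustration):
```
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

tree = DecisionTreeClassifier(
    max_depth=4,                 # limit on the depth of the tree
    min_samples_leaf=5,          # minimum number of objects in a leaf
    min_samples_split=10,        # minimum number of objects in a node for it to be split
    min_impurity_decrease=1e-3,  # minimum required drop in impurity for a split
    random_state=0,
)
tree.fit(X, y)
print(tree.tree_.max_depth, tree.tree_.node_count)
```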
On the specifics of splitting continuous features:
* http://kevinmeurer.com/a-simple-guide-to-entropy-based-discretization/
* http://clear-lines.com/blog/post/Discretizing-a-continuous-variable-using-Entropy.aspx
---
## 1.1. Evaluating split quality in a node
```
def gini_impurity(y_current):
n = y_current.shape[0]
val, count = np.unique(y_current, return_counts=True)
gini = 1 - ((count/n)**2).sum()
return gini
def entropy(y_current):
    # returns sum_i p_i*log(p_i), i.e. the negative entropy -H(p);
    # the sign is flipped when plotting below
    n = y_current.shape[0]
    val, count = np.unique(y_current, return_counts=True)
    p = count/n
    return p.dot(np.log(p))
n = 100
Y_example = np.zeros((100,100))
for i in range(100):
for j in range(i, 100):
Y_example[i, j] = 1
gini = [gini_impurity(y) for y in Y_example]
ig = [-entropy(y) for y in Y_example]
plt.figure(figsize=(7,7))
plt.plot(np.linspace(0,1,100), gini, label='Index Gini');
plt.plot(np.linspace(0,1,100), ig, label ='Entropy');
plt.legend()
plt.xlabel('Fraction of positive-class\n examples')
plt.ylabel('Value of the optimized\n criterion');
```
## 1.2. A Decision Tree in action
The **Gini index** and the **information criterion** measure how homogeneous a set of labels is. Heterogeneity is maximal when the classes are represented in equal proportions, and homogeneity is maximal when the set contains objects of a single class.
By splitting a set of objects into two subsets, we aim to reduce the heterogeneity within each subset.
Let's look at this using Fisher's Iris dataset.
### Fisher's Iris dataset
```
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
iris = load_iris()
model = DecisionTreeClassifier()
model = model.fit(iris.data, iris.target)
feature_names = ['sepal length', 'sepal width', 'petal length', 'petal width']
target_names = ['setosa', 'versicolor', 'virginica']
model.feature_importances_
np.array(model.decision_path(iris.data).todense())[0]
np.array(model.decision_path(iris.data).todense())[90]
iris.data[0]
model.predict(iris.data)
model.tree_.node_count
```
### Digits. Interpretability
```
from sklearn.datasets import load_digits
X, y = load_digits(n_class=2, return_X_y=True)
plt.figure(figsize=(12,12))
for i in range(9):
ax = plt.subplot(3,3,i+1)
ax.imshow(X[i].reshape(8,8), cmap='gray')
from sklearn.metrics import accuracy_score
model = DecisionTreeClassifier()
model.fit(X, y)
y_pred = model.predict(X)
print(accuracy_score(y, y_pred))
print(X.shape)
np.array(model.decision_path(X).todense())[0]
model.feature_importances_
plt.imshow(model.feature_importances_.reshape(8,8));
from sklearn.tree import export_graphviz
export_graphviz(model, out_file='tree.dot', filled=True)
# #sudo apt-get install graphviz
# !dot -Tpng 'tree.dot' -o 'tree.png'
# 
np.array(model.decision_path(X).todense())[0]
plt.imshow(X[0].reshape(8,8))
```
## 2.3. Decision trees generalize easily to multi-class classification
### An example with handwritten digits
```
X, y = load_digits(n_class=10, return_X_y=True)
plt.figure(figsize=(12,12))
for i in range(9):
ax = plt.subplot(3,3,i+1)
ax.imshow(X[i].reshape(8,8), cmap='gray')
ax.set_title(y[i])
ax.set_xticks([])
ax.set_yticks([])
model = DecisionTreeClassifier()
model.fit(X, y)
y_pred = model.predict(X)
print(accuracy_score(y, y_pred))
plt.imshow(model.feature_importances_.reshape(8,8));
model.feature_importances_
```
### Question: where does feature importance come from?
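A sketch of the answer: scikit-learn's impurity-based importance accumulates, for every internal node, the (sample-weighted) decrease in impurity achieved by its split and credits it to the feature used in that node. The function below recomputes this from the fitted `model.tree_` arrays of the digits model above; it follows the idea rather than scikit-learn's exact code, so treat it as an approximation even though it normally matches `feature_importances_`:
```
import numpy as np

def impurity_based_importances(model):
    t = model.tree_
    importances = np.zeros(t.n_features)
    for node in range(t.node_count):
        left, right = t.children_left[node], t.children_right[node]
        if left == -1:  # leaf node: no split, no contribution
            continue
        decrease = (t.weighted_n_node_samples[node] * t.impurity[node]
                    - t.weighted_n_node_samples[left] * t.impurity[left]
                    - t.weighted_n_node_samples[right] * t.impurity[right])
        importances[t.feature[node]] += decrease
    importances /= t.weighted_n_node_samples[0]  # normalize by the root's sample weight
    return importances / importances.sum()       # scale so the importances sum to 1

np.allclose(impurity_based_importances(model), model.feature_importances_)
```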
## 2.4. An example where a decision tree builds a very complicated decision boundary
The example is taken from https://habr.com/ru/company/ods/blog/322534/#slozhnyy-sluchay-dlya-derevev-resheniy .
As we recall, trees use a one-dimensional predicate to split the set of objects.
This means that if the data are poorly separable along **each** individual feature taken on its own, the resulting decision rule can turn out to be very complicated.
```
from sklearn.tree import DecisionTreeClassifier
def form_linearly_separable_data(n=500, x1_min=0, x1_max=30, x2_min=0, x2_max=30):
data, target = [], []
for i in range(n):
x1, x2 = np.random.randint(x1_min, x1_max), np.random.randint(x2_min, x2_max)
if np.abs(x1 - x2) > 0.5:
data.append([x1, x2])
target.append(np.sign(x1 - x2))
return np.array(data), np.array(target)
X, y = form_linearly_separable_data()
plt.figure(figsize=(10,10))
plt.scatter(X[:, 0], X[:, 1], c=y, cmap='autumn');
```
Let's see what the data look like when projected onto a single axis.
```
plt.figure(figsize=(15,5))
ax1 = plt.subplot(1,2,1)
ax1.set_title('Projection onto the $X_0$ axis')
ax1.hist(X[y==1, 0], alpha=.3);
ax1.hist(X[y==-1, 0], alpha=.6);
ax2 = plt.subplot(1,2,2)
ax2.set_title('Projection onto the $X_1$ axis')
ax2.hist(X[y==1, 1], alpha=.3);
ax2.hist(X[y==-1, 1], alpha=.6);
def get_grid(data, eps=0.01):
x_min, x_max = data[:, 0].min() - 1, data[:, 0].max() + 1
y_min, y_max = data[:, 1].min() - 1, data[:, 1].max() + 1
return np.meshgrid(np.arange(x_min, x_max, eps),
np.arange(y_min, y_max, eps))
tree = DecisionTreeClassifier(random_state=17).fit(X, y)
xx, yy = get_grid(X, eps=.05)
predicted = tree.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
plt.figure(figsize=(10,10))
plt.pcolormesh(xx, yy, predicted, cmap='autumn', alpha=0.3)
plt.scatter(X[y==1, 0], X[y==1, 1], marker='x', s=100, cmap='autumn', linewidth=1.5)
plt.scatter(X[y==-1, 0], X[y==-1, 1], marker='o', s=100, cmap='autumn', edgecolors='k',linewidth=1.5)
plt.title('Easy task. Decision tree complicates everything');
# export_graphviz(tree, out_file='complex_tree.dot', filled=True)
# !dot -Tpng 'complex_tree.dot' -o 'complex_tree.png'
```
## 2.5. Decision trees for regression (briefly)
See sklearn.tree.DecisionTreeRegressor.
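A minimal toy sketch of the regression variant (illustration only): the tree predicts a piecewise-constant approximation of the target, and splits are chosen by minimizing a criterion such as the mean squared error instead of Gini/entropy.
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(0)
X = np.sort(5 * rng.rand(80, 1), axis=0)
y = np.sin(X).ravel() + 0.1 * rng.randn(80)

reg = DecisionTreeRegressor(max_depth=3).fit(X, y)

X_test = np.linspace(0, 5, 200).reshape(-1, 1)
plt.plot(X, y, '.', label='data')
plt.plot(X_test, reg.predict(X_test), label='tree prediction (max_depth=3)')
plt.legend();
```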
# 3. Ensembling trees. Random Forest.
What if we have several classifiers (each of them possibly not very *smart*) that make mistakes on different objects?
Then, if we use the *mode* of their predictions, we can count on better predictive power.
### Idea 1
How do we obtain models that make mistakes in different places?
Let's take *dumb* trees, but train them on **different subsets of features**!
### Idea 2
How do we obtain models that make mistakes in different places?
Let's take *dumb* trees, but train them on **different subsamples of objects**!
### The result: Random Forest.
See sklearn.ensemble.RandomForestClassifier.
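A minimal sketch of both ideas in scikit-learn (the hyperparameter values are arbitrary and only for illustration): `RandomForestClassifier` trains many trees on bootstrap subsamples of the objects (Idea 2) and considers a random subset of features at every split (Idea 1), then aggregates their votes.
```
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_digits(n_class=10, return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

forest = RandomForestClassifier(
    n_estimators=100,     # number of trees in the ensemble
    max_features='sqrt',  # random subset of features considered at each split (Idea 1)
    bootstrap=True,       # each tree is trained on a bootstrap subsample of objects (Idea 2)
    random_state=0,
)
forest.fit(X_train, y_train)
accuracy_score(y_test, forest.predict(X_test))
```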
# Datasets and Neural Networks
This notebook will step through the process of loading an arbitrary dataset in PyTorch, and creating a simple neural network for regression.
# Datasets
We will first work through loading an arbitrary dataset in PyTorch. For this project, we chose the <a href="http://www.cs.toronto.edu/~delve/data/abalone/desc.html">delve abalone dataset</a>.
First, download and unzip the dataset from the link above, then unzip `Dataset.data.gz` and move `Dataset.data` into `hackpack-ml/models/data`.
We are given the following attribute information in the spec:
```
Attributes:
1 sex u M F I # Gender or Infant (I)
2 length u (0,Inf] # Longest shell measurement (mm)
3 diameter u (0,Inf] # perpendicular to length (mm)
4 height u (0,Inf] # with meat in shell (mm)
5 whole_weight u (0,Inf] # whole abalone (gr)
6 shucked_weight u (0,Inf] # weight of meat (gr)
7 viscera_weight u (0,Inf] # gut weight (after bleeding) (gr)
8 shell_weight u (0,Inf] # after being dried (gr)
9 rings u 0..29 # +1.5 gives the age in years
```
```
import math
from tqdm import tqdm
import torch
import torch.nn as nn
import torch.optim as optim
import torch.utils.data as data
import torch.nn.functional as F
import pandas as pd
from torch.utils.data import Dataset, DataLoader
```
Pandas is a data manipulation library that works really well with structured data. We can use Pandas DataFrames to load the dataset.
```
col_names = ['sex', 'length', 'diameter', 'height', 'whole_weight',
'shucked_weight', 'viscera_weight', 'shell_weight', 'rings']
abalone_df = pd.read_csv('../data/Dataset.data', sep=' ', names=col_names)
abalone_df.head(n=3)
```
We define a subclass of PyTorch Dataset for our Abalone dataset.
```
class AbaloneDataset(data.Dataset):
"""Abalone dataset. Provides quick iteration over rows of data."""
def __init__(self, csv):
"""
Args: csv (string): Path to the Abalone dataset.
"""
self.features = ['sex', 'length', 'diameter', 'height', 'whole_weight',
'shucked_weight', 'viscera_weight', 'shell_weight']
self.y = ['rings']
self.abalone_df = pd.read_csv(csv, sep=' ', names=(self.features + self.y))
# Turn categorical data into machine interpretable format (one hot)
self.abalone_df['sex'] = pd.get_dummies(self.abalone_df['sex'])
def __len__(self):
return len(self.abalone_df)
def __getitem__(self, idx):
"""Return (x,y) pair where x are abalone features and y is age."""
features = self.abalone_df.iloc[idx][self.features].values
y = self.abalone_df.iloc[idx][self.y]
return torch.Tensor(features).float(), torch.Tensor(y).float()
```
# Neural Networks
The task is to predict the age (number of rings) of abalone from physical measurements. We build a simple neural network with one hidden layer to model the regression.
```
class Net(nn.Module):
def __init__(self, feature_size):
super(Net, self).__init__()
        # feature_size input features (8), 1 output
self.fc1 = nn.Linear(feature_size, 4)
self.fc2 = nn.Linear(4, 1)
def forward(self, x):
x = F.relu(self.fc1(x))
x = self.fc2(x)
return x
```
We instantiate an Abalone dataset instance and create DataLoaders for train and test sets.
```
dataset = AbaloneDataset('../data/Dataset.data')
train_split, test_split = math.floor(len(dataset) * 0.8), math.ceil(len(dataset) * 0.2)
trainset = [dataset[i] for i in range(train_split)]
testset = [dataset[train_split + j] for j in range(test_split)]
batch_sz = len(trainset) # Compact data allows for big batch size
trainloader = data.DataLoader(trainset, batch_size=batch_sz, shuffle=True, num_workers=4)
testloader = data.DataLoader(testset, batch_size=batch_sz, shuffle=False, num_workers=4)
```
Now, we can initialize our network and define train and test functions
```
net = Net(len(dataset.features))
loss_fn = nn.MSELoss()
optimizer = optim.Adam(net.parameters(), lr=0.1)
device = 'cuda' if torch.cuda.is_available() else 'cpu'
gpu_ids = [0] # On Colab, we have access to one GPU. Change this value as you see fit
def train(epoch):
"""
Trains our net on data from the trainloader for a single epoch
"""
net.train()
with tqdm(total=len(trainloader.dataset)) as progress_bar:
for batch_idx, (inputs, targets) in enumerate(trainloader):
inputs, targets = inputs.to(device), targets.to(device)
optimizer.zero_grad() # Clear any stored gradients for new step
outputs = net(inputs.float())
loss = loss_fn(outputs, targets) # Calculate loss between prediction and label
loss.backward() # Backpropagate gradient updates through net based on loss
optimizer.step() # Update net weights based on gradients
progress_bar.set_postfix(loss=loss.item())
progress_bar.update(inputs.size(0))
def test(epoch):
"""
Run net in inference mode on test data.
"""
net.eval()
# Ensures the net will not update weights
with torch.no_grad():
with tqdm(total=len(testloader.dataset)) as progress_bar:
for batch_idx, (inputs, targets) in enumerate(testloader):
inputs, targets = inputs.to(device).float(), targets.to(device).float()
outputs = net(inputs)
loss = loss_fn(outputs, targets)
progress_bar.set_postfix(testloss=loss.item())
progress_bar.update(inputs.size(0))
```
Now that everything is prepared, it's time to train!
```
test_freq = 5 # Frequency to run model on validation data
for epoch in range(0, 200):
train(epoch)
if epoch % test_freq == 0:
test(epoch)
```
We use the network's eval mode to do a sample prediction to see how well it does.
```
net.eval()
sample = testset[0]
predicted_age = net(sample[0])
true_age = sample[1]
print(f'Input features: {sample[0]}')
print(f'Predicted age: {predicted_age.item()}, True age: {true_age[0]}')
```
Congratulations! You now know how to load your own datasets into PyTorch and run models on it. For an example of Computer Vision, check out the DenseNet notebook. Happy hacking!
# Optimization with equality constraints
```
import math
import numpy as np
from scipy import optimize as opt
```
maximize $.4\,\log(x_1)+.6\,\log(x_2)$ s.t. $x_1+3\,x_2=50$.
```
I = 50
p = np.array([1, 3])
U = lambda x: (.4*math.log(x[0])+.6*math.log(x[1]))
x0 = (I/len(p))/np.array(p)
budget = ({'type': 'eq', 'fun': lambda x: I-np.sum(np.multiply(x, p))})
opt.minimize(lambda x: -U(x), x0, method='SLSQP', constraints=budget, tol=1e-08,
options={'disp': True, 'ftol': 1e-08})
def consumer(U, p, I):
budget = ({'type': 'eq', 'fun': lambda x: I-np.sum(np.multiply(x, p))})
x0 = (I/len(p))/np.array(p)
sol = opt.minimize(lambda x: -U(x), x0, method='SLSQP', constraints=budget, tol=1e-08,
options={'disp': False, 'ftol': 1e-08})
if sol.status == 0:
return {'x': sol.x, 'V': -sol.fun, 'MgU': -sol.jac, 'mult': -sol.jac[0]/p[0]}
else:
return 0
consumer(U, p, I)
delta=.01
(consumer(U, p, I+delta)['V']-consumer(U, p, I-delta)['V'])/(2*delta)
delta=.001
numerador = (consumer(U,p+np.array([delta, 0]), I)['V']-consumer(U,p+np.array([-delta, 0]), I)['V'])/(2*delta)
denominador = (consumer(U, p, I+delta)['V']-consumer(U, p, I-delta)['V'])/(2*delta)
-numerador/denominador
```
## Cost function
```
# Production function
F = lambda x: (x[0]**.8)*(x[1]**.2)
w = np.array([5, 4])
y = 1
constraint = ({'type': 'eq', 'fun': lambda x: y-F(x)})
x0 = np.array([.5, .5])
cost = opt.minimize(lambda x: w@x, x0, method='SLSQP', constraints=constraint, tol=1e-08,
options={'disp': True, 'ftol': 1e-08})
F(cost.x)
cost
```
## Exercise
```
a = 2
u = lambda c: -np.exp(-a*c)
R = 2
Z2 = np.array([.72, .92, 1.12, 1.32])
Z3 = np.array([.86, .96, 1.06, 1.16])
def U(x):
states = len(Z2)*len(Z3)
U = u(x[0])
for z2 in Z2:
for z3 in Z3:
U += (1/states)*u(x[1]*R+x[2]*z2+x[3]*z3)
return U
p = np.array([1, 1, .5, .5])
I = 4
# a=1
consumer(U, p, I)
# a=5
consumer(U, p, I)
# a=2
consumer(U, p, I)
import matplotlib.pyplot as plt
x = np.arange(0.0, 2.0, 0.01)
a = 2
u = lambda c: -np.exp(-a*c)
plt.plot(x, u(x))
a = -2
plt.plot(x, u(x))
```
# Optimization with inequality constraints
```
f = lambda x: -x[0]**3+x[1]**2-2*x[0]*(x[2]**2)
constraints =({'type': 'eq', 'fun': lambda x: 2*x[0]+x[1]**2+x[2]-5},
{'type': 'ineq', 'fun': lambda x: 5*x[0]**2-x[1]**2-x[2]-2})
# note: this reassignment overrides the constraints defined just above,
# so only the single equality constraint below is actually passed to the solver
constraints =({'type': 'eq', 'fun': lambda x: x[0]**3-x[1]})
x0 = np.array([.5, .5, 2])
opt.minimize(f, x0, method='SLSQP', constraints=constraints, tol=1e-08,
options={'disp': True, 'ftol': 1e-08})
```
# SAMUR Emergency Frequencies
This notebook explores how the frequency of different types of emergency changes with time in relation to different periods (hours of the day, days of the week, months of the year...) and locations in Madrid. This will be useful for constructing a realistic emergency generator in the city simulation.
Let's start with some imports and setup, and then read the table.
```
import pandas as pd
import datetime
import matplotlib.pyplot as plt
import yaml
%matplotlib inline
df = pd.read_csv("../data/emergency_data.csv")
df.head()
```
The column for the time of the call is a string, so let's change that into a timestamp.
```
df["time_call"] = pd.to_datetime(df["Solicitud"])
```
We will also need to assign a numerical code to each district of the city in order to properly vectorize the distribution and make it easier to work with alongside other parts of the project.
```
district_codes = {
'Centro': 1,
'Arganzuela': 2,
'Retiro': 3,
'Salamanca': 4,
'Chamartín': 5,
'Tetuán': 6,
'Chamberí': 7,
'Fuencarral - El Pardo': 8,
'Moncloa - Aravaca': 9,
'Latina': 10,
'Carabanchel': 11,
'Usera': 12,
'Puente de Vallecas': 13,
'Moratalaz': 14,
'Ciudad Lineal': 15,
'Hortaleza': 16,
'Villaverde': 17,
'Villa de Vallecas': 18,
'Vicálvaro': 19,
'San Blas - Canillejas': 20,
'Barajas': 21,
}
df["district_code"] = df.Distrito.apply(lambda x: district_codes[x])
```
Each emergency has already been assigned a severity level, depending on the nature of the reported emergency.
```
df["severity"] = df["Gravedad"]
```
We also need the hour, weekday and month of each event in order to place it in the various distributions.
```
df["hour"] = df["time_call"].apply(lambda x: x.hour) # From 0 to 23
df["weekday"] = df["time_call"].apply(lambda x: x.weekday()+1) # From 1 (Mon) to 7 (Sun)
df["month"] = df["time_call"].apply(lambda x: x.month)
```
Let's also strip down the dataset to just the columns we need right now.
```
df = df[["district_code", "severity", "time_call", "hour", "weekday", "month"]]
df.head()
```
We are going to group the distributions by severity.
```
emergencies_per_grav = df.severity.value_counts().sort_index().rename("total_emergencies")
emergencies_per_grav
```
We will also need the global frequency of the emergencies:
```
total_seconds = (df.time_call.max()-df.time_call.min()).total_seconds()
frequencies_per_grav = (emergencies_per_grav / total_seconds).rename("emergency_frequencies")
frequencies_per_grav
```
Each emergency will need to be assigned a district. Assuming that emergencies are distributed independently across districts and time, each one will be assigned to a district according to a global probability based on this dataset, as follows.
```
prob_per_district = (df.district_code.value_counts().sort_index()/df.district_code.value_counts().sum()).rename("district_weight")
prob_per_district
```
In order to simplify the generation of emergencies, we are going to assume that the distributions of emergencies per hour, per weekday and per month are independent, sharing no correlation. This is obviously not fully true, but it is a good approximation for the chosen time frames.
```
hourly_dist = (df.hour.value_counts()/df.hour.value_counts().mean()).sort_index().rename("hourly_distribution")
daily_dist = (df.weekday.value_counts()/df.weekday.value_counts().mean()).sort_index().rename("daily_distribution")
monthly_dist = (df.month.value_counts()/df.month.value_counts().mean()).sort_index().rename("monthly_distribution")
```
We will actually make one of these per severity level.
This will allow us to modify the base emergency density of a given severity as follows:
```
def emergency_density(gravity, hour, weekday, month):
base_density = frequencies_per_grav[gravity]
density = base_density * hourly_dist[hour] * daily_dist[weekday] * monthly_dist[month]
return density
emergency_density(3, 12, 4, 5) # Emergency frequency for severity level 3, at 12 hours of a thursday in May
```
In order for the model to read these distributions we will need to store them in a dict-like format, in this case YAML, which is easily readable by human or machine.
```
dists = {}
for severity in range(1, 6):
sub_df = df[df["severity"] == severity]
frequency = float(frequencies_per_grav.round(8)[severity])
hourly_dist = (sub_df.hour. value_counts()/sub_df.hour. value_counts().mean()).sort_index().round(5).to_dict()
daily_dist = (sub_df.weekday.value_counts()/sub_df.weekday.value_counts().mean()).sort_index().round(5).to_dict()
monthly_dist = (sub_df.month. value_counts()/sub_df.month. value_counts().mean()).sort_index().round(5).to_dict()
district_prob = (sub_df.district_code.value_counts()/sub_df.district_code.value_counts().sum()).sort_index().round(5).to_dict()
dists[severity] = {"frequency": frequency,
"hourly_dist": hourly_dist,
"daily_dist": daily_dist,
"monthly_dist": monthly_dist,
"district_prob": district_prob}
with open("../data/distributions.yaml", "w+") as f:
    yaml.dump(dists, f, allow_unicode=True)
```
We can now check that the dictionary stored in the YAML file is the same one we have created.
```
with open("../data/distributions.yaml") as dist_file:
yaml_dict = yaml.safe_load(dist_file)
yaml_dict == dists
```
# 1 - Sequence to Sequence Learning with Neural Networks
In this series we'll be building a machine learning model to go from one sequence to another, using PyTorch and torchtext. This will be done on German to English translations, but the models can be applied to any problem that involves going from one sequence to another, such as summarization, i.e. going from a sequence to a shorter sequence in the same language.
In this first notebook, we'll start simple to understand the general concepts by implementing the model from the [Sequence to Sequence Learning with Neural Networks](https://arxiv.org/abs/1409.3215) paper.
## Introduction
The most common sequence-to-sequence (seq2seq) models are *encoder-decoder* models, which commonly use a *recurrent neural network* (RNN) to *encode* the source (input) sentence into a single vector. In this notebook, we'll refer to this single vector as a *context vector*. We can think of the context vector as being an abstract representation of the entire input sentence. This vector is then *decoded* by a second RNN which learns to output the target (output) sentence by generating it one word at a time.

The above image shows an example translation. The input/source sentence, "guten morgen", is passed through the embedding layer (yellow) and then input into the encoder (green). We also append a *start of sequence* (`<sos>`) and *end of sequence* (`<eos>`) token to the start and end of sentence, respectively. At each time-step, the input to the encoder RNN is both the embedding, $e$, of the current word, $e(x_t)$, as well as the hidden state from the previous time-step, $h_{t-1}$, and the encoder RNN outputs a new hidden state $h_t$. We can think of the hidden state as a vector representation of the sentence so far. The RNN can be represented as a function of both of $e(x_t)$ and $h_{t-1}$:
$$h_t = \text{EncoderRNN}(e(x_t), h_{t-1})$$
We're using the term RNN generally here, it could be any recurrent architecture, such as an *LSTM* (Long Short-Term Memory) or a *GRU* (Gated Recurrent Unit).
Here, we have $X = \{x_1, x_2, ..., x_T\}$, where $x_1 = \text{<sos>}, x_2 = \text{guten}$, etc. The initial hidden state, $h_0$, is usually either initialized to zeros or a learned parameter.
Once the final word, $x_T$, has been passed into the RNN via the embedding layer, we use the final hidden state, $h_T$, as the context vector, i.e. $h_T = z$. This is a vector representation of the entire source sentence.
Now we have our context vector, $z$, we can start decoding it to get the output/target sentence, "good morning". Again, we append start and end of sequence tokens to the target sentence. At each time-step, the input to the decoder RNN (blue) is the embedding, $d$, of current word, $d(y_t)$, as well as the hidden state from the previous time-step, $s_{t-1}$, where the initial decoder hidden state, $s_0$, is the context vector, $s_0 = z = h_T$, i.e. the initial decoder hidden state is the final encoder hidden state. Thus, similar to the encoder, we can represent the decoder as:
$$s_t = \text{DecoderRNN}(d(y_t), s_{t-1})$$
Although the input/source embedding layer, $e$, and the output/target embedding layer, $d$, are both shown in yellow in the diagram they are two different embedding layers with their own parameters.
In the decoder, we need to go from the hidden state to an actual word, therefore at each time-step we use $s_t$ to predict (by passing it through a `Linear` layer, shown in purple) what we think is the next word in the sequence, $\hat{y}_t$.
$$\hat{y}_t = f(s_t)$$
The words in the decoder are always generated one after another, with one per time-step. We always use `<sos>` for the first input to the decoder, $y_1$, but for subsequent inputs, $y_{t>1}$, we will sometimes use the actual, ground truth next word in the sequence, $y_t$ and sometimes use the word predicted by our decoder, $\hat{y}_{t-1}$. This is called *teacher forcing*, see a bit more info about it [here](https://machinelearningmastery.com/teacher-forcing-for-recurrent-neural-networks/).
When training/testing our model, we always know how many words are in our target sentence, so we stop generating words once we hit that many. During inference it is common to keep generating words until the model outputs an `<eos>` token or after a certain amount of words have been generated.
Once we have our predicted target sentence, $\hat{Y} = \{ \hat{y}_1, \hat{y}_2, ..., \hat{y}_T \}$, we compare it against our actual target sentence, $Y = \{ y_1, y_2, ..., y_T \}$, to calculate our loss. We then use this loss to update all of the parameters in our model.
## Preparing Data
We'll be coding up the models in PyTorch and using torchtext to help us do all of the pre-processing required. We'll also be using spaCy to assist in the tokenization of the data.
```
import torch
import torch.nn as nn
import torch.optim as optim
from torchtext.legacy.datasets import Multi30k
from torchtext.legacy.data import Field, BucketIterator
import spacy
import numpy as np
import random
import math
import time
```
We'll set the random seeds for deterministic results.
```
SEED = 1234
random.seed(SEED)
np.random.seed(SEED)
torch.manual_seed(SEED)
torch.cuda.manual_seed(SEED)
torch.backends.cudnn.deterministic = True
```
Next, we'll create the tokenizers. A tokenizer is used to turn a string containing a sentence into a list of individual tokens that make up that string, e.g. "good morning!" becomes ["good", "morning", "!"]. We'll start talking about the sentences being a sequence of tokens from now, instead of saying they're a sequence of words. What's the difference? Well, "good" and "morning" are both words and tokens, but "!" is a token, not a word.
spaCy has a model for each language ("de_core_news_sm" for German and "en_core_web_sm" for English) which needs to be loaded so we can access the tokenizer of each model.
**Note**: the models must first be downloaded using the following on the command line:
```
python -m spacy download en_core_web_sm
python -m spacy download de_core_news_sm
```
We load the models as such:
```
spacy_de = spacy.load('de_core_news_sm')
spacy_en = spacy.load('en_core_web_sm')
```
Next, we create the tokenizer functions. These can be passed to torchtext and will take in the sentence as a string and return the sentence as a list of tokens.
In the paper we are implementing, they find it beneficial to reverse the order of the input which they believe "introduces many short term dependencies in the data that make the optimization problem much easier". We copy this by reversing the German sentence after it has been transformed into a list of tokens.
```
def tokenize_de(text):
"""
Tokenizes German text from a string into a list of strings (tokens) and reverses it
"""
return [tok.text for tok in spacy_de.tokenizer(text)][::-1]
def tokenize_en(text):
"""
Tokenizes English text from a string into a list of strings (tokens)
"""
return [tok.text for tok in spacy_en.tokenizer(text)]
```
torchtext's `Field`s handle how data should be processed. All of the possible arguments are detailed [here](https://github.com/pytorch/text/blob/master/torchtext/data/field.py#L61).
We set the `tokenize` argument to the correct tokenization function for each, with German being the `SRC` (source) field and English being the `TRG` (target) field. The field also appends the "start of sequence" and "end of sequence" tokens via the `init_token` and `eos_token` arguments, and converts all words to lowercase.
```
SRC = Field(tokenize = tokenize_de,
init_token = '<sos>',
eos_token = '<eos>',
lower = True)
TRG = Field(tokenize = tokenize_en,
init_token = '<sos>',
eos_token = '<eos>',
lower = True)
```
Next, we download and load the train, validation and test data.
The dataset we'll be using is the [Multi30k dataset](https://github.com/multi30k/dataset). This is a dataset with ~30,000 parallel English, German and French sentences, each with ~12 words per sentence.
`exts` specifies which languages to use as the source and target (source goes first) and `fields` specifies which field to use for the source and target.
```
train_data, valid_data, test_data = Multi30k.splits(exts = ('.de', '.en'),
fields = (SRC, TRG))
```
We can double check that we've loaded the right number of examples:
```
print(f"Number of training examples: {len(train_data.examples)}")
print(f"Number of validation examples: {len(valid_data.examples)}")
print(f"Number of testing examples: {len(test_data.examples)}")
```
We can also print out an example, making sure the source sentence is reversed:
```
print(vars(train_data.examples[0]))
```
The period is at the beginning of the German (src) sentence, so it looks like the sentence has been correctly reversed.
Next, we'll build the *vocabulary* for the source and target languages. The vocabulary is used to associate each unique token with an index (an integer). The vocabularies of the source and target languages are distinct.
Using the `min_freq` argument, we only allow tokens that appear at least 2 times to appear in our vocabulary. Tokens that appear only once are converted into an `<unk>` (unknown) token.
It is important to note that our vocabulary should only be built from the training set and not the validation/test set. This prevents "information leakage" into our model, giving us artificially inflated validation/test scores.
```
SRC.build_vocab(train_data, min_freq = 2)
TRG.build_vocab(train_data, min_freq = 2)
print(f"Unique tokens in source (de) vocabulary: {len(SRC.vocab)}")
print(f"Unique tokens in target (en) vocabulary: {len(TRG.vocab)}")
```
The final step of preparing the data is to create the iterators. These can be iterated on to return a batch of data which will have a `src` attribute (the PyTorch tensors containing a batch of numericalized source sentences) and a `trg` attribute (the PyTorch tensors containing a batch of numericalized target sentences). Numericalized is just a fancy way of saying they have been converted from a sequence of readable tokens to a sequence of corresponding indexes, using the vocabulary.
We also need to define a `torch.device`. This is used to tell torchText to put the tensors on the GPU or not. We use the `torch.cuda.is_available()` function, which will return `True` if a GPU is detected on our computer. We pass this `device` to the iterator.
When we get a batch of examples using an iterator we need to make sure that all of the source sentences are padded to the same length, the same with the target sentences. Luckily, torchText iterators handle this for us!
We use a `BucketIterator` instead of the standard `Iterator` as it creates batches in such a way that it minimizes the amount of padding in both the source and target sentences.
```
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
BATCH_SIZE = 128
train_iterator, valid_iterator, test_iterator = BucketIterator.splits(
(train_data, valid_data, test_data),
batch_size = BATCH_SIZE,
device = device)
```
## Building the Seq2Seq Model
We'll be building our model in three parts. The encoder, the decoder and a seq2seq model that encapsulates the encoder and decoder and will provide a way to interface with each.
### Encoder
First, the encoder, a 2 layer LSTM. The paper we are implementing uses a 4-layer LSTM, but in the interest of training time we cut this down to 2-layers. The concept of multi-layer RNNs is easy to expand from 2 to 4 layers.
For a multi-layer RNN, the input sentence, $X$, after being embedded goes into the first (bottom) layer of the RNN and hidden states, $H=\{h_1, h_2, ..., h_T\}$, output by this layer are used as inputs to the RNN in the layer above. Thus, representing each layer with a superscript, the hidden states in the first layer are given by:
$$h_t^1 = \text{EncoderRNN}^1(e(x_t), h_{t-1}^1)$$
The hidden states in the second layer are given by:
$$h_t^2 = \text{EncoderRNN}^2(h_t^1, h_{t-1}^2)$$
Using a multi-layer RNN also means we'll also need an initial hidden state as input per layer, $h_0^l$, and we will also output a context vector per layer, $z^l$.
Without going into too much detail about LSTMs (see [this](https://colah.github.io/posts/2015-08-Understanding-LSTMs/) blog post to learn more about them), all we need to know is that they're a type of RNN which instead of just taking in a hidden state and returning a new hidden state per time-step, also take in and return a *cell state*, $c_t$, per time-step.
$$\begin{align*}
h_t &= \text{RNN}(e(x_t), h_{t-1})\\
(h_t, c_t) &= \text{LSTM}(e(x_t), h_{t-1}, c_{t-1})
\end{align*}$$
We can just think of $c_t$ as another type of hidden state. Similar to $h_0^l$, $c_0^l$ will be initialized to a tensor of all zeros. Also, our context vector will now be both the final hidden state and the final cell state, i.e. $z^l = (h_T^l, c_T^l)$.
Extending our multi-layer equations to LSTMs, we get:
$$\begin{align*}
(h_t^1, c_t^1) &= \text{EncoderLSTM}^1(e(x_t), (h_{t-1}^1, c_{t-1}^1))\\
(h_t^2, c_t^2) &= \text{EncoderLSTM}^2(h_t^1, (h_{t-1}^2, c_{t-1}^2))
\end{align*}$$
Note how only our hidden state from the first layer is passed as input to the second layer, and not the cell state.
So our encoder looks something like this:

We create this in code by making an `Encoder` module, which requires we inherit from `torch.nn.Module` and use the `super().__init__()` as some boilerplate code. The encoder takes the following arguments:
- `input_dim` is the size/dimensionality of the one-hot vectors that will be input to the encoder. This is equal to the input (source) vocabulary size.
- `emb_dim` is the dimensionality of the embedding layer. This layer converts the one-hot vectors into dense vectors with `emb_dim` dimensions.
- `hid_dim` is the dimensionality of the hidden and cell states.
- `n_layers` is the number of layers in the RNN.
- `dropout` is the amount of dropout to use. This is a regularization parameter to prevent overfitting. Check out [this](https://www.coursera.org/lecture/deep-neural-network/understanding-dropout-YaGbR) for more details about dropout.
We aren't going to discuss the embedding layer in detail during these tutorials. All we need to know is that there is a step before the words - technically, the indexes of the words - are passed into the RNN, where the words are transformed into vectors. To read more about word embeddings, check these articles: [1](https://monkeylearn.com/blog/word-embeddings-transform-text-numbers/), [2](http://p.migdal.pl/2017/01/06/king-man-woman-queen-why.html), [3](http://mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model/), [4](http://mccormickml.com/2017/01/11/word2vec-tutorial-part-2-negative-sampling/).
The embedding layer is created using `nn.Embedding`, the LSTM with `nn.LSTM` and a dropout layer with `nn.Dropout`. Check the PyTorch [documentation](https://pytorch.org/docs/stable/nn.html) for more about these.
One thing to note is that the `dropout` argument to the LSTM is how much dropout to apply between the layers of a multi-layer RNN, i.e. between the hidden states output from layer $l$ and those same hidden states being used for the input of layer $l+1$.
In the `forward` method, we pass in the source sentence, $X$, which is converted into dense vectors using the `embedding` layer, and then dropout is applied. These embeddings are then passed into the RNN. As we pass a whole sequence to the RNN, it will automatically do the recurrent calculation of the hidden states over the whole sequence for us! Notice that we do not pass an initial hidden or cell state to the RNN. This is because, as noted in the [documentation](https://pytorch.org/docs/stable/nn.html#torch.nn.LSTM), that if no hidden/cell state is passed to the RNN, it will automatically create an initial hidden/cell state as a tensor of all zeros.
The RNN returns: `outputs` (the top-layer hidden state for each time-step), `hidden` (the final hidden state for each layer, $h_T$, stacked on top of each other) and `cell` (the final cell state for each layer, $c_T$, stacked on top of each other).
As we only need the final hidden and cell states (to make our context vector), `forward` only returns `hidden` and `cell`.
The sizes of each of the tensors is left as comments in the code. In this implementation `n_directions` will always be 1, however note that bidirectional RNNs (covered in tutorial 3) will have `n_directions` as 2.
```
class Encoder(nn.Module):
def __init__(self, input_dim, emb_dim, hid_dim, n_layers, dropout):
super().__init__()
self.hid_dim = hid_dim
self.n_layers = n_layers
self.embedding = nn.Embedding(input_dim, emb_dim)
self.rnn = nn.LSTM(emb_dim, hid_dim, n_layers, dropout = dropout)
self.dropout = nn.Dropout(dropout)
def forward(self, src):
#src = [src len, batch size]
embedded = self.dropout(self.embedding(src))
#embedded = [src len, batch size, emb dim]
outputs, (hidden, cell) = self.rnn(embedded)
#outputs = [src len, batch size, hid dim * n directions]
#hidden = [n layers * n directions, batch size, hid dim]
#cell = [n layers * n directions, batch size, hid dim]
#outputs are always from the top hidden layer
return hidden, cell
```
### Decoder
Next, we'll build our decoder, which will also be a 2-layer (4 in the paper) LSTM.

The `Decoder` class does a single step of decoding, i.e. it outputs a single token per time-step. The first layer will receive a hidden and cell state from the previous time-step, $(s_{t-1}^1, c_{t-1}^1)$, and feeds it through the LSTM with the current embedded token, $y_t$, to produce a new hidden and cell state, $(s_t^1, c_t^1)$. The subsequent layers will use the hidden state from the layer below, $s_t^{l-1}$, and the previous hidden and cell states from their layer, $(s_{t-1}^l, c_{t-1}^l)$. This provides equations very similar to those in the encoder.
$$\begin{align*}
(s_t^1, c_t^1) = \text{DecoderLSTM}^1(d(y_t), (s_{t-1}^1, c_{t-1}^1))\\
(s_t^2, c_t^2) = \text{DecoderLSTM}^2(s_t^1, (s_{t-1}^2, c_{t-1}^2))
\end{align*}$$
Remember that the initial hidden and cell states to our decoder are our context vectors, which are the final hidden and cell states of our encoder from the same layer, i.e. $(s_0^l,c_0^l)=z^l=(h_T^l,c_T^l)$.
We then pass the hidden state from the top layer of the RNN, $s_t^L$, through a linear layer, $f$, to make a prediction of what the next token in the target (output) sequence should be, $\hat{y}_{t+1}$.
$$\hat{y}_{t+1} = f(s_t^L)$$
The arguments and initialization are similar to the `Encoder` class, except we now have an `output_dim` which is the size of the vocabulary for the output/target. There is also the addition of the `Linear` layer, used to make the predictions from the top layer hidden state.
Within the `forward` method, we accept a batch of input tokens, previous hidden states and previous cell states. As we are only decoding one token at a time, the input tokens will always have a sequence length of 1. We `unsqueeze` the input tokens to add a sentence length dimension of 1. Then, similar to the encoder, we pass through an embedding layer and apply dropout. This batch of embedded tokens is then passed into the RNN with the previous hidden and cell states. This produces an `output` (hidden state from the top layer of the RNN), a new `hidden` state (one for each layer, stacked on top of each other) and a new `cell` state (also one per layer, stacked on top of each other). We then pass the `output` (after getting rid of the sentence length dimension) through the linear layer to receive our `prediction`. We then return the `prediction`, the new `hidden` state and the new `cell` state.
**Note**: as we always have a sequence length of 1, we could use `nn.LSTMCell`, instead of `nn.LSTM`, as it is designed to handle a batch of inputs that aren't necessarily in a sequence. `nn.LSTMCell` is just a single cell and `nn.LSTM` is a wrapper around potentially multiple cells. Using the `nn.LSTMCell` in this case would mean we don't have to `unsqueeze` to add a fake sequence length dimension, but we would need one `nn.LSTMCell` per layer in the decoder and to ensure each `nn.LSTMCell` receives the correct initial hidden state from the encoder. All of this makes the code less concise - hence the decision to stick with the regular `nn.LSTM`.
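For the curious, here is a rough sketch of what a single decoding step could look like with `nn.LSTMCell` (this module is not used anywhere in this tutorial, and unlike the `Decoder` below it omits the inter-layer dropout that `nn.LSTM` applies):
```
import torch
import torch.nn as nn

class LSTMCellDecoderStep(nn.Module):
    def __init__(self, output_dim, emb_dim, hid_dim, n_layers, dropout):
        super().__init__()
        self.embedding = nn.Embedding(output_dim, emb_dim)
        # one LSTMCell per layer; layer 0 takes the embedding, the rest take the layer below
        self.cells = nn.ModuleList([nn.LSTMCell(emb_dim if l == 0 else hid_dim, hid_dim)
                                    for l in range(n_layers)])
        self.fc_out = nn.Linear(hid_dim, output_dim)
        self.dropout = nn.Dropout(dropout)

    def forward(self, input, hidden, cell):
        #input = [batch size], hidden/cell = [n layers, batch size, hid dim]
        x = self.dropout(self.embedding(input))          #[batch size, emb dim]
        new_hidden, new_cell = [], []
        for l, lstm_cell in enumerate(self.cells):
            h, c = lstm_cell(x, (hidden[l], cell[l]))    #each [batch size, hid dim]
            new_hidden.append(h)
            new_cell.append(c)
            x = h                                        #hidden state feeds the layer above
        prediction = self.fc_out(x)                      #[batch size, output dim]
        return prediction, torch.stack(new_hidden), torch.stack(new_cell)
```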
```
class Decoder(nn.Module):
def __init__(self, output_dim, emb_dim, hid_dim, n_layers, dropout):
super().__init__()
self.output_dim = output_dim
self.hid_dim = hid_dim
self.n_layers = n_layers
self.embedding = nn.Embedding(output_dim, emb_dim)
self.rnn = nn.LSTM(emb_dim, hid_dim, n_layers, dropout = dropout)
self.fc_out = nn.Linear(hid_dim, output_dim)
self.dropout = nn.Dropout(dropout)
def forward(self, input, hidden, cell):
#input = [batch size]
#hidden = [n layers * n directions, batch size, hid dim]
#cell = [n layers * n directions, batch size, hid dim]
#n directions in the decoder will both always be 1, therefore:
#hidden = [n layers, batch size, hid dim]
#context = [n layers, batch size, hid dim]
input = input.unsqueeze(0)
#input = [1, batch size]
embedded = self.dropout(self.embedding(input))
#embedded = [1, batch size, emb dim]
output, (hidden, cell) = self.rnn(embedded, (hidden, cell))
#output = [seq len, batch size, hid dim * n directions]
#hidden = [n layers * n directions, batch size, hid dim]
#cell = [n layers * n directions, batch size, hid dim]
#seq len and n directions will always be 1 in the decoder, therefore:
#output = [1, batch size, hid dim]
#hidden = [n layers, batch size, hid dim]
#cell = [n layers, batch size, hid dim]
prediction = self.fc_out(output.squeeze(0))
#prediction = [batch size, output dim]
return prediction, hidden, cell
```
### Seq2Seq
For the final part of the implementation, we'll implement the seq2seq model. This will handle:
- receiving the input/source sentence
- using the encoder to produce the context vectors
- using the decoder to produce the predicted output/target sentence
Our full model will look like this:

The `Seq2Seq` model takes in an `Encoder`, `Decoder`, and a `device` (used to place tensors on the GPU, if it exists).
For this implementation, we have to ensure that the number of layers and the hidden (and cell) dimensions are equal in the `Encoder` and `Decoder`. This is not always the case; we do not necessarily need the same number of layers or the same hidden dimension sizes in a sequence-to-sequence model. However, if we did something like having a different number of layers then we would need to make decisions about how this is handled. For example, if our encoder has 2 layers and our decoder only has 1, how is this handled? Do we average the two context vectors output by the encoder? Do we pass both through a linear layer? Do we only use the context vector from the highest layer? Etc.
Our `forward` method takes the source sentence, target sentence and a teacher forcing ratio. The teacher forcing ratio is used when training our model. When decoding, at each time-step we will predict what the next token in the target sequence will be from the previous tokens decoded, $\hat{y}_{t+1}=f(s_t^L)$. With probability equal to the teacher forcing ratio (`teacher_forcing_ratio`) we will use the actual ground-truth next token in the sequence as the input to the decoder during the next time-step. However, with probability `1 - teacher_forcing_ratio`, we will use the token that the model predicted as the next input to the model, even if it doesn't match the actual next token in the sequence.
The first thing we do in the `forward` method is to create an `outputs` tensor that will store all of our predictions, $\hat{Y}$.
We then feed the input/source sentence, `src`, into the encoder and receive our final hidden and cell states.
The first input to the decoder is the start of sequence (`<sos>`) token. As our `trg` tensor already has the `<sos>` token appended (all the way back when we defined the `init_token` in our `TRG` field) we get our $y_1$ by slicing into it. We know how long our target sentences should be (`trg_len`), so we loop that many times. The last token input into the decoder is the one **before** the `<eos>` token - the `<eos>` token is never input into the decoder.
During each iteration of the loop, we:
- pass the input, previous hidden and previous cell states ($y_t, s_{t-1}, c_{t-1}$) into the decoder
- receive a prediction, next hidden state and next cell state ($\hat{y}_{t+1}, s_{t}, c_{t}$) from the decoder
- place our prediction, $\hat{y}_{t+1}$/`output` in our tensor of predictions, $\hat{Y}$/`outputs`
- decide if we are going to "teacher force" or not
- if we do, the next `input` is the ground-truth next token in the sequence, $y_{t+1}$/`trg[t]`
- if we don't, the next `input` is the predicted next token in the sequence, $\hat{y}_{t+1}$/`top1`, which we get by doing an `argmax` over the output tensor
Once we've made all of our predictions, we return our tensor full of predictions, $\hat{Y}$/`outputs`.
**Note**: our decoder loop starts at 1, not 0. This means the 0th element of our `outputs` tensor remains all zeros. So our `trg` and `outputs` look something like:
$$\begin{align*}
\text{trg} = [<sos>, &y_1, y_2, y_3, <eos>]\\
\text{outputs} = [0, &\hat{y}_1, \hat{y}_2, \hat{y}_3, <eos>]
\end{align*}$$
Later on when we calculate the loss, we cut off the first element of each tensor to get:
$$\begin{align*}
\text{trg} = [&y_1, y_2, y_3, <eos>]\\
\text{outputs} = [&\hat{y}_1, \hat{y}_2, \hat{y}_3, <eos>]
\end{align*}$$
```
class Seq2Seq(nn.Module):
def __init__(self, encoder, decoder, device):
super().__init__()
self.encoder = encoder
self.decoder = decoder
self.device = device
assert encoder.hid_dim == decoder.hid_dim, \
"Hidden dimensions of encoder and decoder must be equal!"
assert encoder.n_layers == decoder.n_layers, \
"Encoder and decoder must have equal number of layers!"
def forward(self, src, trg, teacher_forcing_ratio = 0.5):
#src = [src len, batch size]
#trg = [trg len, batch size]
#teacher_forcing_ratio is probability to use teacher forcing
#e.g. if teacher_forcing_ratio is 0.75 we use ground-truth inputs 75% of the time
batch_size = trg.shape[1]
trg_len = trg.shape[0]
trg_vocab_size = self.decoder.output_dim
#tensor to store decoder outputs
outputs = torch.zeros(trg_len, batch_size, trg_vocab_size).to(self.device)
#last hidden state of the encoder is used as the initial hidden state of the decoder
hidden, cell = self.encoder(src)
#first input to the decoder is the <sos> tokens
input = trg[0,:]
for t in range(1, trg_len):
#insert input token embedding, previous hidden and previous cell states
#receive output tensor (predictions) and new hidden and cell states
output, hidden, cell = self.decoder(input, hidden, cell)
#place predictions in a tensor holding predictions for each token
outputs[t] = output
#decide if we are going to use teacher forcing or not
teacher_force = random.random() < teacher_forcing_ratio
#get the highest predicted token from our predictions
top1 = output.argmax(1)
#if teacher forcing, use actual next token as next input
#if not, use predicted token
input = trg[t] if teacher_force else top1
return outputs
```
# Training the Seq2Seq Model
Now we have our model implemented, we can begin training it.
First, we'll initialize our model. As mentioned before, the input and output dimensions are defined by the size of the vocabulary. The embedding dimensions and dropout for the encoder and decoder can be different, but the number of layers and the size of the hidden/cell states must be the same.
We then define the encoder, decoder and then our Seq2Seq model, which we place on the `device`.
```
INPUT_DIM = len(SRC.vocab)
OUTPUT_DIM = len(TRG.vocab)
ENC_EMB_DIM = 256
DEC_EMB_DIM = 256
HID_DIM = 512
N_LAYERS = 2
ENC_DROPOUT = 0.5
DEC_DROPOUT = 0.5
enc = Encoder(INPUT_DIM, ENC_EMB_DIM, HID_DIM, N_LAYERS, ENC_DROPOUT)
dec = Decoder(OUTPUT_DIM, DEC_EMB_DIM, HID_DIM, N_LAYERS, DEC_DROPOUT)
model = Seq2Seq(enc, dec, device).to(device)
```
Next up is initializing the weights of our model. In the paper they state they initialize all weights from a uniform distribution between -0.08 and +0.08, i.e. $\mathcal{U}(-0.08, 0.08)$.
We initialize weights in PyTorch by creating a function which we `apply` to our model. When using `apply`, the `init_weights` function will be called on every module and sub-module within our model. For each module we loop through all of the parameters and sample them from a uniform distribution with `nn.init.uniform_`.
```
def init_weights(m):
for name, param in m.named_parameters():
nn.init.uniform_(param.data, -0.08, 0.08)
model.apply(init_weights)
```
We also define a function that will calculate the number of trainable parameters in the model.
```
def count_parameters(model):
return sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f'The model has {count_parameters(model):,} trainable parameters')
```
We define our optimizer, which we use to update our parameters in the training loop. Check out [this](http://ruder.io/optimizing-gradient-descent/) post for information about different optimizers. Here, we'll use Adam.
```
optimizer = optim.Adam(model.parameters())
```
Next, we define our loss function. The `CrossEntropyLoss` function calculates both the log softmax as well as the negative log-likelihood of our predictions.
Our loss function calculates the average loss per token, however by passing the index of the `<pad>` token as the `ignore_index` argument we ignore the loss whenever the target token is a padding token.
```
TRG_PAD_IDX = TRG.vocab.stoi[TRG.pad_token]
criterion = nn.CrossEntropyLoss(ignore_index = TRG_PAD_IDX)
```
Next, we'll define our training loop.
First, we'll set the model into "training mode" with `model.train()`. This will turn on dropout (and batch normalization, which we aren't using) and then iterate through our data iterator.
As stated before, our decoder loop starts at 1, not 0. This means the 0th element of our `outputs` tensor remains all zeros. So our `trg` and `outputs` look something like:
$$\begin{align*}
\text{trg} = [<sos>, &y_1, y_2, y_3, <eos>]\\
\text{outputs} = [0, &\hat{y}_1, \hat{y}_2, \hat{y}_3, <eos>]
\end{align*}$$
Here, when we calculate the loss, we cut off the first element of each tensor to get:
$$\begin{align*}
\text{trg} = [&y_1, y_2, y_3, <eos>]\\
\text{outputs} = [&\hat{y}_1, \hat{y}_2, \hat{y}_3, <eos>]
\end{align*}$$
At each iteration:
- get the source and target sentences from the batch, $X$ and $Y$
- zero the gradients calculated from the last batch
- feed the source and target into the model to get the output, $\hat{Y}$
- as the loss function only works on 2d inputs with 1d targets we need to flatten each of them with `.view`
- we slice off the first column of the output and target tensors as mentioned above
- calculate the gradients with `loss.backward()`
- clip the gradients to prevent them from exploding (a common issue in RNNs)
- update the parameters of our model by doing an optimizer step
- sum the loss value to a running total
Finally, we return the loss that is averaged over all batches.
```
def train(model, iterator, optimizer, criterion, clip):
model.train()
epoch_loss = 0
for i, batch in enumerate(iterator):
src = batch.src
trg = batch.trg
optimizer.zero_grad()
output = model(src, trg)
#trg = [trg len, batch size]
#output = [trg len, batch size, output dim]
output_dim = output.shape[-1]
output = output[1:].view(-1, output_dim)
trg = trg[1:].view(-1)
#trg = [(trg len - 1) * batch size]
#output = [(trg len - 1) * batch size, output dim]
loss = criterion(output, trg)
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), clip)
optimizer.step()
epoch_loss += loss.item()
return epoch_loss / len(iterator)
```
Our evaluation loop is similar to our training loop, however as we aren't updating any parameters we don't need to pass an optimizer or a clip value.
We must remember to set the model to evaluation mode with `model.eval()`. This will turn off dropout (and batch normalization, if used).
We use the `with torch.no_grad()` block to ensure no gradients are calculated within the block. This reduces memory consumption and speeds things up.
The iteration loop is similar (without the parameter updates), however we must ensure we turn teacher forcing off for evaluation. This will cause the model to only use its own predictions to make further predictions within a sentence, which mirrors how it would be used in deployment.
```
def evaluate(model, iterator, criterion):
model.eval()
epoch_loss = 0
with torch.no_grad():
for i, batch in enumerate(iterator):
src = batch.src
trg = batch.trg
output = model(src, trg, 0) #turn off teacher forcing
#trg = [trg len, batch size]
#output = [trg len, batch size, output dim]
output_dim = output.shape[-1]
output = output[1:].view(-1, output_dim)
trg = trg[1:].view(-1)
#trg = [(trg len - 1) * batch size]
#output = [(trg len - 1) * batch size, output dim]
loss = criterion(output, trg)
epoch_loss += loss.item()
return epoch_loss / len(iterator)
```
Next, we'll create a function that we'll use to tell us how long an epoch takes.
```
def epoch_time(start_time, end_time):
elapsed_time = end_time - start_time
elapsed_mins = int(elapsed_time / 60)
elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
return elapsed_mins, elapsed_secs
```
We can finally start training our model!
At each epoch, we'll be checking if our model has achieved the best validation loss so far. If it has, we'll update our best validation loss and save the parameters of our model (called `state_dict` in PyTorch). Then, when we come to test our model, we'll use the saved parameters used to achieve the best validation loss.
We'll be printing out both the loss and the perplexity at each epoch. It is easier to see a change in perplexity than a change in loss as the numbers are much bigger.
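For example (loss values assumed purely for illustration), a drop in loss from 4.6 to 4.4 is easy to miss, while the corresponding perplexities fall from roughly 99 to 81:
```
import math

print(math.exp(4.6), math.exp(4.4)) # ~99.5 and ~81.5
```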
```
N_EPOCHS = 10
CLIP = 1
best_valid_loss = float('inf')
for epoch in range(N_EPOCHS):
start_time = time.time()
train_loss = train(model, train_iterator, optimizer, criterion, CLIP)
valid_loss = evaluate(model, valid_iterator, criterion)
end_time = time.time()
epoch_mins, epoch_secs = epoch_time(start_time, end_time)
if valid_loss < best_valid_loss:
best_valid_loss = valid_loss
torch.save(model.state_dict(), 'tut1-model.pt')
print(f'Epoch: {epoch+1:02} | Time: {epoch_mins}m {epoch_secs}s')
print(f'\tTrain Loss: {train_loss:.3f} | Train PPL: {math.exp(train_loss):7.3f}')
print(f'\t Val. Loss: {valid_loss:.3f} | Val. PPL: {math.exp(valid_loss):7.3f}')
```
We'll load the parameters (`state_dict`) that gave our model the best validation loss and run the model on the test set.
```
model.load_state_dict(torch.load('tut1-model.pt'))
test_loss = evaluate(model, test_iterator, criterion)
print(f'| Test Loss: {test_loss:.3f} | Test PPL: {math.exp(test_loss):7.3f} |')
```
In the following notebook we'll implement a model that achieves improved test perplexity, but only uses a single layer in the encoder and the decoder.
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.
# Training Pipeline - Custom Script
_**Training many models using a custom script**_
----
This notebook demonstrates how to create a pipeline that trains and registers many models using a custom script. We utilize the [ParallelRunStep](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-use-parallel-run-step) to parallelize the process of training the models to make the process more efficient. For this solution accelerator we are using the [OJ Sales Dataset](https://azure.microsoft.com/en-us/services/open-datasets/catalog/sample-oj-sales-simulated/) to train individual models that predict sales for each store and brand of orange juice.
The model we use here is a simple, regression-based forecaster built on scikit-learn and pandas utilities. See the [training script](scripts/train.py) to see how the forecaster is constructed. This forecaster is intended for demonstration purposes, so it does not handle the large variety of special cases that one encounters in time-series modeling. For instance, the model here assumes that all time-series are comprised of regularly sampled observations on a contiguous interval with no missing values. The model does not include any handling of categorical variables. For a more general-use forecaster that handles missing data, advanced featurization, and automatic model selection, see the [AutoML Forecasting task](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-auto-train-forecast). Also, see the notebooks demonstrating [AutoML forecasting in a many models scenario](../Automated_ML).
### Prerequisites
At this point, you should have already:
1. Created your AML Workspace using the [00_Setup_AML_Workspace notebook](../00_Setup_AML_Workspace.ipynb)
2. Run [01_Data_Preparation.ipynb](../01_Data_Preparation.ipynb) to setup your compute and create the dataset
#### Please ensure you have the latest version of the Azure ML SDK and also install Pipeline Steps Package
```
#!pip install --upgrade azureml-sdk
# !pip install azureml-pipeline-steps
```
## 1.0 Connect to workspace and datastore
```
from azureml.core import Workspace
# set up workspace
ws = Workspace.from_config()
# set up datastores
dstore = ws.get_default_datastore()
print('Workspace Name: ' + ws.name,
'Azure Region: ' + ws.location,
'Subscription Id: ' + ws.subscription_id,
'Resource Group: ' + ws.resource_group,
sep = '\n')
```
## 2.0 Create an experiment
```
from azureml.core import Experiment
experiment = Experiment(ws, 'oj_training_pipeline')
print('Experiment name: ' + experiment.name)
```
## 3.0 Get the training Dataset
Next, we get the training Dataset using the [Dataset.get_by_name()](https://docs.microsoft.com/python/api/azureml-core/azureml.core.dataset.dataset#get-by-name-workspace--name--version--latest--) method.
This is the training dataset we created and registered in the [data preparation notebook](../01_Data_Preparation.ipynb). If you chose to use only a subset of the files, the training dataset name will be `oj_data_small_train`. Otherwise, the name you'll have to use is `oj_data_train`.
We recommend starting with the small dataset to make sure everything runs successfully, then scaling up to the full dataset.
```
dataset_name = 'oj_data_small_train'
from azureml.core.dataset import Dataset
dataset = Dataset.get_by_name(ws, name=dataset_name)
dataset_input = dataset.as_named_input(dataset_name)
```
## 4.0 Create the training pipeline
Now that the workspace, experiment, and dataset are set up, we can put together a pipeline for training.
### 4.1 Configure environment for ParallelRunStep
An [environment](https://docs.microsoft.com/en-us/azure/machine-learning/concept-environments) defines a collection of resources that we will need to run our pipelines. We configure a reproducible Python environment for our training script including the [scikit-learn](https://scikit-learn.org/stable/index.html) python library.
```
from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies
train_env = Environment(name="many_models_environment")
train_conda_deps = CondaDependencies.create(pip_packages=['sklearn', 'pandas', 'joblib', 'azureml-defaults', 'azureml-core', 'azureml-dataprep[fuse]'])
train_env.python.conda_dependencies = train_conda_deps
```
### 4.2 Choose a compute target
Currently ParallelRunConfig only supports AMLCompute. This is the compute cluster you created in the [setup notebook](../00_Setup_AML_Workspace.ipynb#3.0-Create-compute-cluster).
```
cpu_cluster_name = "cpucluster"
from azureml.core.compute import AmlCompute
compute = AmlCompute(ws, cpu_cluster_name)
```
### 4.3 Set up ParallelRunConfig
[ParallelRunConfig](https://docs.microsoft.com/en-us/python/api/azureml-pipeline-steps/azureml.pipeline.steps.parallel_run_config.parallelrunconfig?view=azure-ml-py) provides the configuration for the ParallelRunStep we'll be creating next. Here we specify the environment and compute target we created above along with the entry script that will be run for each batch.
There are a number of important parameters to configure, including:
- **mini_batch_size**: The number of files per batch. If you have 500 files and mini_batch_size is 10, 50 batches would be created containing 10 files each. Batches are split across the various nodes.
- **node_count**: The number of compute nodes to be used for running the user script. For the small sample of OJ datasets, we only need a single node, but you will likely need to increase this number for larger datasets composed of more files. If you increase the node count beyond five here, you may need to increase the max_nodes for the compute cluster as well.
- **process_count_per_node**: The number of processes per node. The compute cluster we are using has 8 cores so we set this parameter to 8.
- **run_invocation_timeout**: The run() method invocation timeout in seconds. The timeout should be set higher than the maximum training time of one model (in seconds); by default it's 60. Since the batches that take the longest to train require about 120 seconds, we set it to 180 to ensure the method has adequate time to run.
We also added tags to preserve the information about our training cluster's node count, process count per node, and dataset name. You can find the 'Tags' column in Azure Machine Learning Studio.
```
from azureml.pipeline.steps import ParallelRunConfig
processes_per_node = 8
node_count = 1
timeout = 180
parallel_run_config = ParallelRunConfig(
source_directory='./scripts',
entry_script='train.py',
mini_batch_size="1",
run_invocation_timeout=timeout,
error_threshold=-1,
output_action="append_row",
environment=train_env,
process_count_per_node=processes_per_node,
compute_target=compute,
node_count=node_count)
```
### 4.4 Set up ParallelRunStep
This [ParallelRunStep](https://docs.microsoft.com/en-us/python/api/azureml-pipeline-steps/azureml.pipeline.steps.parallel_run_step.parallelrunstep?view=azure-ml-py) is the main step in our training pipeline.
First, we set up the output directory and define the pipeline's output name. The datastore that stores the pipeline's output data is the workspace's default datastore.
```
from azureml.pipeline.core import PipelineData
output_dir = PipelineData(name="training_output", datastore=dstore)
```
We provide our ParallelRunStep with a name, the ParallelRunConfig created above and several other parameters:
- **inputs**: A list of input datasets. Here we'll use the dataset created in the previous notebook. The number of files in that path determines the number of models that will be trained in the ParallelRunStep.
- **output**: A PipelineData object that corresponds to the output directory. We'll use the output directory we just defined.
- **arguments**: A list of arguments required for the train.py entry script. Here, we provide the schema for the timeseries data - i.e. the names of target, timestamp, and id columns - as well as columns that should be dropped prior to modeling, a string identifying the model type, and the number of observations we want to leave aside for testing.
```
from azureml.pipeline.steps import ParallelRunStep
parallel_run_step = ParallelRunStep(
name="many-models-training",
parallel_run_config=parallel_run_config,
inputs=[dataset_input],
output=output_dir,
allow_reuse=False,
arguments=['--target_column', 'Quantity',
'--timestamp_column', 'WeekStarting',
'--timeseries_id_columns', 'Store', 'Brand',
'--drop_columns', 'Revenue', 'Store', 'Brand',
'--model_type', 'lr',
'--test_size', 20]
)
```
## 5.0 Run the pipeline
Next, we submit our pipeline to run. The run will train models for each dataset using a train set, compute accuracy metrics for the fits using a test set, and finally re-train models with all the data available. With 10 files, this should only take a few minutes but with the full dataset this can take over an hour.
```
from azureml.pipeline.core import Pipeline
pipeline = Pipeline(workspace=ws, steps=[parallel_run_step])
run = experiment.submit(pipeline)
#Wait for the run to complete
run.wait_for_completion(show_output=False, raise_on_error=True)
```
## 6.0 View results of training pipeline
The dataframe we return in the run method of train.py is outputted to *parallel_run_step.txt*. To see the results of our training pipeline, we'll download that file, read in the data to a DataFrame, and then visualize the results, including the in-sample metrics.
The run submitted to the Azure Machine Learning Training Compute Cluster may take a while. The output is not generated until the run is complete. You can monitor the status of the run in Azure Portal https://ml.azure.com
### 6.1 Download parallel_run_step.txt locally
```
import os
def download_results(run, target_dir=None, step_name='many-models-training', output_name='training_output'):
stitch_run = run.find_step_run(step_name)[0]
port_data = stitch_run.get_output_data(output_name)
port_data.download(target_dir, show_progress=True)
return os.path.join(target_dir, 'azureml', stitch_run.id, output_name)
file_path = download_results(run, 'output')
file_path
```
### 6.2 Convert the file to a dataframe
```
import pandas as pd
df = pd.read_csv(file_path + '/parallel_run_step.txt', sep=" ", header=None)
df.columns = ['Store', 'Brand', 'Model', 'File Name', 'ModelName', 'StartTime', 'EndTime', 'Duration',
'MSE', 'RMSE', 'MAE', 'MAPE', 'Index', 'Number of Models', 'Status']
df['StartTime'] = pd.to_datetime(df['StartTime'])
df['EndTime'] = pd.to_datetime(df['EndTime'])
df['Duration'] = df['EndTime'] - df['StartTime']
df.head()
```
### 6.3 Review Results
```
total = df['EndTime'].max() - df['StartTime'].min()
print('Number of Models: ' + str(len(df)))
print('Total Duration: ' + str(total)[6:])
print('Average MAPE: ' + str(round(df['MAPE'].mean(), 5)))
print('Average MSE: ' + str(round(df['MSE'].mean(), 5)))
print('Average RMSE: ' + str(round(df['RMSE'].mean(), 5)))
print('Average MAE: '+ str(round(df['MAE'].mean(), 5)))
print('Maximum Duration: '+ str(df['Duration'].max())[7:])
print('Minimum Duration: ' + str(df['Duration'].min())[7:])
print('Average Duration: ' + str(df['Duration'].mean())[7:])
```
### 6.4 Visualize Performance across models
Here, we produce some charts from the error metrics calculated during the run using a subset put aside for testing.
First, we examine the distribution of mean absolute percentage error (MAPE) over all the models:
```
import seaborn as sns
import matplotlib.pyplot as plt
fig = sns.boxplot(y='MAPE', data=df)
fig.set_title('MAPE across all models')
```
Next, we can break that down by Brand or Store to see variations in error across our models
```
fig = sns.boxplot(x='Brand', y='MAPE', data=df)
fig.set_title('MAPE by Brand')
```
We can also look at how long models for different brands took to train
```
brand = df.groupby('Brand')
brand = brand['Duration'].sum()
brand = pd.DataFrame(brand)
brand['time_in_seconds'] = [time.total_seconds() for time in brand['Duration']]
brand.drop(columns=['Duration']).plot(kind='bar')
plt.xlabel('Brand')
plt.ylabel('Seconds')
plt.title('Total Training Time by Brand')
plt.show()
```
## 7.0 Publish and schedule the pipeline (Optional)
### 7.1 Publish the pipeline
Once you have a pipeline you're happy with, you can publish it so you can call it programmatically later on. See this [tutorial](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-create-your-first-pipeline#publish-a-pipeline) for additional information on publishing and calling pipelines.
```
# published_pipeline = pipeline.publish(name = 'train_many_models',
# description = 'train many models',
# version = '1',
# continue_on_step_failure = False)
```
### 7.2 Schedule the pipeline
You can also [schedule the pipeline](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-schedule-pipelines) to run on a time-based or change-based schedule. This could be used to automatically retrain models every month or based on another trigger such as data drift.
```
# from azureml.pipeline.core import Schedule, ScheduleRecurrence
# training_pipeline_id = published_pipeline.id
# recurrence = ScheduleRecurrence(frequency="Month", interval=1, start_time="2020-01-01T09:00:00")
# recurring_schedule = Schedule.create(ws, name="training_pipeline_recurring_schedule",
# description="Schedule Training Pipeline to run on the first day of every month",
# pipeline_id=training_pipeline_id,
# experiment_name=experiment.name,
# recurrence=recurrence)
```
## Next Steps
Now that you've trained and scored the models, move on to [03_CustomScript_Forecasting_Pipeline.ipynb](03_CustomScript_Forecasting_Pipeline.ipynb) to make forecasts with your models.
# Repertoire classification subsampling
When training a classifier to assign repertoires to the subject from which they were obtained, we need a set of subsampled sequences. The sequences have been condensed to just the V- and J-gene assignments and the CDR3 length (VJ-CDR3len). Subsample sizes range from 10 to 10,000 sequences per biological replicate.
The [`abutils`](https://www.github.com/briney/abutils) Python package is required for this notebook, and can be installed by running `pip install abutils`.
*NOTE: this notebook requires the use of the Unix command line tool `shuf`. Thus, it requires a Unix-based operating system to run correctly (MacOS and most flavors of Linux should be fine). Running this notebook on Windows 10 may be possible using the [Windows Subsystem for Linux](https://docs.microsoft.com/en-us/windows/wsl/about) but we have not tested this.*
```
from __future__ import print_function, division
from collections import Counter
import os
import subprocess as sp
import sys
import tempfile
from abutils.utils.pipeline import list_files, make_dir
```
## Subjects, subsample sizes, and directories
The `input_dir` should contain deduplicated clonotype sequences. The datafiles are too large to be included in the Github repository, but may be downloaded [**here**](http://burtonlab.s3.amazonaws.com/GRP_github_data/techrep-merged_vj-cdr3len_no-header.tar.gz). If downloading the data (which will be downloaded as a compressed archive), decompress the archive in the `data` directory (in the same parent directory as this notebook) and you should be ready to go. If you want to store the downloaded data in some other location, adjust the `input_dir` path below as needed.
By default, subsample sizes increase by 10 from 10 to 100, by 100 from 100 to 1,000, and by 1,000 from 1,000 to 10,000.
```
with open('./data/subjects.txt') as f:
subjects = sorted(f.read().split())
subsample_sizes = list(range(10, 100, 10)) + list(range(100, 1000, 100)) + list(range(1000, 11000, 1000))
input_dir = './data/techrep-merged_vj-cdr3len_no-header/'
subsample_dir = './data/repertoire_classification/user-created_subsamples_vj-cdr3len'
make_dir(subsample_dir)
```
## Subsampling
```
def subsample(infile, outfile, n_seqs, iterations):
    # start with an empty output file
    open(outfile, 'w').close()
    with open(outfile, 'a') as f:
        for iteration in range(iterations):
            # draw an independent random subsample of n_seqs lines for each iteration
            shuf_cmd = 'shuf -n {} {}'.format(n_seqs, infile)
            p = sp.Popen(shuf_cmd, stdout=sp.PIPE, stderr=sp.PIPE,
                         shell=True, universal_newlines=True)
            stdout, stderr = p.communicate()
            # collapse whitespace within each VJ-CDR3len string and count occurrences
            seqs = ['_'.join(s.strip().split()) for s in stdout.strip().split('\n') if s.strip()]
            counts = Counter(seqs)
            count_strings = ['{}:{}'.format(k, v) for k, v in counts.items()]
            f.write(','.join(count_strings) + '\n')
for subject in subjects:
print(subject)
files = list_files(os.path.join(input_dir, subject))
for file_ in files:
for subsample_size in subsample_sizes:
num = os.path.basename(file_).split('_')[0]
ofile = os.path.join(subsample_dir, '{}_{}-{}'.format(subject, subsample_size, num))
subsample(file_, ofile, subsample_size, 50)
```
# Scenario Analysis: Pop Up Shop

Kürschner (talk) 17:51, 1 December 2020 (UTC), CC0, via Wikimedia Commons
```
# install Pyomo and solvers for Google Colab
import sys
if "google.colab" in sys.modules:
!wget -N -q https://raw.githubusercontent.com/jckantor/MO-book/main/tools/install_on_colab.py
%run install_on_colab.py
```
## The problem
There is an opportunity to operate a pop-up shop to sell a unique commemorative item for events held at a famous location. The items cost 12 € each and will sell for 40 €. Unsold items can be returned to the supplier at a value of only 2 € due to their commemorative nature.
| Parameter | Symbol | Value |
| :---: | :---: | :---: |
| sales price | $r$ | 40 € |
| unit cost | $c$ | 12 € |
| salvage value | $w$ | 2 € |
Profit will increase with sales. Demand for these items, however, will be high only if the weather is good. Historical data suggests the following scenarios.
| Scenario ($s$) | Demand ($d_s$) | Probability ($p_s$) |
| :---: | :-----: | :----------: |
| Sunny Skies | 650 | 0.10 |
| Good Weather | 400 | 0.60 |
| Poor Weather | 200 | 0.30 |
The problem is to determine how many items to order for the pop-up shop.
The dilemma is that the weather won't be known until after the order is placed. Ordering enough items to meet demand for a good weather day results in a financial penalty on returned goods if the weather is poor. But ordering just enough items to satisfy demand on a poor weather day leaves "money on the table" if the weather is good.
How many items should be ordered for sale?
## Expected value for the mean scenario (EVM)
A naive solution to this problem is to place an order equal to the expected demand. The expected demand is given by
$$
\begin{align*}
\mathbb E[D] & = \sum_{s\in S} p_s d_s
\end{align*}
$$
Choosing an order size $x = \mathbb E[D]$ results in an expected profit we call the **expected value of the mean scenario (EVM)**.
Variable $y_s$ is the actual number of items sold if scenario $s$ should occur. The number sold is the lesser of the demand $d_s$ and the order size $x$.
$$
\begin{align*}
y_s & = \min(d_s, x) & \forall s \in S
\end{align*}
$$
Any unsold inventory $x - y_s$ remaining after the event will be sold at the salvage price $w$. Taking into account the revenue from sales $r y_s$, the salvage value of the unsold inventory $w(x - y_s)$, and the cost of the order $c x$, the profit $f_s$ for scenario $s$ is given by
$$
\begin{align*}
f_s & = r y_s + w (x - y_s) - c x & \forall s \in S
\end{align*}
$$
The average or expected profit is given by
$$
\begin{align*}
\text{EVM} = \mathbb E[f] & = \sum_{s\in S} p_s f_s
\end{align*}
$$
These calculations can be executed using operations on the pandas dataframe. Let's begin by calculating the expected demand.
Below we create a pandas DataFrame object to store the scenario data.
```
import numpy as np
import pandas as pd
# price information
r = 40
c = 12
w = 2
# scenario information
scenarios = {
"sunny skies" : {"probability": 0.10, "demand": 650},
"good weather": {"probability": 0.60, "demand": 400},
"poor weather": {"probability": 0.30, "demand": 200},
}
df = pd.DataFrame.from_dict(scenarios).T
display(df)
expected_demand = sum(df["probability"] * df["demand"])
print(f"Expected demand = {expected_demand}")
```
Subsequent calculations can be done directly with the pandas dataframe holding the scenario data.
```
df["order"] = expected_demand
df["sold"] = df[["demand", "order"]].min(axis=1)
df["salvage"] = df["order"] - df["sold"]
df["profit"] = r * df["sold"] + w * df["salvage"] - c * df["order"]
EVM = sum(df["probability"] * df["profit"])
print(f"Mean demand = {expected_demand}")
print(f"Expected value of the mean demand (EVM) = {EVM}")
display(df)
```
## Expected value of the stochastic solution (EVSS)
The optimization problem is to find the order size $x$ that maximizes expected profit subject to operational constraints on the decision variables. The variables $x$ and $y_s$ are non-negative integers, while $f_s$ is a real number that can take either positive or negative values. The number of goods sold in scenario $s$ has to be less than the order size $x$ and customer demand $d_s$.
The problem to be solved is
$$
\begin{align*}
\text{EV} = & \max_{x, y_s} \mathbb E[F] = \sum_{s\in S} p_s f_s \\
\text{subject to:} \\
f_s & = r y_s + w(x - y_s) - c x & \forall s \in S\\
y_s & \leq x & \forall s \in S \\
y_s & \leq d_s & \forall s \in S
\end{align*}
$$
where $S$ is the set of all scenarios under consideration.
```
import pyomo.environ as pyo
import pandas as pd
# price information
r = 40
c = 12
w = 2
# scenario information
scenarios = {
"sunny skies" : {"demand": 650, "probability": 0.1},
"good weather": {"demand": 400, "probability": 0.6},
"poor weather": {"demand": 200, "probability": 0.3},
}
# create model instance
m = pyo.ConcreteModel('Pop-up Shop')
# set of scenarios
m.S = pyo.Set(initialize=scenarios.keys())
# decision variables
m.x = pyo.Var(domain=pyo.NonNegativeIntegers)
m.y = pyo.Var(m.S, domain=pyo.NonNegativeIntegers)
m.f = pyo.Var(m.S, domain=pyo.Reals)
# objective
@m.Objective(sense=pyo.maximize)
def EV(m):
return sum([scenarios[s]["probability"]*m.f[s] for s in m.S])
# constraints
@m.Constraint(m.S)
def profit(m, s):
return m.f[s] == r*m.y[s] + w*(m.x - m.y[s]) - c*m.x
@m.Constraint(m.S)
def sales_less_than_order(m, s):
return m.y[s] <= m.x
@m.Constraint(m.S)
def sales_less_than_demand(m, s):
return m.y[s] <= scenarios[s]["demand"]
# solve
solver = pyo.SolverFactory('glpk')
results = solver.solve(m)
# display solution using Pandas
print("Solver Termination Condition:", results.solver.termination_condition)
print("Expected Profit:", m.EV())
print()
for s in m.S:
scenarios[s]["order"] = m.x()
scenarios[s]["sold"] = m.y[s]()
scenarios[s]["salvage"] = m.x() - m.y[s]()
scenarios[s]["profit"] = m.f[s]()
df = pd.DataFrame.from_dict(scenarios).T
display(df)
```
Optimizing over all scenarios provides an expected profit of 8,920 €, an increase of 581 € over the base case of simply ordering the expected number of items sold. The new solution places a larger order. In poor weather conditions there will be more returns and lower profit, but this is more than compensated for by the increased profits in good weather conditions.
The additional value that results from solving this planning problem is called the **Value of the Stochastic Solution (VSS)**. The value of the stochastic solution is the additional profit compared to ordering to meet expected demand. In this case,
$$\text{VSS} = \text{EV} - \text{EVM} = 8,920 - 8,339 = 581$$
## Expected value with perfect information (EVPI)
Maximizing expected profit requires the size of the order be decided before knowing what scenario will unfold. The decision for $x$ has to be made "here and now" with probabilistic information about the future, but without specific information on which future will actually transpire.
Nevertheless, we can perform the hypothetical calculation of what profit would be realized if we could know the future. We are still subject to the variability of the weather; what is different is that we know what the weather will be at the time the order is placed.
The resulting value for the expected profit is called the **Expected Value of Perfect Information (EVPI)**. The difference EVPI - EV is the extra profit due to having perfect knowledge of the future.
To compute the expected profit with perfect information, we let the order variable $x$ be indexed by the subsequent scenario that will unfold. Given decision variable $x_s$, the model for EVPI becomes
$$
\begin{align*}
\text{EVPI} = & \max_{x_s, y_s} \mathbb E[f] = \sum_{s\in S} p_s f_s \\
\text{subject to:} \\
f_s & = r y_s + w(x_s - y_s) - c x_s & \forall s \in S\\
y_s & \leq x_s & \forall s \in S \\
y_s & \leq d_s & \forall s \in S
\end{align*}
$$
The following implementation is a variation of the prior cell.
```
import pyomo.environ as pyo
import pandas as pd
# price information
r = 40
c = 12
w = 2
# scenario information
scenarios = {
"sunny skies" : {"demand": 650, "probability": 0.1},
"good weather": {"demand": 400, "probability": 0.6},
"poor weather": {"demand": 200, "probability": 0.3},
}
# create model instance
m = pyo.ConcreteModel('Pop-up Shop')
# set of scenarios
m.S = pyo.Set(initialize=scenarios.keys())
# decision variables
m.x = pyo.Var(m.S, domain=pyo.NonNegativeIntegers)
m.y = pyo.Var(m.S, domain=pyo.NonNegativeIntegers)
m.f = pyo.Var(m.S, domain=pyo.Reals)
# objective
@m.Objective(sense=pyo.maximize)
def EV(m):
return sum([scenarios[s]["probability"]*m.f[s] for s in m.S])
# constraints
@m.Constraint(m.S)
def profit(m, s):
return m.f[s] == r*m.y[s] + w*(m.x[s] - m.y[s]) - c*m.x[s]
@m.Constraint(m.S)
def sales_less_than_order(m, s):
return m.y[s] <= m.x[s]
@m.Constraint(m.S)
def sales_less_than_demand(m, s):
return m.y[s] <= scenarios[s]["demand"]
# solve
solver = pyo.SolverFactory('glpk')
results = solver.solve(m)
# display solution using Pandas
print("Solver Termination Condition:", results.solver.termination_condition)
print("Expected Profit:", m.EV())
print()
for s in m.S:
scenarios[s]["order"] = m.x[s]()
scenarios[s]["sold"] = m.y[s]()
scenarios[s]["salvage"] = m.x[s]() - m.y[s]()
scenarios[s]["profit"] = m.f[s]()
df = pd.DataFrame.from_dict(scenarios).T
display(df)
```
## Summary
To summarize, we have computed three different solutions to the problem of choosing the order size:
* The expected value of the mean solution (EVM) is the expected profit resulting from ordering the number of items expected to sold under all scenarios.
* The expected value of the stochastic solution (EVSS) is the expected profit found by solving a two-stage optimization problem where the order size was the "here and now" decision made without specific knowledge of which future scenario would transpire.
* The expected value of perfect information (EVPI) is the result of a hypothetical case where knowledge of the future scenario was somehow available when the order had to be placed.
For this example we found
| Solution | Value (€) |
| :------ | ----: |
| Expected Value of the Mean Solution (EVM) | 8,339.0 |
| Expected Value of the Stochastic Solution (EVSS) | 8,920.0 |
| Expected Value of Perfect Information (EVPI) | 10,220.0 |
These results verify our expectation that
$$
\begin{align*}
EVM \leq EVSS \leq EVPI
\end{align*}
$$
The value of the stochastic solution
$$
\begin{align*}
VSS = EVSS - EVM = 581
\end{align*}
$$
The value of perfect information
$$
\begin{align*}
VPI = EVPI - EVSS = 1,300
\end{align*}
$$
As one might expect, there is a cost that results from lack of knowledge about an uncertain future.
```
!wget --no-check-certificate \
https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip \
-O cats_and_dogs_filtered.zip
! unzip cats_and_dogs_filtered.zip
import keras,os
from keras.models import Sequential
from keras.layers import Dense, Conv2D, MaxPool2D , Flatten
from keras.preprocessing.image import ImageDataGenerator
import numpy as np
trdata = ImageDataGenerator()
traindata = trdata.flow_from_directory(directory="cats_and_dogs_filtered/train",target_size=(224,224))
tsdata = ImageDataGenerator()
testdata = tsdata.flow_from_directory(directory="cats_and_dogs_filtered/validation", target_size=(224,224))
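# Build the VGG16 architecture from scratch: five blocks of 3x3 convolutions
# (2 x 64, 2 x 128, 3 x 256, 3 x 512, 3 x 512 filters), each block followed by
# 2x2 max pooling, then a flatten and three dense layers ending in a 2-class softmax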
model = Sequential()
model.add(Conv2D(input_shape=(224,224,3),filters=64,kernel_size=(3,3),padding="same", activation="relu"))
model.add(Conv2D(filters=64,kernel_size=(3,3),padding="same", activation="relu"))
model.add(MaxPool2D(pool_size=(2,2),strides=(2,2)))
model.add(Conv2D(filters=128, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=128, kernel_size=(3,3), padding="same", activation="relu"))
model.add(MaxPool2D(pool_size=(2,2),strides=(2,2)))
model.add(Conv2D(filters=256, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=256, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=256, kernel_size=(3,3), padding="same", activation="relu"))
model.add(MaxPool2D(pool_size=(2,2),strides=(2,2)))
model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu"))
model.add(MaxPool2D(pool_size=(2,2),strides=(2,2)))
model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu"))
model.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu"))
model.add(MaxPool2D(pool_size=(2,2),strides=(2,2)))
model.add(Flatten())
model.add(Dense(units=4096,activation="relu"))
model.add(Dense(units=4096,activation="relu"))
model.add(Dense(units=2, activation="softmax"))
from keras.optimizers import Adam
opt = Adam(lr=0.001)
model.compile(optimizer=opt, loss=keras.losses.categorical_crossentropy, metrics=['accuracy'])
model.summary()
from keras.callbacks import ModelCheckpoint, EarlyStopping
checkpoint = ModelCheckpoint("vgg16_1.h5", monitor='val_acc', verbose=1, save_best_only=True, save_weights_only=False, mode='auto', period=1)
early = EarlyStopping(monitor='val_acc', min_delta=0, patience=20, verbose=1, mode='auto')
hist = model.fit_generator(steps_per_epoch=100,generator=traindata, validation_data= testdata, validation_steps=10,epochs=100,callbacks=[checkpoint,early])
import matplotlib.pyplot as plt
plt.plot(hist.history["acc"])
plt.plot(hist.history['val_acc'])
plt.plot(hist.history['loss'])
plt.plot(hist.history['val_loss'])
plt.title("model accuracy")
plt.ylabel("Accuracy")
plt.xlabel("Epoch")
plt.legend(["Accuracy","Validation Accuracy","loss","Validation Loss"])
plt.show()
from keras.preprocessing import image
img = image.load_img("Pomeranian_01.jpeg",target_size=(224,224))
img = np.asarray(img)
plt.imshow(img)
img = np.expand_dims(img, axis=0)
from keras.models import load_model
saved_model = load_model("vgg16_1.h5")
output = saved_model.predict(img)
if output[0][0] > output[0][1]:
print("cat")
else:
print('dog')
```
# Classification with Neural Network for Yoga poses detection
## Import Dependencies
```
import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.preprocessing.image import load_img, img_to_array
from tensorflow.python.keras.preprocessing.image import ImageDataGenerator
from sklearn.metrics import classification_report, log_loss, accuracy_score
from sklearn.model_selection import train_test_split
```
## Getting the data (images) and labels
```
# Data path
train_dir = 'pose_recognition_data/dataset'
# Getting the folders name to be able to labelize the data
Name=[]
for file in os.listdir(train_dir):
Name+=[file]
print(Name)
print(len(Name))
N=[]
for i in range(len(Name)):
N+=[i]
normal_mapping=dict(zip(Name,N))
reverse_mapping=dict(zip(N,Name))
def mapper(value):
return reverse_mapping[value]
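# Build the training and test sets: the first 60 images from each pose folder go into
# the training dataset and the remainder into the held-out test set; pixel values are
# rescaled to [0, 1] and each image is paired with the integer label of its pose class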
dataset=[]
testset=[]
count=0
for file in os.listdir(train_dir):
t=0
path=os.path.join(train_dir,file)
for im in os.listdir(path):
image=load_img(os.path.join(path,im), grayscale=False, color_mode='rgb', target_size=(40,40))
image=img_to_array(image)
image=image/255.0
if t<60:
dataset+=[[image,count]]
else:
testset+=[[image,count]]
t+=1
count=count+1
data,labels0=zip(*dataset)
test,testlabels0=zip(*testset)
labels1=to_categorical(labels0)
labels=np.array(labels1)
# Convert the image lists into numpy arrays
data=np.array(data)
test=np.array(test)
trainx,testx,trainy,testy=train_test_split(data,labels,test_size=0.2,random_state=44)
print(trainx.shape)
print(testx.shape)
print(trainy.shape)
print(testy.shape)
# Data augmentation
datagen = ImageDataGenerator(horizontal_flip=True,vertical_flip=True,rotation_range=20,zoom_range=0.2,
width_shift_range=0.2,height_shift_range=0.2,shear_range=0.1,fill_mode="nearest")
# Loading the pretrained model , here DenseNet201
pretrained_model3 = tf.keras.applications.DenseNet201(input_shape=(40,40,3),include_top=False,weights='imagenet',pooling='avg')
pretrained_model3.trainable = False
inputs3 = pretrained_model3.input
x3 = tf.keras.layers.Dense(128, activation='relu')(pretrained_model3.output)
outputs3 = tf.keras.layers.Dense(107, activation='softmax')(x3)
model = tf.keras.Model(inputs=inputs3, outputs=outputs3)
model.compile(optimizer='adam',loss='categorical_crossentropy',metrics=['accuracy'])
his=model.fit(datagen.flow(trainx,trainy,batch_size=32),validation_data=(testx,testy),epochs=50)
y_pred=model.predict(testx)
pred=np.argmax(y_pred,axis=1)
ground = np.argmax(testy,axis=1)
print(classification_report(ground,pred))
#Checking accuracy of our model
get_acc = his.history['accuracy']
value_acc = his.history['val_accuracy']
get_loss = his.history['loss']
validation_loss = his.history['val_loss']
epochs = range(len(get_acc))
plt.plot(epochs, get_acc, 'r', label='Accuracy of Training data')
plt.plot(epochs, value_acc, 'b', label='Accuracy of Validation data')
plt.title('Training vs validation accuracy')
plt.legend(loc=0)
plt.figure()
plt.show()
# Checking the loss of data
epochs = range(len(get_loss))
plt.plot(epochs, get_loss, 'r', label='Loss of Training data')
plt.plot(epochs, validation_loss, 'b', label='Loss of Validation data')
plt.title('Training vs validation loss')
plt.legend(loc=0)
plt.figure()
plt.show()
load_img("pose_recognition_data/dataset/adho mukha svanasana/95. downward-facing-dog-pose.png",target_size=(40,40))
image = load_img("pose_recognition_data/dataset/adho mukha svanasana/95. downward-facing-dog-pose.png",target_size=(40,40))
image=img_to_array(image)
image=image/255.0
prediction_image=np.array(image)
prediction_image= np.expand_dims(image, axis=0)
prediction=model.predict(prediction_image)
value=np.argmax(prediction)
move_name=mapper(value)
print("Prediction is {}.".format(move_name))
print(test.shape)
pred2=model.predict(test)
print(pred2.shape)
PRED=[]
for item in pred2:
value2=np.argmax(item)
PRED+=[value2]
ANS=testlabels0
accuracy=accuracy_score(ANS,PRED)
print(accuracy)
```
## _*H2 ground state energy computation using Iterative QPE*_
This notebook demonstrates using Qiskit Chemistry to plot graphs of the ground state energy of the Hydrogen (H2) molecule over a range of inter-atomic distances using the IQPE (Iterative Quantum Phase Estimation) algorithm. The results are compared to the same energies as computed by the ExactEigensolver.
This notebook populates a dictionary, that is a programmatic representation of an input file, in order to drive the qiskit_chemistry stack. Such a dictionary can be manipulated programmatically and this is indeed the case here where we alter the molecule supplied to the driver in each loop.
This notebook has been written to use the PYSCF chemistry driver. See the PYSCF chemistry driver readme if you need to install the external PySCF library that this driver requires.
```
import numpy as np
import pylab
from qiskit import LegacySimulators
from qiskit_chemistry import QiskitChemistry
import time
# Input dictionary to configure Qiskit Chemistry for the chemistry problem.
qiskit_chemistry_dict = {
'driver': {'name': 'PYSCF'},
'PYSCF': {'atom': '', 'basis': 'sto3g'},
'operator': {'name': 'hamiltonian', 'transformation': 'full', 'qubit_mapping': 'parity'},
'algorithm': {'name': ''},
'initial_state': {'name': 'HartreeFock'},
}
molecule = 'H .0 .0 -{0}; H .0 .0 {0}'
algorithms = [
{
'name': 'IQPE',
'num_iterations': 16,
'num_time_slices': 3000,
'expansion_mode': 'trotter',
'expansion_order': 1,
},
{
'name': 'ExactEigensolver'
}
]
backends = [
LegacySimulators.get_backend('qasm_simulator'),
None
]
start = 0.5 # Start distance
by = 0.5 # How much to increase distance by
steps = 20 # Number of steps to increase by
energies = np.empty([len(algorithms), steps+1])
hf_energies = np.empty(steps+1)
distances = np.empty(steps+1)
import concurrent.futures
import multiprocessing as mp
import copy
def subrountine(i, qiskit_chemistry_dict, d, backend, algorithm):
solver = QiskitChemistry()
qiskit_chemistry_dict['PYSCF']['atom'] = molecule.format(d/2)
qiskit_chemistry_dict['algorithm'] = algorithm
result = solver.run(qiskit_chemistry_dict, backend=backend)
return i, d, result['energy'], result['hf_energy']
start_time = time.time()
max_workers = max(4, mp.cpu_count())
with concurrent.futures.ProcessPoolExecutor(max_workers=max_workers) as executor:
futures = []
for j in range(len(algorithms)):
algorithm = algorithms[j]
backend = backends[j]
for i in range(steps+1):
d = start + i*by/steps
future = executor.submit(
subrountine,
i,
copy.deepcopy(qiskit_chemistry_dict),
d,
backend,
algorithm
)
futures.append(future)
for future in concurrent.futures.as_completed(futures):
i, d, energy, hf_energy = future.result()
energies[j][i] = energy
hf_energies[i] = hf_energy
distances[i] = d
print(' --- complete')
print('Distances: ', distances)
print('Energies:', energies)
print('Hartree-Fock energies:', hf_energies)
print("--- %s seconds ---" % (time.time() - start_time))
pylab.plot(distances, hf_energies, label='Hartree-Fock')
for j in range(len(algorithms)):
pylab.plot(distances, energies[j], label=algorithms[j]['name'])
pylab.xlabel('Interatomic distance')
pylab.ylabel('Energy')
pylab.title('H2 Ground State Energy')
pylab.legend(loc='upper right')
pylab.show()
pylab.plot(distances, np.subtract(hf_energies, energies[1]), label='Hartree-Fock')
pylab.plot(distances, np.subtract(energies[0], energies[1]), label='IQPE')
pylab.xlabel('Interatomic distance')
pylab.ylabel('Energy')
pylab.title('Energy difference from ExactEigensolver')
pylab.legend(loc='upper right')
pylab.show()
```
# ML Pipeline Preparation
Follow the instructions below to help you create your ML pipeline.
### 1. Import libraries and load data from database.
- Import Python libraries
- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)
- Define feature and target variables X and Y
```
# import necessary libraries
import pandas as pd
import numpy as np
import os
import pickle
import nltk
import re
from sqlalchemy import create_engine
import sqlite3
from nltk.tokenize import word_tokenize, RegexpTokenizer
from nltk.stem import WordNetLemmatizer
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.multioutput import MultiOutputClassifier
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import classification_report
from sklearn.naive_bayes import MultinomialNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier,AdaBoostClassifier
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import make_scorer, accuracy_score, f1_score, fbeta_score, classification_report
from sklearn.metrics import precision_recall_fscore_support
from scipy.stats import hmean
from scipy.stats.mstats import gmean
from nltk.corpus import stopwords
nltk.download(['punkt', 'wordnet', 'averaged_perceptron_tagger', 'stopwords'])
import matplotlib.pyplot as plt
%matplotlib inline
# load data from database
engine = create_engine('sqlite:///InsertDatabaseName.db')
df = pd.read_sql("SELECT * FROM InsertTableName", engine)
df.head()
# View types of unque 'genre' attribute
genre_types = df.genre.value_counts()
genre_types
# check for attributes with missing values/elements
df.isnull().mean().head()
# drop rows with missing values (reassign so the result is kept)
df = df.dropna()
df.head()
# load data from database with 'X' as attributes for message column
X = df["message"]
# load data from database with 'Y' attributes for the last 36 columns
Y = df.drop(['id', 'message', 'original', 'genre'], axis = 1)
```
### 2. Write a tokenization function to process your text data
```
# Proprocess text by removing unwanted properties
def tokenize(text):
'''
input:
text: input text data containing attributes
output:
clean_tokens: cleaned text without unwanted texts
'''
url_regex = 'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+'
detected_urls = re.findall(url_regex, text)
for url in detected_urls:
text = text.replace(url, "urlplaceholder")
# take out all punctuation while tokenizing
tokenizer = RegexpTokenizer(r'\w+')
tokens = tokenizer.tokenize(text)
# lemmatize as shown in the lesson
lemmatizer = WordNetLemmatizer()
clean_tokens = []
for tok in tokens:
clean_tok = lemmatizer.lemmatize(tok).lower().strip()
clean_tokens.append(clean_tok)
return clean_tokens
```
### 3. Build a machine learning pipeline
This machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
```
pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(RandomForestClassifier())),
])
# Visualize model parameters
pipeline.get_params()
```
### 4. Train pipeline
- Split data into train and test sets
- Train pipeline
```
# use sklearn split function to split dataset into train and 20% test sets
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size = 0.2)
# Train pipeline using RandomForest Classifier algorithm
pipeline.fit(X_train, y_train)
```
### 5. Test your model
Report the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's classification_report on each.
```
# Output result metrics of trained RandomForest Classifier algorithm
def evaluate_model(model, X_test, y_test):
'''
Input:
model: RandomForest Classifier trained model
X_test: Test training features
Y_test: Test training response variable
Output:
None:
Display model precision, recall, f1-score, support
'''
y_pred = model.predict(X_test)
for item, col in enumerate(y_test):
print(col)
print(classification_report(y_test[col], y_pred[:, item]))
# classification_report to display model precision, recall, f1-score, support
evaluate_model(pipeline, X_test, y_test)
```
### 6. Improve your model
Use grid search to find better parameters.
```
parameters = {'clf__estimator__max_depth': [10, 50, None],
'clf__estimator__min_samples_leaf':[2, 5, 10]}
cv = GridSearchCV(pipeline, parameters)
```
### 7. Test your model
Show the accuracy, precision, and recall of the tuned model.
Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
```
# Train pipeline using the improved model
cv.fit(X_train, y_train)
# classification_report to display model precision, recall, f1-score, support
evaluate_model(cv, X_test, y_test)
cv.best_estimator_
```
### 8. Try improving your model further. Here are a few ideas:
* try other machine learning algorithms
* add other features besides the TF-IDF
```
# Improve model using DecisionTree Classifier
new_pipeline = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(DecisionTreeClassifier()))
])
# Train improved model
new_pipeline.fit(X_train, y_train)
# Run result metric score display function
evaluate_model(new_pipeline, X_test, y_test)
```
### 9. Export your model as a pickle file
```
# save a copy of the trained model to disk
trained_model_file = 'trained_model.sav'
pickle.dump(cv, open(trained_model_file, 'wb'))
```
### 10. Use this notebook to complete `train.py`
Use the template file attached in the Resources folder to write a script that runs the steps above to create a database and export a model based on a new dataset specified by the user.
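A minimal sketch of how `train.py` could be organised is shown below. The function names, the table name, and the command-line handling are assumptions for illustration; the actual template in the Resources folder may structure things differently.
```
# minimal illustrative sketch of train.py (names and structure are assumptions)
import sys
import pickle
import pandas as pd
from sqlalchemy import create_engine
from sklearn.model_selection import train_test_split

def load_data(database_filepath):
    # assumes the ETL step stored the cleaned data in this table
    engine = create_engine('sqlite:///' + database_filepath)
    df = pd.read_sql_table('InsertTableName', engine)
    X = df['message']
    Y = df.drop(['id', 'message', 'original', 'genre'], axis=1)
    return X, Y

def main():
    database_filepath, model_filepath = sys.argv[1:3]
    X, Y = load_data(database_filepath)
    X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.2)
    # `pipeline`, `tokenize` and `evaluate_model` are the functions defined in this
    # notebook, copied into the script (or imported from a shared module)
    pipeline.fit(X_train, y_train)
    evaluate_model(pipeline, X_test, y_test)
    pickle.dump(pipeline, open(model_filepath, 'wb'))

if __name__ == '__main__':
    main()
```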
# Random Signals
*This jupyter notebook is part of a [collection of notebooks](../index.ipynb) on various topics of Digital Signal Processing. Please direct questions and suggestions to [[email protected]](mailto:[email protected]).*
## Auto-Power Spectral Density
The (auto-) [power spectral density](https://en.wikipedia.org/wiki/Spectral_density#Power_spectral_density) (PSD) is defined as the Fourier transformation of the [auto-correlation function](correlation_functions.ipynb) (ACF).
### Definition
For a continuous-amplitude, real-valued, wide-sense stationary (WSS) random signal $x[k]$ the PSD is given as
\begin{equation}
\Phi_{xx}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = \mathcal{F}_* \{ \varphi_{xx}[\kappa] \},
\end{equation}
where $\mathcal{F}_* \{ \cdot \}$ denotes the [discrete-time Fourier transformation](https://en.wikipedia.org/wiki/Discrete-time_Fourier_transform) (DTFT) and $\varphi_{xx}[\kappa]$ the ACF of $x[k]$. Note that the DTFT is performed with respect to $\kappa$. The ACF of a random signal of finite length $N$ can be expressed by way of a linear convolution
\begin{equation}
\varphi_{xx}[\kappa] = \frac{1}{N} \cdot x_N[k] * x_N[-k].
\end{equation}
Taking the DTFT of the left- and right-hand side results in
\begin{equation}
\Phi_{xx}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = \frac{1}{N} \, X_N(\mathrm{e}^{\,\mathrm{j}\,\Omega})\, X_N(\mathrm{e}^{-\,\mathrm{j}\,\Omega}) =
\frac{1}{N} \, | X_N(\mathrm{e}^{\,\mathrm{j}\,\Omega}) |^2.
\end{equation}
The last equality results from the definition of the magnitude and the symmetry of the DTFT for real-valued signals. The spectrum $X_N(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ quantifies the amplitude density of the signal $x_N[k]$. It can be concluded from above result that the PSD quantifies the squared amplitude or power density of a random signal. This explains the term power spectral density.
### Properties
The properties of the PSD can be deduced from the properties of the ACF and the DTFT as:
1. From the link between the PSD $\Phi_{xx}(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ and the spectrum $X_N(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ derived above it can be concluded that the PSD is real valued
$$\Phi_{xx}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) \in \mathbb{R}$$
2. From the even symmetry $\varphi_{xx}[\kappa] = \varphi_{xx}[-\kappa]$ of the ACF it follows that
$$ \Phi_{xx}(\mathrm{e}^{\,\mathrm{j} \, \Omega}) = \Phi_{xx}(\mathrm{e}^{\,-\mathrm{j}\, \Omega}) $$
3. The PSD of an uncorrelated random signal is given as
$$ \Phi_{xx}(\mathrm{e}^{\,\mathrm{j} \, \Omega}) = \sigma_x^2 + \mu_x^2 \cdot {\bot \!\! \bot \!\! \bot}\left( \frac{\Omega}{2 \pi} \right) ,$$
which can be deduced from the [ACF of an uncorrelated signal](correlation_functions.ipynb#Properties).
4. The quadratic mean of a random signal is given as
$$ E\{ x[k]^2 \} = \varphi_{xx}[\kappa=0] = \frac{1}{2\pi} \int\limits_{-\pi}^{\pi} \Phi_{xx}(\mathrm{e}^{\,\mathrm{j}\, \Omega}) \,\mathrm{d} \Omega $$
The last relation can be found by expressing the ACF via the inverse DTFT of $\Phi_{xx}$ and considering that $\mathrm{e}^{\mathrm{j} \Omega \kappa} = 1$ when evaluating the integral for $\kappa=0$.
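The last property can be checked numerically. The short sketch below (an illustration, not part of the original lecture code) estimates the quadratic mean of a white Gaussian noise signal once from the ACF at $\kappa = 0$ and once as the average of the PSD samples, which approximates the integral over the PSD; both values should coincide up to numerical precision.
```
import numpy as np

# numerical check of property 4 with white Gaussian noise (illustrative sketch)
np.random.seed(42)
N = 10000
x = np.random.normal(size=N)

# biased ACF estimate and its DFT (with the phase shift compensating the lag offset)
acf = 1/N * np.correlate(x, x, mode='full')
psd = np.fft.fft(acf)
psd = psd * np.exp(1j*np.arange(2*N-1)*2*np.pi*(N-1)/(2*N-1))

quadratic_mean_time = acf[N-1]               # phi_xx[kappa = 0]
quadratic_mean_freq = np.mean(np.real(psd))  # mean over frequencies ~ integral / (2 pi)

print(quadratic_mean_time, quadratic_mean_freq)
```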
### Example - Power Spectral Density of a Speech Signal
In this example the PSD $\Phi_{xx}(\mathrm{e}^{\,\mathrm{j} \,\Omega})$ of a speech signal of length $N$ is estimated by applying a discrete Fourier transformation (DFT) to its ACF. For a better interpretation of the PSD, the frequency axis $f = \frac{\Omega}{2 \pi} \cdot f_s$ has been chosen for illustration, where $f_s$ denotes the sampling frequency of the signal. The speech signal is a recording of the vowel 'o' spoken by a German male, loaded into the variable `x`.
In Python the ACF is stored in a vector with indices $0, 1, \dots, 2N - 2$ corresponding to the lags $\kappa = (0, 1, \dots, 2N - 2)^\mathrm{T} - (N-1)$. When computing the discrete Fourier transform (DFT) of the ACF numerically by the fast Fourier transform (FFT) one has to take this shift into account. For instance, by multiplying the DFT $\Phi_{xx}[\mu]$ by $\mathrm{e}^{\mathrm{j} \mu \frac{2 \pi}{2N - 1} (N-1)}$.
```
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
# read audio file
fs, x = wavfile.read('../data/vocal_o_8k.wav')
x = np.asarray(x, dtype=float)
N = len(x)
# compute ACF
acf = 1/N * np.correlate(x, x, mode='full')
# compute PSD
psd = np.fft.fft(acf)
psd = psd * np.exp(1j*np.arange(2*N-1)*2*np.pi*(N-1)/(2*N-1))
f = np.fft.fftfreq(2*N-1, d=1/fs)
# plot PSD
plt.figure(figsize=(10, 4))
plt.plot(f, np.real(psd))
plt.title('Estimated power spectral density')
plt.ylabel(r'$\hat{\Phi}_{xx}(e^{j \Omega})$')
plt.xlabel(r'$f / Hz$')
plt.axis([0, 500, 0, 1.1*max(np.abs(psd))])
plt.grid()
```
**Exercise**
* What does the PSD tell you about the average spectral contents of a speech signal?
Solution: The speech signal exhibits a harmonic structure with the dominant fundamental frequency $f_0 \approx 100$ Hz and a number of harmonics $f_n \approx n \cdot f_0$ for $n > 0$. This is due to the fact that vowels are random signals which are, to a good approximation, periodic. To generate vowels, the sound produced by the periodically vibrating vocal folds is filtered by the resonance volumes and articulators above the voice box. The spectrum of periodic signals is a line spectrum.
## Cross-Power Spectral Density
The cross-power spectral density is defined as the Fourier transformation of the [cross-correlation function](correlation_functions.ipynb#Cross-Correlation-Function) (CCF).
### Definition
For two continuous-amplitude, real-valued, wide-sense stationary (WSS) random signals $x[k]$ and $y[k]$, the cross-power spectral density is given as
\begin{equation}
\Phi_{xy}(\mathrm{e}^{\,\mathrm{j} \, \Omega}) = \mathcal{F}_* \{ \varphi_{xy}[\kappa] \},
\end{equation}
where $\varphi_{xy}[\kappa]$ denotes the CCF of $x[k]$ and $y[k]$. Note again, that the DTFT is performed with respect to $\kappa$. The CCF of two random signals of finite length $N$ and $M$ can be expressed by way of a linear convolution
\begin{equation}
\varphi_{xy}[\kappa] = \frac{1}{N} \cdot x_N[k] * y_M[-k].
\end{equation}
Note that the chosen $\frac{1}{N}$-averaging convention corresponds to the length of the signal $x$. If $N \neq M$, care should be taken in the interpretation of this normalization. In the case of $N=M$, the $\frac{1}{N}$-averaging yields a [biased estimator](https://en.wikipedia.org/wiki/Bias_of_an_estimator) of the CCF, which consequently should be denoted by $\hat{\varphi}_{xy,\mathrm{biased}}[\kappa]$.
Taking the DTFT of the left- and right-hand side from above cross-correlation results in
\begin{equation}
\Phi_{xy}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = \frac{1}{N} \, X_N(\mathrm{e}^{\,\mathrm{j}\,\Omega})\, Y_M(\mathrm{e}^{-\,\mathrm{j}\,\Omega}).
\end{equation}
### Properties
1. The symmetries of $\Phi_{xy}(\mathrm{e}^{\,\mathrm{j}\, \Omega})$ can be derived from the symmetries of the CCF and the DTFT as
$$ \underbrace {\Phi_{xy}(\mathrm{e}^{\,\mathrm{j}\, \Omega}) = \Phi_{xy}^*(\mathrm{e}^{-\,\mathrm{j}\, \Omega})}_{\varphi_{xy}[\kappa] \in \mathbb{R}} =
\underbrace {\Phi_{yx}(\mathrm{e}^{\,- \mathrm{j}\, \Omega}) = \Phi_{yx}^*(\mathrm{e}^{\,\mathrm{j}\, \Omega})}_{\varphi_{yx}[-\kappa] \in \mathbb{R}},$$
from which $|\Phi_{xy}(\mathrm{e}^{\,\mathrm{j}\, \Omega})| = |\Phi_{yx}(\mathrm{e}^{\,\mathrm{j}\, \Omega})|$ can be concluded.
2. The cross PSD of two uncorrelated random signals is given as
$$ \Phi_{xy}(\mathrm{e}^{\,\mathrm{j} \, \Omega}) = \mu_x^2 \mu_y^2 \cdot {\bot \!\! \bot \!\! \bot}\left( \frac{\Omega}{2 \pi} \right) $$
which can be deduced from the CCF of an uncorrelated signal.
### Example - Cross-Power Spectral Density
The following example estimates and plots the cross PSD $\Phi_{xy}(\mathrm{e}^{\,\mathrm{j}\, \Omega})$ of two random signals $x_N[k]$ and $y_M[k]$ of finite lengths $N = 64$ and $M = 512$.
```
N = 64 # length of x
M = 512 # length of y
# generate two uncorrelated random signals
np.random.seed(1)
x = 2 + np.random.normal(size=N)
y = 3 + np.random.normal(size=M)
N = len(x)
M = len(y)
# compute cross PSD via CCF
acf = 1/N * np.correlate(x, y, mode='full')
psd = np.fft.fft(acf)
psd = psd * np.exp(1j*np.arange(N+M-1)*2*np.pi*(M-1)/(2*M-1))
psd = np.fft.fftshift(psd)
Om = 2*np.pi * np.arange(0, N+M-1) / (N+M-1)
Om = Om - np.pi
# plot results
plt.figure(figsize=(10, 4))
plt.stem(Om, np.abs(psd), basefmt='C0:', use_line_collection=True)
plt.title('Biased estimator of cross power spectral density')
plt.ylabel(r'$|\hat{\Phi}_{xy}(e^{j \Omega})|$')
plt.xlabel(r'$\Omega$')
plt.grid()
```
**Exercise**
* What does the cross PSD $\Phi_{xy}(\mathrm{e}^{\,\mathrm{j} \, \Omega})$ tell you about the statistical properties of the two random signals?
Solution: The cross PSD $\Phi_{xy}(\mathrm{e}^{\,\mathrm{j} \, \Omega})$ is essentially non-zero only for $\Omega=0$. It can hence be concluded that the two random signals are not mean-free and are uncorrelated with each other.
**Copyright**
This notebook is provided as [Open Educational Resource](https://en.wikipedia.org/wiki/Open_educational_resources). Feel free to use the notebook for your own purposes. The text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT). Please attribute the work as follows: *Sascha Spors, Digital Signal Processing - Lecture notes featuring computational examples*.
# Implementation of VGG16
> In this notebook I have implemented VGG16 on the CIFAR-10 dataset using PyTorch
```
#importing libraries
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import transforms
import torch.optim as optim
import tqdm
import matplotlib.pyplot as plt
from torchvision.datasets import CIFAR10
from torch.utils.data import random_split
from torch.utils.data.dataloader import DataLoader
```
Load the data and apply standard preprocessing steps, such as resizing and converting the images into tensors
```
transform = transforms.Compose([transforms.Resize(224),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485,0.456,0.406],
std=[0.229,0.224,0.225])])
train_ds = CIFAR10(root='data/',train = True,download=True,transform = transform)
val_ds = CIFAR10(root='data/',train = False,download=True,transform = transform)
batch_size = 128
train_loader = DataLoader(train_ds,batch_size,shuffle=True,num_workers=4,pin_memory=True)
val_loader = DataLoader(val_ds,batch_size,num_workers=4,pin_memory=True)
```
A custom utility class to print out the accuracy and losses during training and testing
```
def accuracy(outputs,labels):
_,preds = torch.max(outputs,dim=1)
return torch.tensor(torch.sum(preds==labels).item()/len(preds))
class ImageClassificationBase(nn.Module):
def training_step(self,batch):
images, labels = batch
out = self(images)
loss = F.cross_entropy(out,labels)
return loss
def validation_step(self,batch):
images, labels = batch
out = self(images)
loss = F.cross_entropy(out,labels)
acc = accuracy(out,labels)
return {'val_loss': loss.detach(),'val_acc': acc}
def validation_epoch_end(self,outputs):
batch_losses = [x['val_loss'] for x in outputs]
epoch_loss = torch.stack(batch_losses).mean()
batch_accs = [x['val_acc'] for x in outputs]
epoch_acc = torch.stack(batch_accs).mean()
return {'val_loss': epoch_loss.item(), 'val_acc': epoch_acc.item()}
def epoch_end(self, epoch, result):
print("Epoch [{}], train_loss: {:.4f}, val_loss: {:.4f}, val_acc: {:.4f}".format(
epoch, result['train_loss'], result['val_loss'], result['val_acc']))
```
### Creating a network
```
VGG_types = {
'VGG11': [64, 'M', 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
'VGG13': [64, 64, 'M', 128, 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
'VGG16': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M', 512, 512, 512, 'M', 512, 512, 512, 'M'],
'VGG19': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 256, 'M', 512, 512, 512, 512, 'M', 512, 512, 512, 512, 'M'],
}
class VGG_net(ImageClassificationBase):
def __init__(self, in_channels=3, num_classes=1000):
super(VGG_net, self).__init__()
self.in_channels = in_channels
self.conv_layers = self.create_conv_layers(VGG_types['VGG16'])
self.fcs = nn.Sequential(
nn.Linear(512*7*7, 4096),
nn.ReLU(),
nn.Dropout(p=0.5),
nn.Linear(4096, 4096),
nn.ReLU(),
nn.Dropout(p=0.5),
nn.Linear(4096, num_classes)
)
def forward(self, x):
x = self.conv_layers(x)
x = x.reshape(x.shape[0], -1)
x = self.fcs(x)
return x
def create_conv_layers(self, architecture):
layers = []
in_channels = self.in_channels
for x in architecture:
if type(x) == int:
out_channels = x
layers += [nn.Conv2d(in_channels=in_channels,out_channels=out_channels,
kernel_size=(3,3), stride=(1,1), padding=(1,1)),
nn.BatchNorm2d(x),
nn.ReLU()]
in_channels = x
elif x == 'M':
layers += [nn.MaxPool2d(kernel_size=(2,2), stride=(2,2))]
return nn.Sequential(*layers)
```
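Before wiring up the data, a quick sanity check (not part of the original training flow) can confirm that a random batch with the expected input shape flows through the network and produces logits of shape `(batch, num_classes)`:
```
# sanity check of the forward pass with a random input batch
tmp_model = VGG_net(in_channels=3, num_classes=10)
dummy = torch.randn(2, 3, 224, 224)
print(tmp_model(dummy).shape) # expected: torch.Size([2, 10])
```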
A custom function to pick a default device
```
def get_default_device():
"""Pick GPU if available else CPU"""
if torch.cuda.is_available():
return torch.device('cuda')
else:
return torch.device('cpu')
device = get_default_device()
device
def to_device(data,device):
"""Move tensors to chosen device"""
if isinstance(data,(list,tuple)):
return [to_device(x,device) for x in data]
return data.to(device,non_blocking=True)
for images, labels in train_loader:
print(images.shape)
images = to_device(images,device)
print(images.device)
break
class DeviceDataLoader():
"""Wrap a DataLoader to move data to a device"""
def __init__(self,dl,device):
self.dl = dl
self.device = device
def __iter__(self):
"""Yield a batch of data to a dataloader"""
for b in self.dl:
yield to_device(b, self.device)
def __len__(self):
"""Number of batches"""
return len(self.dl)
train_loader = DeviceDataLoader(train_loader,device)
val_loader = DeviceDataLoader(val_loader,device)
model = VGG_net(in_channels=3,num_classes=10)
to_device(model,device)
```
### Training the model
```
@torch.no_grad()
def evaluate(model, val_loader):
model.eval()
outputs = [model.validation_step(batch) for batch in val_loader]
return model.validation_epoch_end(outputs)
def fit(epochs, lr, model, train_loader, val_loader, opt_func=torch.optim.SGD):
history = []
train_losses =[]
optimizer = opt_func(model.parameters(), lr)
for epoch in range(epochs):
# Training Phase
model.train()
for batch in train_loader:
loss = model.training_step(batch)
train_losses.append(loss)
loss.backward()
optimizer.step()
optimizer.zero_grad()
# Validation phase
result = evaluate(model, val_loader)
result['train_loss'] = torch.stack(train_losses).mean().item()
model.epoch_end(epoch, result)
history.append(result)
return history
history = [evaluate(model, val_loader)]
history
#history = fit(2,0.1,model,train_loader,val_loader)
```
# REINFORCE in PyTorch
Just like we did before for Q-learning, this time we'll design a PyTorch network to learn `CartPole-v0` via policy gradient (REINFORCE).
Most of the code in this notebook is taken from approximate Q-learning, so you'll find it more or less familiar and even simpler.
```
import sys, os
if 'google.colab' in sys.modules and not os.path.exists('.setup_complete'):
!wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/master/setup_colab.sh -O- | bash
!touch .setup_complete
# This code creates a virtual display to draw game images on.
# It will have no effect if your machine has a monitor.
if type(os.environ.get("DISPLAY")) is not str or len(os.environ.get("DISPLAY")) == 0:
!bash ../xvfb start
os.environ['DISPLAY'] = ':1'
import gym
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
A caveat: with some versions of `pyglet`, the following cell may crash with `NameError: name 'base' is not defined`. The corresponding bug report is [here](https://github.com/pyglet/pyglet/issues/134). If you see this error, try restarting the kernel.
```
env = gym.make("CartPole-v0")
# gym compatibility: unwrap TimeLimit
if hasattr(env, '_max_episode_steps'):
env = env.env
env.reset()
n_actions = env.action_space.n
state_dim = env.observation_space.shape
plt.imshow(env.render("rgb_array"))
```
# Building the network for REINFORCE
For the REINFORCE algorithm, we'll need a model that predicts action probabilities given states.
For numerical stability, please __do not include the softmax layer in your network architecture__.
We'll use softmax or log-softmax where appropriate.
```
import torch
import torch.nn as nn
# Build a simple neural network that predicts policy logits.
# Keep it simple: CartPole isn't worth deep architectures.
model = nn.Sequential(
<YOUR CODE: define a neural network that predicts policy logits>
)
```
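One possible way to fill in the network (a sketch, not the only valid answer) is a small two-layer MLP that maps states to action logits:
```
# one possible architecture (sketch): state -> 64 -> 64 -> n_actions logits
model = nn.Sequential(
    nn.Linear(state_dim[0], 64),
    nn.ReLU(),
    nn.Linear(64, 64),
    nn.ReLU(),
    nn.Linear(64, n_actions),
)
```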
#### Predict function
Note: output value of this function is not a torch tensor, it's a numpy array.
So, here gradient calculation is not needed.
<br>
Use [no_grad](https://pytorch.org/docs/stable/autograd.html#torch.autograd.no_grad)
to suppress gradient calculation.
<br>
Also, `.detach()` (or legacy `.data` property) can be used instead, but there is a difference:
<br>
With `.detach()` computational graph is built but then disconnected from a particular tensor,
so `.detach()` should be used if that graph is needed for backprop via some other (not detached) tensor;
<br>
In contrast, no graph is built by any operation in `no_grad()` context, thus it's preferable here.
```
def predict_probs(states):
"""
Predict action probabilities given states.
:param states: numpy array of shape [batch, state_shape]
:returns: numpy array of shape [batch, n_actions]
"""
# convert states, compute logits, use softmax to get probability
<YOUR CODE>
return <YOUR CODE>
test_states = np.array([env.reset() for _ in range(5)])
test_probas = predict_probs(test_states)
assert isinstance(test_probas, np.ndarray), \
"you must return np array and not %s" % type(test_probas)
assert tuple(test_probas.shape) == (test_states.shape[0], env.action_space.n), \
"wrong output shape: %s" % np.shape(test_probas)
assert np.allclose(np.sum(test_probas, axis=1), 1), "probabilities do not sum to 1"
```
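A possible implementation sketch: convert the states to a tensor, evaluate the model inside a `no_grad()` block, apply a softmax, and return a numpy array.
```
# possible implementation sketch for predict_probs
def predict_probs(states):
    states = torch.tensor(states, dtype=torch.float32)
    with torch.no_grad():
        logits = model(states)
        probs = nn.functional.softmax(logits, dim=-1)
    return probs.numpy()
```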
### Play the game
We can now use our newly built agent to play the game.
```
def generate_session(env, t_max=1000):
"""
Play a full session with REINFORCE agent.
Returns sequences of states, actions, and rewards.
"""
# arrays to record session
states, actions, rewards = [], [], []
s = env.reset()
for t in range(t_max):
# action probabilities array aka pi(a|s)
action_probs = predict_probs(np.array([s]))[0]
# Sample action with given probabilities.
a = <YOUR CODE>
new_s, r, done, info = env.step(a)
# record session history to train later
states.append(s)
actions.append(a)
rewards.append(r)
s = new_s
if done:
break
return states, actions, rewards
# test it
states, actions, rewards = generate_session(env)
```
### Computing cumulative rewards
$$
\begin{align*}
G_t &= r_t + \gamma r_{t + 1} + \gamma^2 r_{t + 2} + \ldots \\
&= \sum_{i = t}^T \gamma^{i - t} r_i \\
&= r_t + \gamma * G_{t + 1}
\end{align*}
$$
```
def get_cumulative_rewards(rewards, # rewards at each step
gamma=0.99 # discount for reward
):
"""
Take a list of immediate rewards r(s,a) for the whole session
and compute cumulative returns (a.k.a. G(s,a) in Sutton '16).
G_t = r_t + gamma*r_{t+1} + gamma^2*r_{t+2} + ...
A simple way to compute cumulative rewards is to iterate from the last
to the first timestep and compute G_t = r_t + gamma*G_{t+1} recurrently
You must return an array/list of cumulative rewards with as many elements as in the initial rewards.
"""
<YOUR CODE>
return <YOUR CODE: array of cumulative rewards>
get_cumulative_rewards(rewards)
assert len(get_cumulative_rewards(list(range(100)))) == 100
assert np.allclose(
get_cumulative_rewards([0, 0, 1, 0, 0, 1, 0], gamma=0.9),
[1.40049, 1.5561, 1.729, 0.81, 0.9, 1.0, 0.0])
assert np.allclose(
get_cumulative_rewards([0, 0, 1, -2, 3, -4, 0], gamma=0.5),
[0.0625, 0.125, 0.25, -1.5, 1.0, -4.0, 0.0])
assert np.allclose(
get_cumulative_rewards([0, 0, 1, 2, 3, 4, 0], gamma=0),
[0, 0, 1, 2, 3, 4, 0])
print("looks good!")
```
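A straightforward implementation sketch iterates backwards over the rewards and applies the recursion $G_t = r_t + \gamma \, G_{t+1}$:
```
# possible implementation sketch for get_cumulative_rewards
def get_cumulative_rewards(rewards, gamma=0.99):
    cumulative = [0.0] * len(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        cumulative[t] = running
    return cumulative
```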
#### Loss function and updates
We now need to define objective and update over policy gradient.
Our objective function is
$$ J \approx { 1 \over N } \sum_{s_i,a_i} G(s_i,a_i) $$
REINFORCE defines a way to compute the gradient of the expected reward with respect to policy parameters. The formula is as follows:
$$ \nabla_\theta \hat J(\theta) \approx { 1 \over N } \sum_{s_i, a_i} \nabla_\theta \log \pi_\theta (a_i \mid s_i) \cdot G_t(s_i, a_i) $$
We can abuse PyTorch's capabilities for automatic differentiation by defining our objective function as follows:
$$ \hat J(\theta) \approx { 1 \over N } \sum_{s_i, a_i} \log \pi_\theta (a_i \mid s_i) \cdot G_t(s_i, a_i) $$
When you compute the gradient of that function with respect to network weights $\theta$, it will become exactly the policy gradient.
```
def to_one_hot(y_tensor, ndims):
""" helper: take an integer vector and convert it to 1-hot matrix. """
y_tensor = y_tensor.type(torch.LongTensor).view(-1, 1)
y_one_hot = torch.zeros(
y_tensor.size()[0], ndims).scatter_(1, y_tensor, 1)
return y_one_hot
# Your code: define optimizers
optimizer = torch.optim.Adam(model.parameters(), 1e-3)
def train_on_session(states, actions, rewards, gamma=0.99, entropy_coef=1e-2):
"""
Takes a sequence of states, actions and rewards produced by generate_session.
Updates agent's weights by following the policy gradient above.
Please use Adam optimizer with default parameters.
"""
# cast everything into torch tensors
states = torch.tensor(states, dtype=torch.float32)
actions = torch.tensor(actions, dtype=torch.int32)
cumulative_returns = np.array(get_cumulative_rewards(rewards, gamma))
cumulative_returns = torch.tensor(cumulative_returns, dtype=torch.float32)
# predict logits, probas and log-probas using an agent.
logits = model(states)
probs = nn.functional.softmax(logits, -1)
log_probs = nn.functional.log_softmax(logits, -1)
assert all(isinstance(v, torch.Tensor) for v in [logits, probs, log_probs]), \
"please use compute using torch tensors and don't use predict_probs function"
# select log-probabilities for chosen actions, log pi(a_i|s_i)
log_probs_for_actions = torch.sum(
log_probs * to_one_hot(actions, env.action_space.n), dim=1)
    # Compute loss here. Don't forget entropy regularization with `entropy_coef`
entropy = <YOUR CODE>
loss = <YOUR CODE>
# Gradient descent step
<YOUR CODE>
# technical: return session rewards to print them later
return np.sum(rewards)
```
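One way to complete the missing pieces (a sketch; these lines would go inside `train_on_session` in place of the placeholders): the entropy term is $-\sum_a \pi(a|s)\log\pi(a|s)$ averaged over the batch, and the surrogate objective is the negative mean of $\log\pi(a_i|s_i)\,G_t$ minus the entropy bonus.
```
# sketch of the loss computation and gradient step for train_on_session
entropy = -(probs * log_probs).sum(dim=1).mean()
loss = -(log_probs_for_actions * cumulative_returns).mean() - entropy_coef * entropy
optimizer.zero_grad()
loss.backward()
optimizer.step()
```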
### The actual training
```
for i in range(100):
rewards = [train_on_session(*generate_session(env)) for _ in range(100)] # generate new sessions
print("mean reward:%.3f" % (np.mean(rewards)))
if np.mean(rewards) > 500:
print("You Win!") # but you can train even further
break
```
### Results & video
```
# Record sessions
import gym.wrappers
with gym.wrappers.Monitor(gym.make("CartPole-v0"), directory="videos", force=True) as env_monitor:
sessions = [generate_session(env_monitor) for _ in range(100)]
# Show video. This may not work in some setups. If it doesn't
# work for you, you can download the videos and view them locally.
from pathlib import Path
from base64 import b64encode
from IPython.display import HTML
video_paths = sorted([s for s in Path('videos').iterdir() if s.suffix == '.mp4'])
video_path = video_paths[-1] # You can also try other indices
if 'google.colab' in sys.modules:
# https://stackoverflow.com/a/57378660/1214547
with video_path.open('rb') as fp:
mp4 = fp.read()
data_url = 'data:video/mp4;base64,' + b64encode(mp4).decode()
else:
data_url = str(video_path)
HTML("""
<video width="640" height="480" controls>
<source src="{}" type="video/mp4">
</video>
""".format(data_url))
```
# BE 240 Lecture 4
# Sub-SBML
## Modeling diffusion, shared resources, and compartmentalized systems
## _Ayush Pandey_
```
# This notebook is designed to be converted to a HTML slide show
# To do this in the command prompt type (in the folder containing the notebook):
# jupyter nbconvert BE240_Lecture4_Sub-SBML.ipynb --to slides
```


# An example:
### Three different "subsystems" - each with its SBML model
### Another "signal in mixture" subsystem - models signal in the environment / mixture
### Using Sub-SBML we can obtain the combined model for such a system with
* transport across membrane
* shared resources : ATP, Ribosome etc
* resolve naming conflicts (Ribo, Ribosome, RNAP, RNAPolymerase etc.)

# Installing Sub-SBML
```
git clone https://github.com/BuildACell/subsbml.git
```
cd to `subsbml` directory then run the following command to install the package in your environment:
```
python setup.py install
```
# Dependencies:
1. python-libsbml : Run `pip install python-libsbml`, if you don't have it already. You probably already have this installed as it is also a dependency for bioscrape
1. A simulator: You will need a simulator of your choice to simulate the SBML models that Sub-SBML generates. Bioscrape is an example of a simulator and we will be using that for simulations.
# Update your bioscrape installation
From the bioscrape directory, run the following if you do not have a remote fork (your own Github fork of the original bioscrape repository - `biocircuits/bioscrape`. To list all remote repositories that your bioscrape directory is connected to you can run `git remote -v`. The `origin` in the next two commands corresponds to the biocircuits/bioscrape github repository (you should change it if your remote has a different name)
```
git pull origin master
python setup.py install
```
Update your BioCRNpyler installation as well - if you plan to use your own BioCRNpyler models with Sub-SBML. Run the same commands as for bioscrape from the BioCRNpyler directory.
## Sub-SBML notes:
## On "name" and "identifier":
> SBML elements can have a name and an identifier argument. A `name` is supposed to be a human readable name of the particular element in the model. On the other hand, an `identifier` is what the software tool reads. Hence, `identifier` argument in an SBML model is mandatory whereas `name` argument is optional.
Sub-SBML works with `name` arguments of various model components to figure out what components interact/get combined/shared etc. Bioscrape/BioCRNpyler and other common software tools generate SBML models with `name` arguments added to various components such as species, parameters. As an example, to combine two species, Sub-SBML looks at the names of the two species and if they are the same - they are combined together and given a new identifier but the name remains the same.
## A simple Sub-SBML use case:
A simple example where we have two different models : transcription and translation. Using Sub-SBML, we can combine these two together and run simulations.
```
# Import statements
from subsbml.Subsystem import createNewSubsystem, createSubsystem
import numpy as np
import pylab as plt
```
## Transcription Model:
Consider the following simple transcription-only model where $G$ is a gene, $T$ is a transcript, and $S$ is the signaling molecule.
We can write the following reduced order dynamics:
1. $G \xrightarrow[]{\rho_{tx}(G, S)} G + T$;
\begin{align}
\rho_{tx}(G, S) = G K_{X}\frac{S^{2}}{K_{S}^{2}+S^{2}}
\\
\end{align}
Here, $S$ is the inducer signal that cooperatively activates the transcription of the gene $G$. Since this is a positive activation of the gene by the inducer, we have a positive proportional Hill function.
1. $T \xrightarrow[]{\delta} \varnothing$; massaction kinetics at rate $\delta$.
## Translation model:
1. $T \xrightarrow[]{\rho_{tl}(T)} T+X$;
\begin{align}
\rho_{tl}(T) = K_{TR} \frac{T}{K_{R} + T}
\\
\end{align}
Here $X$ is the protein species.
The lumped parameters $K_{TR}$ and $K_R$ model effects due to ribosome saturation. This is similar to the Hill function derived in the enzymatic reaction example.
1. $X \xrightarrow[]{\delta} \varnothing$; massaction kinetics at rate $\delta$.
```
# Import SBML models by creating Subsystem class objects
ss1 = createSubsystem('transcription_SBML_model.xml')
ss2 = createSubsystem('translation_SBML_model.xml')
ss1.renameSName('mRNA_T', 'T')
# Combine the two subsystems together
tx_tl_subsystem = ss1 + ss2
# The longer way to do the same thing:
# tx_tl_subsystem = createNewSubsystem()
# tx_tl_subsystem.combineSubsystems([ss1,ss2], verbose = True)
# Set signal concentration (input) - manually and get ID for protein X
X_id = tx_tl_subsystem.getSpeciesByName('X').getId()
# Writing a Subsystem to an SBML file (Export SBML)
_ = tx_tl_subsystem.writeSBML('txtl_ss.xml')
tx_tl_subsystem.setSpeciesAmount('S',10)
try:
# Simulate with Bioscrape and plot the result
timepoints = np.linspace(0,100,100)
results, _ = tx_tl_subsystem.simulateWithBioscrape(timepoints)
plt.plot(timepoints, results[X_id], linewidth = 3, label = 'S = 10')
tx_tl_subsystem.setSpeciesAmount('S',5)
results, _ = tx_tl_subsystem.simulateWithBioscrape(timepoints)
plt.plot(timepoints, results[X_id], linewidth = 3, label = 'S = 5')
plt.title('Protein X dynamics')
plt.ylabel('[X]')
plt.xlabel('Time')
plt.legend()
plt.show()
except:
print('Simulator not found')
# Viewing the change log for the changes that Sub-SBML made
# print(ss1.changeLog)
# print(ss2.changeLog)
print(tx_tl_subsystem.changeLog)
```
## Signal induction model:
1. $\varnothing \xrightarrow[]{\rho(I)} S$;
\begin{align}
\rho(I) = K_{0} \frac{I^2}{K_{I} + I^2}
\\
\end{align}
Here $S$ is the signal produced on induction by an inducer $I$.
The lumped parameters $K_{0}$ and $K_I$ model effects of cooperative production of the signal by the inducer. This is similar to the Hill function derived in the enzymatic reaction example.
```
ss3 = createSubsystem('signal_in_mixture.xml')
# Signal subsystem (production of signal molecule)
combined_ss = ss1 + ss2 + ss3
# Alternatively
combined_ss = createNewSubsystem()
combined_ss.combineSubsystems([ss1,ss2,ss3])
# Writing a Subsystem to an SBML file (Export SBML)
combined_ss.writeSBML('txtl_combined.xml')
# Set signal concentration (input) - manually and get ID for protein X
combined_ss.setSpeciesAmount('I',10)
X_id = combined_ss.getSpeciesByName('X').getId()
try:
# Simulate with Bioscrape and plot the result
timepoints = np.linspace(0,100,100)
results, _ = combined_ss.simulateWithBioscrape(timepoints)
plt.plot(timepoints, results[X_id], linewidth = 3, label = 'I = 10')
combined_ss.setSpeciesAmount('I',2)
results, _ = combined_ss.simulateWithBioscrape(timepoints)
    plt.plot(timepoints, results[X_id], linewidth = 3, label = 'I = 2')
plt.title('Protein X dynamics')
plt.ylabel('[X]')
plt.xlabel('Time')
plt.legend()
plt.show()
except:
print('Simulator not found')
combined_ss.changeLog
```
## What does Sub-SBML look for?
1. For compartments: if two compartments have the same `name` and the same `size` attributes => they are combined together.
1. For species: if two species have the same `name` attribute => they are combined together. If initial amount is not the same, the first amount is set. It is easy to set species amounts later.
1. For parameters: if two parameters have the same `name` attribute **and** the same `value` => they are combined together.
1. For reactions: if two reactions have the same `name` **and** the same reaction string (reactants -> products) => they are combined together.
1. Other SBML components are also merged.
# Utility functions for Subsystems
1. Set `verbose` keyword argument to `True` to get a list of detailed warning messages that describe the changes being made to the models. Helpful in debugging and creating clean models when combining multiple models.
1. Use `renameSName` method for a `Subsystem` to rename any species' names throughout a model and `renameSIdRefs` to rename identifiers.
1. Use `createBasicSubsystem()` function to get a basic "empty" subsystem model.
1. Use `getSpeciesByName` to get all species with a given name in a Subsystem model.
1. Use the `shareSubsystems` method, similar to the `combineSubsystems` method, if you are only interested in getting a model with shared resource species combined together.
1. Set `combineNames` keyword argument to `False` in `combineSubsystems` method to combine the Subsystem objects but treating the elements with the same `name` as different.
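For illustration, a couple of these utilities applied to the transcription subsystem used earlier in this notebook (a sketch; the species names assume the models loaded above):
```
# sketch: a few utility calls on the transcription subsystem from earlier
ss_demo = createSubsystem('transcription_SBML_model.xml')
ss_demo.renameSName('mRNA_T', 'T')            # rename a species throughout the model
T_id = ss_demo.getSpeciesByName('T').getId()  # look up a species by its name
print(T_id)
```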
# Modeling transport across membranes

## System 1 : TX-TL with IPTG reservoir and no membrane
```
from subsbml.System import System, combineSystems
cell_1 = System('cell_1')
ss1 = createSubsystem('txtl_ss.xml')
ss1.renameSName('S', 'IPTG')
ss2 = createSubsystem('IPTG_reservoir.xml')
IPTG_external_conc = ss2.getSpeciesByName('IPTG').getInitialConcentration()
cell_1.setInternal([ss1])
cell_1.setExternal([ss2])
# cell_1.setMembrane() # Membrane-less system
ss1.setSpeciesAmount('IPTG', IPTG_external_conc)
cell_1_model = cell_1.getModel() # Get a Subsystem object that represents the combined model for cell_1
cell_1_model.writeSBML('cell_1_model.xml')
```
## System 2 : TX-TL with IPTG reservoir and a simple membrane
### Membrane : IPTG external and internal diffusion in a one step reversible reaction
```
from subsbml import System, createSubsystem, combineSystems, createNewSubsystem
ss1 = createSubsystem('txtl_ss.xml')
ss1.renameSName('S','IPTG')
ss2 = createSubsystem('IPTG_reservoir.xml')
# Create a simple IPTG membrane where IPTG goes in an out of the membrane via a reversible reaction
mb2 = createSubsystem('membrane_IPTG.xml', membrane = True)
# cell_2 = System('cell_2',ListOfInternalSubsystems = [ss1],
# ListOfExternalSubsystems = [ss2],
# ListOfMembraneSubsystems = [mb2])
cell_2 = System('cell_2')
cell_2.setInternal(ss1)
cell_2.setExternal(ss2)
cell_2.setMembrane(mb2)
cell_2_model = cell_2.getModel()
cell_2_model.setSpeciesAmount('IPTG', 1e4, compartment = 'cell_2_external')
cell_2_model.writeSBML('cell_2_model.xml')
```
## System 3 : TX-TL with IPTG reservoir and a detailed membrane diffusion
### Membrane : IPTG external binds to a transport protein and forms a complex. This complex causes the diffusion of IPTG in the internal of the cell.
```
# Create a more detailed IPTG membrane where IPTG binds to an intermediate transporter protein, forms a complex
# then transports out of the cell system to the external environment
mb3 = createSubsystem('membrane_IPTG_detailed.xml', membrane = True)
cell_3 = System('cell_3',ListOfInternalSubsystems = [ss1],
ListOfExternalSubsystems = [ss2],
ListOfMembraneSubsystems = [mb3])
cell_3_model = cell_3.getModel()
cell_3_model.setSpeciesAmount('IPTG', 1e4, compartment = 'cell_3_external')
cell_3_model.writeSBML('cell_3_model.xml')
combined_model = combineSystems([cell_1, cell_2, cell_3])
try:
import numpy as np
import matplotlib.pyplot as plt
timepoints = np.linspace(0,2,100)
results_1, _ = cell_1_model.simulateWithBioscrape(timepoints)
results_2, _ = cell_2_model.simulateWithBioscrape(timepoints)
results_3, _ = cell_3_model.simulateWithBioscrape(timepoints)
X_id1 = cell_1_model.getSpeciesByName('X').getId()
X_id2 = cell_2_model.getSpeciesByName('X', compartment = 'cell_2_internal').getId()
X_id3 = cell_3_model.getSpeciesByName('X', compartment = 'cell_3_internal').getId()
plt.plot(timepoints, results_1[X_id1], linewidth = 3, label = 'No membrane')
plt.plot(timepoints, results_2[X_id2], linewidth = 3, label = 'Simple membrane')
plt.plot(timepoints, results_3[X_id3], linewidth = 3, label = 'Advanced membrane')
plt.xlabel('Time')
plt.ylabel('[X]')
plt.legend()
plt.show()
timepoints = np.linspace(0,200,100)
results_1, _ = cell_1_model.simulateWithBioscrape(timepoints)
results_2, _ = cell_2_model.simulateWithBioscrape(timepoints)
results_3, _ = cell_3_model.simulateWithBioscrape(timepoints)
X_id1 = cell_1_model.getSpeciesByName('X').getId()
X_id2 = cell_2_model.getSpeciesByName('X', compartment = 'cell_2_internal').getId()
X_id3 = cell_3_model.getSpeciesByName('X', compartment = 'cell_3_internal').getId()
plt.plot(timepoints, results_1[X_id1], linewidth = 3, label = 'No membrane')
plt.plot(timepoints, results_2[X_id2], linewidth = 3, label = 'Simple membrane')
plt.plot(timepoints, results_3[X_id3], linewidth = 3, label = 'Advanced membrane')
plt.xlabel('Time')
plt.ylabel('[X]')
plt.legend()
plt.show()
except:
print('Simulator not found')
```
# Additional Sub-SBML Tools:
* Create SBML models directly using `SimpleModel` class
* Simulate directly using `bioscrape` or `libRoadRunner` with various simulation options
* Various utility functions to edit SBML models:
1. Change species names/identifiers throughout an SBML model.
1. Edit parameter values or species initial conditions easily (directly in an SBML model).
* `combineSystems` function can be used to combine multiple `System` objects together as shown in the previous cell. Also, a special use case interaction modeling function is available : `connectSubsystems`. Refer to the tutorial_interconnetion.ipynb notebook in the tutorials directory for more information about this.
# Things to Try:
1. Compartmentalize your own SBML model - generate more than 1 model each with a different compartment names. Using tools in this notebook, try to combine your models together and regenerate the expected simulation.
1. Implement a diffusion model and use it as a membrane model for a `System` of your choice.
1. Implement an even more complicated diffusion model for the above example and run the simulation.
1. **The package has not been tested extensively. So, it would be really great if you could raise [issues](https://github.com/BuildACell/subsbml/issues) on Github if you face any errors with your models. Also, feel free to send a message on Slack channel or DM.**
# Examples of usage of Gate Angle Placeholder
The word "Placeholder" is used in Qubiter (we are in good company, Tensorflow uses this word in the same way) to mean a variable for which we delay/postpone assigning a numerical value (evaluating it) until a later time. In the case of Qubiter, it is useful to define gates with placeholders standing for angles. One can postpone evaluating those placeholders until one is ready to call the circuit simulator, and then pass the values of the placeholders as an argument to the simulator’s constructor. Placeholders of this type can be useful, for example, with quantum neural nets (QNNs). In some QNN algorithms, the circuit gate structure is fixed but the angles of the gates are varied many times, gradually, trying to lower a cost function each time.
> In Qubiter, legal variable names must be of form `#3` or `-#3` or `#3*.5` or
`-#3*.5` where 3 can be replaced by any non-negative int, and .5 can
be replaced by anything that can be an argument of float() without
throwing an exception. In this example, the 3 that follows the hash
character is called the variable number
>NEW! (functional placeholder variables)
Now legal variable names can ALSO be of the form `my_fun#1#2` or
`-my_fun#1#2`, where
* the 1 and 2 can be replaced by any non-negative integers and there
might be any number > 0 of hash variables. Thus, there need not
always be precisely 2 hash variables as in the example.
* `my_fun` can be replaced by the name of any function with one or
more input floats (2 inputs in the example), as long as the first
character of the function's name is a lower case letter.
>The strings `my_fun#1#2` or `-my_fun#1#2` indicate than one wants to
use for the angle being replaced, the values of `my_fun(#1, #2)` or
`-my_fun(#1, #2)`, respectively, where the inputs #1 and #2 are
floats standing for radians and the output is also a float standing
for radians.
```
import os
import sys
print(os.getcwd())
os.chdir('../../')
print(os.getcwd())
sys.path.insert(0,os.getcwd())
```
We begin by writing a simple circuit with 4 qubits. As usual, the following code will
write an English and a Picture file in the `io_folder` directory. Note that some
angles have been entered into the write() Python functions as legal
variable names instead of floats. In the English file, you will see those legal
names where the numerical values of those angles would have been.
```
from qubiter.SEO_writer import *
from qubiter.SEO_reader import *
from qubiter.EchoingSEO_reader import *
from qubiter.SEO_simulator import *
num_bits = 4
file_prefix = 'placeholder_test'
emb = CktEmbedder(num_bits, num_bits)
wr = SEO_writer(file_prefix, emb)
wr.write_Rx(2, rads=np.pi/7)
wr.write_Rx(1, rads='#2*.5')
wr.write_Rx(1, rads='my_fun1#2')
wr.write_Rn(3, rads_list=['#1', '-#1*3', '#3'])
wr.write_Rx(1, rads='-my_fun2#2#1')
wr.write_cnot(2, 3)
wr.close_files()
```
The following 2 files were just written:
1. <a href='../io_folder/placeholder_test_4_eng.txt'>../io_folder/placeholder_test_4_eng.txt</a>
2. <a href='../io_folder/placeholder_test_4_ZLpic.txt'>../io_folder/placeholder_test_4_ZLpic.txt</a>
Simply by creating an object of the class SEO_reader with the flag `write_log` set equal to True, you can create a log file which contains
* a list of distinct variable numbers
* a list of distinct function names
encountered in the English file
```
rdr = SEO_reader(file_prefix, num_bits, write_log=True)
```
The following log file was just written:
<a href='../io_folder/placeholder_test_4_log.txt'>../io_folder/placeholder_test_4_log.txt</a>
Next, let us create two functions that will be used for the functional placeholders
```
def my_fun1(x):
return x*.5
def my_fun2(x, y):
return x + y
```
**Partial Substitution**
This creates new files
with `#1=30`, `#2=60`, `'my_fun1'->my_fun1`,
but `#3` and `'my_fun2'` still undecided
```
vman = PlaceholderManager(eval_all_vars=False,
var_num_to_rads={1: np.pi/6, 2: np.pi/3},
fun_name_to_fun={'my_fun1': my_fun1})
wr = SEO_writer(file_prefix + '_eval01', emb)
EchoingSEO_reader(file_prefix, num_bits, wr,
vars_manager=vman)
```
The following 2 files were just written:
1. <a href='../io_folder/placeholder_test_eval01_4_eng.txt'>../io_folder/placeholder_test_eval01_4_eng.txt</a>
2. <a href='../io_folder/placeholder_test_eval01_4_ZLpic.txt'>../io_folder/placeholder_test_eval01_4_ZLpic.txt</a>
The following code runs the simulator after substituting
`#1=30`, `#2=60`, `#3=90`, `'my_fun1'->my_fun1`, `'my_fun2'->my_fun2`
```
vman = PlaceholderManager(
var_num_to_rads={1: np.pi/6, 2: np.pi/3, 3: np.pi/2},
fun_name_to_fun={'my_fun1': my_fun1, 'my_fun2': my_fun2}
)
sim = SEO_simulator(file_prefix, num_bits, verbose=False,
vars_manager=vman)
StateVec.describe_st_vec_dict(sim.cur_st_vec_dict)
```
# The art of using pipelines
Pipelines are a natural way to think about a machine learning system. Indeed with some practice a data scientist can visualise data "flowing" through a series of steps. The input is typically some raw data which has to be processed in some manner. The goal is to represent the data in such a way that it can be ingested by a machine learning algorithm. Along the way some steps will extract features, while others will normalize the data and remove undesirable elements. Pipelines are simple, and yet they are a powerful way of designing sophisticated machine learning systems.
Both [scikit-learn](https://stackoverflow.com/questions/33091376/python-what-is-exactly-sklearn-pipeline-pipeline) and [pandas](https://tomaugspurger.github.io/method-chaining) make it possible to use pipelines. However it's quite rare to see pipelines being used in practice (at least on Kaggle). Sometimes you get to see people using scikit-learn's `pipeline` module, however the `pipe` method from `pandas` is sadly underappreciated. A big reason why pipelines are not given much love is that it's easier to think of batch learning in terms of a script or a notebook. Indeed many people doing data science seem to prefer a procedural style to a declarative style. Moreover in practice pipelines can be a bit rigid if one wishes to do non-orthodox operations.
Although pipelines may be a bit of an odd fit for batch learning, they make complete sense when they are used for online learning. Indeed the UNIX philosophy has advocated the use of pipelines for data processing for many decades. If you can visualise data as a stream of observations then using pipelines should make a lot of sense to you. We'll attempt to convince you by writing a machine learning algorithm in a procedural way and then converting it to a declarative pipeline in small steps. Hopefully by the end you'll be convinced, or not!
In this notebook we'll manipulate data from the [Kaggle Recruit Restaurants Visitor Forecasting competition](https://www.kaggle.com/c/recruit-restaurant-visitor-forecasting). The data is directly available through `river`'s `datasets` module.
```
from pprint import pprint
from river import datasets
for x, y in datasets.Restaurants():
pprint(x)
pprint(y)
break
```
We'll start by building and running a model using a procedural coding style. The performance of the model doesn't matter, we're simply interested in the design of the model.
```
from river import feature_extraction
from river import linear_model
from river import metrics
from river import preprocessing
from river import stats
means = (
feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(7)),
feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(14)),
feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(21))
)
scaler = preprocessing.StandardScaler()
lin_reg = linear_model.LinearRegression()
metric = metrics.MAE()
for x, y in datasets.Restaurants():
# Derive date features
x['weekday'] = x['date'].weekday()
x['is_weekend'] = x['date'].weekday() in (5, 6)
# Process the rolling means of the target
for mean in means:
x = {**x, **mean.transform_one(x)}
mean.learn_one(x, y)
# Remove the key/value pairs that aren't features
for key in ['store_id', 'date', 'genre_name', 'area_name', 'latitude', 'longitude']:
x.pop(key)
# Rescale the data
x = scaler.learn_one(x).transform_one(x)
# Fit the linear regression
y_pred = lin_reg.predict_one(x)
lin_reg.learn_one(x, y)
# Update the metric using the out-of-fold prediction
metric.update(y, y_pred)
print(metric)
```
We're not using many features. We can print the last `x` to get an idea of the features (don't forget they've been scaled!)
```
pprint(x)
```
The above chunk of code is quite explicit but it's a bit verbose. The whole point of libraries such as `river` is to make life easier for users. Moreover there's too much space for users to mess up the order in which things are done, which increases the chance of there being target leakage. We'll now rewrite our model in a declarative fashion using a pipeline *à la sklearn*.
```
from river import compose
def get_date_features(x):
weekday = x['date'].weekday()
return {'weekday': weekday, 'is_weekend': weekday in (5, 6)}
model = compose.Pipeline(
('features', compose.TransformerUnion(
('date_features', compose.FuncTransformer(get_date_features)),
('last_7_mean', feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(7))),
('last_14_mean', feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(14))),
('last_21_mean', feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(21)))
)),
('drop_non_features', compose.Discard('store_id', 'date', 'genre_name', 'area_name', 'latitude', 'longitude')),
('scale', preprocessing.StandardScaler()),
('lin_reg', linear_model.LinearRegression())
)
metric = metrics.MAE()
for x, y in datasets.Restaurants():
# Make a prediction without using the target
y_pred = model.predict_one(x)
# Update the model using the target
model.learn_one(x, y)
# Update the metric using the out-of-fold prediction
metric.update(y, y_pred)
print(metric)
```
We use a `Pipeline` to arrange each step in a sequential order. A `TransformerUnion` is used to merge multiple feature extractors into a single transformer. The `for` loop is now much shorter and is thus easier to grok: we get the out-of-fold prediction, we fit the model, and finally we update the metric. This way of evaluating a model is typical of online learning, and so we wrapped it inside a function called `progressive_val_score`, which is part of the `evaluate` module. We can use it to replace the `for` loop.
```
from river import evaluate
model = compose.Pipeline(
('features', compose.TransformerUnion(
('date_features', compose.FuncTransformer(get_date_features)),
('last_7_mean', feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(7))),
('last_14_mean', feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(14))),
('last_21_mean', feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(21)))
)),
('drop_non_features', compose.Discard('store_id', 'date', 'genre_name', 'area_name', 'latitude', 'longitude')),
('scale', preprocessing.StandardScaler()),
('lin_reg', linear_model.LinearRegression())
)
evaluate.progressive_val_score(dataset=datasets.Restaurants(), model=model, metric=metrics.MAE())
```
Notice that you couldn't have used the `progressive_val_score` method if you wrote the model in a procedural manner.
Our code is getting shorter, but it's still a bit difficult on the eyes. Indeed there is a lot of boilerplate code associated with pipelines that can get tedious to write. However `river` has some special tricks up its sleeve to save you from a lot of pain.
The first trick is that the name of each step in the pipeline can be omitted. If no name is given for a step then `river` automatically infers one.
```
model = compose.Pipeline(
compose.TransformerUnion(
compose.FuncTransformer(get_date_features),
feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(7)),
feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(14)),
feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(21))
),
compose.Discard('store_id', 'date', 'genre_name', 'area_name', 'latitude', 'longitude'),
preprocessing.StandardScaler(),
linear_model.LinearRegression()
)
evaluate.progressive_val_score(datasets.Restaurants(), model, metrics.MAE())
```
Under the hood a `Pipeline` inherits from `collections.OrderedDict`. Indeed this makes sense because if you think about it a `Pipeline` is simply a sequence of steps where each step has a name. The reason we mention this is because it means you can manipulate a `Pipeline` the same way you would manipulate an ordinary `dict`. For instance, we can print the name of each step by iterating over `model.steps`.
```
for name in model.steps:
print(name)
```
The first step is a `TransformerUnion`, and its string representation contains the string representation of each of its elements. Not having to write names saves some time and space and is certainly less tedious.
The next trick is that we can use mathematical operators to compose our pipeline. For example we can use the `+` operator to merge `Transformer`s into a `TransformerUnion`.
```
model = compose.Pipeline(
compose.FuncTransformer(get_date_features) + \
feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(7)) + \
feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(14)) + \
feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(21)),
compose.Discard('store_id', 'date', 'genre_name', 'area_name', 'latitude', 'longitude'),
preprocessing.StandardScaler(),
linear_model.LinearRegression()
)
evaluate.progressive_val_score(datasets.Restaurants(), model, metrics.MAE())
```
Likewise, we can use the `|` operator to assemble steps into a `Pipeline`.
```
model = (
compose.FuncTransformer(get_date_features) +
feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(7)) +
feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(14)) +
feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(21))
)
to_discard = ['store_id', 'date', 'genre_name', 'area_name', 'latitude', 'longitude']
model = model | compose.Discard(*to_discard) | preprocessing.StandardScaler()
model |= linear_model.LinearRegression()
evaluate.progressive_val_score(datasets.Restaurants(), model, metrics.MAE())
```
Hopefully you'll agree that this is a powerful way to express machine learning pipelines. For some people this should be quite reminiscent of the UNIX pipe operator. One final trick we want to mention is that functions are automatically wrapped with a `FuncTransformer`, which can be quite handy.
```
model = get_date_features
for n in [7, 14, 21]:
model += feature_extraction.TargetAgg(by='store_id', how=stats.RollingMean(n))
model |= compose.Discard(*to_discard)
model |= preprocessing.StandardScaler()
model |= linear_model.LinearRegression()
evaluate.progressive_val_score(datasets.Restaurants(), model, metrics.MAE())
```
Naturally some may prefer the procedural style we first used because they find it easier to work with. It all depends on your style and you should use what you feel comfortable with. However we encourage you to use operators because we believe that this will increase the readability of your code, which is very important. To each their own!
Before finishing we can take an interactive look at our pipeline.
```
model
```
# Tutorial - Time Series Forecasting - Autoregression (AR)
The goal is to forecast time series with the Autoregression (AR) approach: 1) JetRail Commuter, 2) Air Passengers, 3) Function Autoregression with Air Passengers, and 4) Function Autoregression with Wine Sales.
Reference: Jason Brownlee - https://machinelearningmastery.com/time-series-forecasting-methods-in-python-cheat-sheet/
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import datetime
import warnings
warnings.filterwarnings("ignore")
# Load File
url = 'https://raw.githubusercontent.com/tristanga/Machine-Learning/master/Data/JetRail%20Avg%20Hourly%20Traffic%20Data%20-%202012-2013.csv'
df = pd.read_csv(url)
df.info()
df.Datetime = pd.to_datetime(df.Datetime,format='%Y-%m-%d %H:%M')
df.index = df.Datetime
```
# Autoregression (AR) Approach with JetRail
The autoregression (AR) method models the next step in the sequence as a linear function of the observations at prior time steps.
The notation for the model involves specifying the order of the model p as a parameter to the AR function, e.g. AR(p). For example, AR(1) is a first-order autoregression model.
The method is suitable for univariate time series without trend and seasonal components.
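For reference, an AR model of order $p$ predicts the next value as a linear combination of the previous $p$ observations plus a noise term:
$$ y_t = c + \phi_1 y_{t-1} + \phi_2 y_{t-2} + \dots + \phi_p y_{t-p} + \varepsilon_t $$
where the coefficients $\phi_i$ and the constant $c$ are estimated from the training data.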
```
#Split Train Test
import math
total_size=len(df)
split = 10392 / 11856
train_size=math.floor(split*total_size)
train=df.head(train_size)
test=df.tail(len(df) -train_size)
from statsmodels.tsa.ar_model import AR
model = AR(train.Count)
fit1 = model.fit()
y_hat = test.copy()
y_hat['AR'] = fit1.predict(start=len(train), end=len(train)+len(test)-1, dynamic=False)
#Plotting data
plt.figure(figsize=(12,8))
plt.plot(train.index, train['Count'], label='Train')
plt.plot(test.index,test['Count'], label='Test')
plt.plot(y_hat.index,y_hat['AR'], label='AR')
plt.legend(loc='best')
plt.title("Autoregression (AR) Forecast")
plt.show()
```
# RMSE Calculation
```
from sklearn.metrics import mean_squared_error
from math import sqrt
rms = sqrt(mean_squared_error(test.Count, y_hat.AR))
print('RMSE = '+str(rms))
```
# Autoregression (AR) Approach with Air Passengers
```
# Subsetting
url = 'https://raw.githubusercontent.com/tristanga/Machine-Learning/master/Data/International%20Airline%20Passengers.csv'
df = pd.read_csv(url, sep =";")
df.info()
df.Month = pd.to_datetime(df.Month,format='%Y-%m')
df.index = df.Month
#df.head()
#Creating train and test set
import math
total_size=len(df)
train_size=math.floor(0.7*total_size) #(70% Dataset)
train=df.head(train_size)
test=df.tail(len(df) -train_size)
#train.info()
#test.info()
from statsmodels.tsa.ar_model import AR
# Create prediction table
y_hat = test.copy()
model = AR(train['Passengers'])
fit1 = model.fit()
y_hat['AR'] = fit1.predict(start=len(train), end=len(train)+len(test)-1, dynamic=False)
y_hat.describe()
plt.figure(figsize=(12,8))
plt.plot(train.index, train['Passengers'], label='Train')
plt.plot(test.index,test['Passengers'], label='Test')
plt.plot(y_hat.index,y_hat['AR'], label='AR')
plt.legend(loc='best')
plt.title("Autoregression (AR)")
plt.show()
from sklearn.metrics import mean_squared_error
from math import sqrt
rms = sqrt(mean_squared_error(test.Passengers, y_hat.AR))
print('RMSE = '+str(rms))
```
# Function Autoregression (AR) Approach with variables
```
def AR_forecasting(mydf,colval,split):
#print(split)
import math
from statsmodels.tsa.api import Holt
from sklearn.metrics import mean_squared_error
from math import sqrt
global y_hat, train, test
total_size=len(mydf)
train_size=math.floor(split*total_size) #(70% Dataset)
train=mydf.head(train_size)
test=mydf.tail(len(mydf) -train_size)
y_hat = test.copy()
model = AR(train[colval])
fit1 = model.fit()
y_hat['AR'] = fit1.predict(start=len(train), end=len(train)+len(test)-1, dynamic=False)
plt.figure(figsize=(12,8))
plt.plot(train.index, train[colval], label='Train')
plt.plot(test.index,test[colval], label='Test')
plt.plot(y_hat.index,y_hat['AR'], label='AR')
plt.legend(loc='best')
plt.title("Autoregression (AR) Forecast")
plt.show()
rms = sqrt(mean_squared_error(test[colval], y_hat.AR))
print('RMSE = '+str(rms))
AR_forecasting(df,'Passengers',0.7)
```
# Testing Function Autoregression (AR) Approach with Wine Dataset
```
url = 'https://raw.githubusercontent.com/tristanga/Data-Cleaning/master/Converting%20Time%20Series/Wine_Sales_R_Dataset.csv'
df = pd.read_csv(url)
df.info()
df.Date = pd.to_datetime(df.Date,format='%Y-%m-%d')
df.index = df.Date
AR_forecasting(df,'Sales',0.7)
```
| true | code | 0.467696 | null | null | null | null |
|
# Tune TensorFlow Serving
## Guidelines
### CPU-only
If your system is CPU-only (no GPU), then consider the following values:
* `num_batch_threads` equal to the number of CPU cores
* `max_batch_size` to infinity (ie. MAX_INT)
* `batch_timeout_micros` to 0.
Then experiment with batch_timeout_micros values in the 1-10 millisecond (1000-10000 microsecond) range, while keeping in mind that 0 may be the optimal value.
### GPU
If your model uses a GPU device for part or all of your its inference work, consider the following value:
* `num_batch_threads` to the number of CPU cores.
* `batch_timeout_micros` to infinity while tuning `max_batch_size` to achieve the desired balance between throughput and average latency. Consider values in the hundreds or thousands.
For online serving, tune `batch_timeout_micros` to rein in tail latency.
The idea is that batches normally get filled to max_batch_size, but occasionally when there is a lapse in incoming requests, to avoid introducing a latency spike it makes sense to process whatever's in the queue even if it represents an underfull batch.
The best value for `batch_timeout_micros` is typically a few milliseconds, and depends on your context and goals.
Zero is a value to consider as it works well for some workloads. For bulk-processing batch jobs, choose a large value, perhaps a few seconds, to ensure good throughput but not wait too long for the final (and likely underfull) batch.
## Close TensorFlow Serving and Load Test Terminals
## Open a Terminal through Jupyter Notebook
### (Menu Bar -> File -> New...)

## Enable Request Batching
## Start TensorFlow Serving in Separate Terminal
The params are as follows:
* `port` for TensorFlow Serving (int)
* `model_name` (anything)
* `model_base_path` (/path/to/model/ above all versioned sub-directories)
* `enable_batching` (true|false)
```
tensorflow_model_server \
--port=9000 \
--model_name=linear \
--model_base_path=/root/models/linear_fully_optimized/cpu \
--batching_parameters_file=/root/config/tf_serving/batch_config.txt \
--enable_batching=true \
```
### `batch_config.txt`
* `num_batch_threads` (usually equal to the number of CPU cores or a multiple thereof)
* `max_batch_size` (# of requests - start with infinity, tune down to find the right balance between latency and throughput)
* `batch_timeout_micros` (minimum batch window duration)
```
num_batch_threads { value: 100 }
max_batch_size { value: 99999999 }
batch_timeout_micros { value: 100000 }
```
## Start Load Test in the Terminal
```
loadtest high
```
Notice the throughput and avg/min/max latencies:
```
summary ... = 301.1/s Avg: 227 Min: 3 Max: 456 Err: 0 (0.00%)
```
## Modify Request Batching Parameters, Repeat Load Test
Gain intuition on the performance impact of changing the request batching parameters.
| true | code | 0.738592 | null | null | null | null |
|
# Bayesian Optimization
[Bayesian optimization](https://en.wikipedia.org/wiki/Bayesian_optimization) is a powerful strategy for minimizing (or maximizing) objective functions that are costly to evaluate. It is an important component of [automated machine learning](https://en.wikipedia.org/wiki/Automated_machine_learning) toolboxes such as [auto-sklearn](https://automl.github.io/auto-sklearn/stable/), [auto-weka](http://www.cs.ubc.ca/labs/beta/Projects/autoweka/), and [scikit-optimize](https://scikit-optimize.github.io/), where Bayesian optimization is used to select model hyperparameters. Bayesian optimization is used for a wide range of other applications as well; as cataloged in the review [2], these include interactive user-interfaces, robotics, environmental monitoring, information extraction, combinatorial optimization, sensor networks, adaptive Monte Carlo, experimental design, and reinforcement learning.
## Problem Setup
We are given a minimization problem
$$ x^* = \text{arg}\min \ f(x), $$
where $f$ is a fixed objective function that we can evaluate pointwise.
Here we assume that we do _not_ have access to the gradient of $f$. We also
allow for the possibility that evaluations of $f$ are noisy.
To solve the minimization problem, we will construct a sequence of points $\{x_n\}$ that converge to $x^*$. Since we implicitly assume that we have a fixed budget (say 100 evaluations), we do not expect to find the exact minumum $x^*$: the goal is to get the best approximate solution we can given the allocated budget.
The Bayesian optimization strategy works as follows:
1. Place a prior on the objective function $f$. Each time we evaluate $f$ at a new point $x_n$, we update our model for $f(x)$. This model serves as a surrogate objective function and reflects our beliefs about $f$ (in particular it reflects our beliefs about where we expect $f(x)$ to be close to $f(x^*)$). Since we are being Bayesian, our beliefs are encoded in a posterior that allows us to systematically reason about the uncertainty of our model predictions.
2. Use the posterior to derive an "acquisition" function $\alpha(x)$ that is easy to evaluate and differentiate (so that optimizing $\alpha(x)$ is easy). In contrast to $f(x)$, we will generally evaluate $\alpha(x)$ at many points $x$, since doing so will be cheap.
3. Repeat until convergence:
+ Use the acquisition function to derive the next query point according to
$$ x_{n+1} = \text{arg}\min \ \alpha(x). $$
+ Evaluate $f(x_{n+1})$ and update the posterior.
A good acquisition function should make use of the uncertainty encoded in the posterior to encourage a balance between exploration—querying points where we know little about $f$—and exploitation—querying points in regions we have good reason to think $x^*$ may lie. As the iterative procedure progresses our model for $f$ evolves and so does the acquisition function. If our model is good and we've chosen a reasonable acquisition function, we expect that the acquisition function will guide the query points $x_n$ towards $x^*$.
In this tutorial, our model for $f$ will be a Gaussian process. In particular we will see how to use the [Gaussian Process module](http://docs.pyro.ai/en/0.3.1/contrib.gp.html) in Pyro to implement a simple Bayesian optimization procedure.
```
import matplotlib.gridspec as gridspec
import matplotlib.pyplot as plt
import torch
import torch.autograd as autograd
import torch.optim as optim
from torch.distributions import constraints, transform_to
import pyro
import pyro.contrib.gp as gp
assert pyro.__version__.startswith('1.5.2')
pyro.set_rng_seed(1)
```
## Define an objective function
For the purposes of demonstration, the objective function we are going to consider is the [Forrester et al. (2008) function](https://www.sfu.ca/~ssurjano/forretal08.html):
$$f(x) = (6x-2)^2 \sin(12x-4), \quad x\in [0, 1].$$
This function has both a local minimum and a global minimum. The global minimum is at $x^* = 0.75725$.
```
def f(x):
return (6 * x - 2)**2 * torch.sin(12 * x - 4)
```
Let's begin by plotting $f$.
```
x = torch.linspace(0, 1)
plt.figure(figsize=(8, 4))
plt.plot(x.numpy(), f(x).numpy())
plt.show()
```
## Setting a Gaussian Process prior
[Gaussian processes](https://en.wikipedia.org/wiki/Gaussian_process) are a popular choice for a function priors due to their power and flexibility. The core of a Gaussian Process is its covariance function $k$, which governs the similarity of $f(x)$ for pairs of input points. Here we will use a Gaussian Process as our prior for the objective function $f$. Given inputs $X$ and the corresponding noisy observations $y$, the model takes the form
$$f\sim\mathrm{MultivariateNormal}(0,k(X,X)),$$
$$y\sim f+\epsilon,$$
where $\epsilon$ is i.i.d. Gaussian noise and $k(X,X)$ is a covariance matrix whose entries are given by $k(x,x^\prime)$ for each pair of inputs $(x,x^\prime)$.
We choose the [Matern](https://en.wikipedia.org/wiki/Mat%C3%A9rn_covariance_function) kernel with $\nu = \frac{5}{2}$ (as suggested in reference [1]). Note that the popular [RBF](https://en.wikipedia.org/wiki/Radial_basis_function_kernel) kernel, which is used in many regression tasks, results in a function prior whose samples are infinitely differentiable; this is probably an unrealistic assumption for most 'black-box' objective functions.
```
# initialize the model with four input points: 0.0, 0.33, 0.66, 1.0
X = torch.tensor([0.0, 0.33, 0.66, 1.0])
y = f(X)
gpmodel = gp.models.GPRegression(X, y, gp.kernels.Matern52(input_dim=1),
noise=torch.tensor(0.1), jitter=1.0e-4)
```
The following helper function `update_posterior` will take care of updating our `gpmodel` each time we evaluate $f$ at a new value $x$.
```
def update_posterior(x_new):
y = f(x_new) # evaluate f at new point.
X = torch.cat([gpmodel.X, x_new]) # incorporate new evaluation
y = torch.cat([gpmodel.y, y])
gpmodel.set_data(X, y)
# optimize the GP hyperparameters using Adam with lr=0.001
optimizer = torch.optim.Adam(gpmodel.parameters(), lr=0.001)
gp.util.train(gpmodel, optimizer)
```
## Define an acquisition function
There are many reasonable options for the acquisition function (see references [1] and [2] for a list of popular choices and a discussion of their properties). Here we will use one that is 'simple to implement and interpret,' namely the 'Lower Confidence Bound' acquisition function.
It is given by
$$
\alpha(x) = \mu(x) - \kappa \sigma(x)
$$
where $\mu(x)$ and $\sigma(x)$ are the mean and square root variance of the posterior at the point $x$, and the arbitrary constant $\kappa>0$ controls the trade-off between exploitation and exploration. This acquisition function will be minimized for choices of $x$ where either: i) $\mu(x)$ is small (exploitation); or ii) where $\sigma(x)$ is large (exploration). A large value of $\kappa$ means that we place more weight on exploration because we prefer candidates $x$ in areas of high uncertainty. A small value of $\kappa$ encourages exploitation because we prefer candidates $x$ that minimize $\mu(x)$, which is the mean of our surrogate objective function. We will use $\kappa=2$.
```
def lower_confidence_bound(x, kappa=2):
mu, variance = gpmodel(x, full_cov=False, noiseless=False)
sigma = variance.sqrt()
return mu - kappa * sigma
```
The final component we need is a way to find (approximate) minimizing points $x_{\rm min}$ of the acquisition function. There are several ways to proceed, including gradient-based and non-gradient-based techniques. Here we will follow the gradient-based approach. One of the possible drawbacks of gradient descent methods is that the minimization algorithm can get stuck at a local minimum. In this tutorial, we adopt a (very) simple approach to address this issue:
- First, we seed our minimization algorithm with 5 different values: i) one is chosen to be $x_{n-1}$, i.e. the candidate $x$ used in the previous step; and ii) four are chosen uniformly at random from the domain of the objective function.
- We then run the minimization algorithm to approximate convergence for each seed value.
- Finally, from the five candidate $x$s identified by the minimization algorithm, we select the one that minimizes the acquisition function.
Please refer to reference [2] for a more detailed discussion of this problem in Bayesian Optimization.
```
def find_a_candidate(x_init, lower_bound=0, upper_bound=1):
# transform x to an unconstrained domain
constraint = constraints.interval(lower_bound, upper_bound)
unconstrained_x_init = transform_to(constraint).inv(x_init)
unconstrained_x = unconstrained_x_init.clone().detach().requires_grad_(True)
minimizer = optim.LBFGS([unconstrained_x], line_search_fn='strong_wolfe')
def closure():
minimizer.zero_grad()
x = transform_to(constraint)(unconstrained_x)
y = lower_confidence_bound(x)
autograd.backward(unconstrained_x, autograd.grad(y, unconstrained_x))
return y
minimizer.step(closure)
# after finding a candidate in the unconstrained domain,
# convert it back to original domain.
x = transform_to(constraint)(unconstrained_x)
return x.detach()
```
## The inner loop of Bayesian Optimization
With the various helper functions defined above, we can now encapsulate the main logic of a single step of Bayesian Optimization in the function `next_x`:
```
def next_x(lower_bound=0, upper_bound=1, num_candidates=5):
candidates = []
values = []
x_init = gpmodel.X[-1:]
for i in range(num_candidates):
x = find_a_candidate(x_init, lower_bound, upper_bound)
y = lower_confidence_bound(x)
candidates.append(x)
values.append(y)
x_init = x.new_empty(1).uniform_(lower_bound, upper_bound)
argmin = torch.min(torch.cat(values), dim=0)[1].item()
return candidates[argmin]
```
## Running the algorithm
To illustrate how Bayesian Optimization works, we make a convenient plotting function that will help us visualize our algorithm's progress.
```
def plot(gs, xmin, xlabel=None, with_title=True):
xlabel = "xmin" if xlabel is None else "x{}".format(xlabel)
Xnew = torch.linspace(-0.1, 1.1)
ax1 = plt.subplot(gs[0])
ax1.plot(gpmodel.X.numpy(), gpmodel.y.numpy(), "kx") # plot all observed data
with torch.no_grad():
loc, var = gpmodel(Xnew, full_cov=False, noiseless=False)
sd = var.sqrt()
ax1.plot(Xnew.numpy(), loc.numpy(), "r", lw=2) # plot predictive mean
ax1.fill_between(Xnew.numpy(), loc.numpy() - 2*sd.numpy(), loc.numpy() + 2*sd.numpy(),
color="C0", alpha=0.3) # plot uncertainty intervals
ax1.set_xlim(-0.1, 1.1)
ax1.set_title("Find {}".format(xlabel))
if with_title:
ax1.set_ylabel("Gaussian Process Regression")
ax2 = plt.subplot(gs[1])
with torch.no_grad():
# plot the acquisition function
ax2.plot(Xnew.numpy(), lower_confidence_bound(Xnew).numpy())
# plot the new candidate point
ax2.plot(xmin.numpy(), lower_confidence_bound(xmin).numpy(), "^", markersize=10,
label="{} = {:.5f}".format(xlabel, xmin.item()))
ax2.set_xlim(-0.1, 1.1)
if with_title:
ax2.set_ylabel("Acquisition Function")
ax2.legend(loc=1)
```
Our surrogate model `gpmodel` already has 4 function evaluations at its disposal; however, we have yet to optimize the GP hyperparameters. So we do that first. Then in a loop we call the `next_x` and `update_posterior` functions repeatedly. The following plot illustrates how Gaussian Process posteriors and the corresponding acquisition functions change at each step in the algorith. Note how query points are chosen both for exploration and exploitation.
```
plt.figure(figsize=(12, 30))
outer_gs = gridspec.GridSpec(5, 2)
optimizer = torch.optim.Adam(gpmodel.parameters(), lr=0.001)
gp.util.train(gpmodel, optimizer)
for i in range(8):
xmin = next_x()
gs = gridspec.GridSpecFromSubplotSpec(2, 1, subplot_spec=outer_gs[i])
plot(gs, xmin, xlabel=i+1, with_title=(i % 2 == 0))
update_posterior(xmin)
plt.show()
```
Because we have assumed that our observations contain noise, it is improbable that we will find the exact minimizer of the function $f$. Still, with a relatively small budget of evaluations (8) we see that the algorithm has converged to very close to the global minimum at $x^* = 0.75725$.
While this tutorial is only intended to be a brief introduction to Bayesian Optimization, we hope that we have been able to convey the basic underlying ideas. Consider watching the lecture by Nando de Freitas [3] for an excellent exposition of the basic theory. Finally, the reference paper [2] gives a review of recent research on Bayesian Optimization, together with many discussions about important technical details.
## References
[1] `Practical bayesian optimization of machine learning algorithms`,<br />
Jasper Snoek, Hugo Larochelle, and Ryan P. Adams
[2] `Taking the human out of the loop: A review of bayesian optimization`,<br />
Bobak Shahriari, Kevin Swersky, Ziyu Wang, Ryan P. Adams, and Nando De Freitas
[3] [Machine learning - Bayesian optimization and multi-armed bandits](https://www.youtube.com/watch?v=vz3D36VXefI)
| true | code | 0.798482 | null | null | null | null |
|
# Exploratory Data Analysis
In this notebook, I have illuminated some of the strategies that one can use to explore the data and gain some insights about it.
We will start from finding metadata about the data, to determining what techniques to use, to getting some important insights about the data. This is based on the IBM's Data Analysis with Python course on Coursera.
## The Problem
The problem is to find the variables that impact the car price. For this problem, we will use a real-world dataset that details information about cars.
The dataset used is an open-source dataset made available by Jeffrey C. Schlimmer. The one used in this notebook is hosted on the IBM Cloud. The dataset provides details of some cars. It includes properties like make, horse-power, price, wheel-type and so on.
## Loading data and finding the metadata
Import libraries
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from scipy import stats
%matplotlib inline
```
Load the data as pandas dataframe
```
path='https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-DA0101EN-SkillsNetwork/labs/Data%20files/automobileEDA.csv'
df = pd.read_csv(path)
df.head()
```
### Metadata: The columns's types
Finding column's types is an important step. It serves two purposes:
1. See if we need to convert some data. For example, price may be in string instead of numbers. This is very important as it could throw everything that we do afterwards off.
2. Find out what type of analysis we need to do with what column. After fixing the problems given above, the type of the object is often a great indicator of whether the data is categorical or numerical. This is important as it would determine what kind of exploratory analysis we can and want to do.
To find out the type, we can simply use `.dtypes` property of the dataframe. Here's an example using the dataframe we loaded above.
```
df.dtypes
```
From the results above, we can see that we can roughly divide the types into two categories: numeric (int64 and float64) and object. Although object type can contain lots of things, it's used often to store string variables. A quick glance at the table tells us that there's no glaring errors in object types.
Now we divide them into two categories: numerical variables and categorical variables. Numerical, as the name states, are the variables that hold numerical data. Categorical variables hold string that describes a certain property of the data (such as Audi as the make).
Make a special note that our target variable, price, is numerical. So the relationships we would be exploring would be between numerical-and-numerical data and numerical-and-categorical data.
## Relationship between Numerical Data
First we will explore the relationship between two numerical data and see if we can learn some insights out of it.
In the beginning, it's helpful to get the correlation between the variables. For this, we can use the `corr()` method to find out the correlation between all the variables.
Do note that the method finds out the Pearson correlation. Natively, pandas also support Spearman and the Kendall Tau correlation. You can also pass in a custom callable if you want. Check out the docs for more info.
Here's how to do it with the dataframe that we have:
```
df.corr()
```
Note that the diagonal elements are always one; because correlation with itself is always one.
Now, it seems somewhat daunting, and frankly, unneccessary to have this big of a table and correlation between things we don't care (say bore and stroke). If we want to find out the correlation with just price, using `corrwith()` method is helpful.
Here's how to do it:
```
corr = df.corrwith(df['price'])
# Prettify
pd.DataFrame(data=corr.values, index=corr.index, columns=['Correlation'])
```
From the table above, we have some idea about what can we expect the relationship should be like.
As a refresher, in Pearson correlation, values range in [-1, 1] with -1 and 1 implying a perfect linear relationship and 0 implying none. A positive value implies a positive relationship (value increase in response to increment) and negative value implies negative relationship (value decrease in response to increment).
The next step is to have a more visual outlook on the relationship.
### Visualizing Relationships
Continuous numerical variables are variables that may contain any value within some range. In pandas dtype, continuous numerical variables can have the type "int64" or "float64".
Scatterplots are a great way to visualize these variables is by using scatterplots.
To take it further, it's better to use a scatter plot with a regression line. This should also be able to provide us with some preliminary ways to test our hypothesis of the relationship between them.
In this notebook, we would be using the `regplot()` function in the `seaborn` package.
Below are some examples.
<h4>Positive linear relationship</h4>
Let's plot "engine-size" vs "price" since the correlation between them seems strong.
```
plt.figure(figsize=(5,5))
sns.regplot(x="engine-size", y="price", data=df);
```
As the engine-size goes up, the price goes up. This indicates a decent positive direct correlation between these two variables. Thus, we can say that the engine size is a good predictor of price since the regression line is almost a perfect diagonal line.
We can also check this with the Pearson correlation we got above. It's 0.87, which means sense.
Let's also try highway mpg too since the correlation between them is -0.7
```
sns.regplot(x="highway-mpg", y="price", data=df);
```
The graph shows a decent negative realtionship. So, it could be a potential indicator. Although, it seems that the relationship isn't exactly normal--given the curve of the points.
Let's try a higher order regression line.
```
sns.regplot(x="highway-mpg", y="price", data=df, order=2);
```
There. It seems much better.
### Weak Linear Relationship
Not all variables have to be correlated. Let's check out the graph of "Peak-rpm" as a predictor variable for "price".
```
sns.regplot(x="peak-rpm", y="price", data=df);
```
From the graph, it's clear that peak rpm is a bad indicator of price. It seems that there is no relationship between them. It seems almost random.
A quick check at the correlation value confirms this. The value is -0.1. It's very close to zero, implying no relationship.
Although there are cases in which low value can be misguiding, it's usually only for relationships that show a non-linear relationship in which value goes down and up. But the graph confirms there is none.
## Relationship between Numerical and Categorical data
Categorical variables, like their name imply, divide the data into certain categories. They essentially describe a 'characteristic' of the data unit, and are often selected from a small group of categories.
Although they commonly have "object" type, it's possible to have them has "int64" too (for example 'Level of happiness').
### Visualizing with Boxplots
Boxplots are a great way to visualize such relationships. Boxplots essentially show the spread of the data. You can use the `boxplot()` function in the seaborn package. Alternatively, you can use boxen or violin plots too.
Here's an example by plotting relationship between "body-style" and "price"
```
sns.boxplot(x="body-style", y="price", data=df);
```
We can infer that there is likely to be no significant relationship as there is a decent over lap.
Let's examine engine "engine-location" and "price"
```
sns.boxplot(x="engine-location", y="price", data=df);
```
Although there are a lot of outliers for the front, the distribution of price between these two engine-location categories is distinct enough to take engine-location as a potential good predictor of price.
Let's examine "drive-wheels" and "price".
```
sns.boxplot(x="drive-wheels", y="price", data=df);
```
<p>Here we see that the distribution of price between the different drive-wheels categories differs; as such drive-wheels could potentially be a predictor of price.</p>
### Statistical method to checking for a significant realtionship - ANOVA
Although visualisation is helpful, it does not give us a concrete and certain vision in this (and often in others) case. So, it follows that we would want a metric to evaluate it by. For correlation between categorical and continuous variable, there are various tests. ANOVA family of tests is a common one to use.
The Analysis of Variance (ANOVA) is a statistical method used to test whether there are significant differences between the means of two or more groups.
Do note that ANOVA is an _omnibus_ test statistic and it can't tell you what groups are the ones that have correlation among them. Only that there are at least two groups with a significant difference.
In python, we can calculate the ANOVA statistic fairly easily using the `scipy.stats` module. The function `f_oneway()` calculates and returns:
__F-test score__: ANOVA assumes the means of all groups are the same, calculates how much the actual means deviate from the assumption, and reports it as the F-test score. A larger score means there is a larger difference between the means. Although the degree of the 'largeneess' differs from data to data. You can use the F-table to find out the critical F-value by using the significance level and degrees of freedom for numerator and denominator and compare it with the calculated F-test score.
__P-value__: P-value tells how statistically significant is our calculated score value.
If the variables are strongly correlated, the expectation is to have ANOVA to return a sizeable F-test score and a small p-value.
#### Drive Wheels
Since ANOVA analyzes the difference between different groups of the same variable, the `groupby()` function will come in handy. With this, we can easily and concisely seperate the dataset into groups of drive-wheels. Essentially, the function allows us to split the dataset into groups and perform calculations on groups moving forward. Check out Grouping below for more explanation.
Let's see if different types 'drive-wheels' impact 'price', we group the data.
```
grouped_anova = df[['drive-wheels', 'price']].groupby(['drive-wheels'])
grouped_anova.head(2)
```
We can obtain the values of the method group using the method `get_group()`
```
grouped_anova.get_group('4wd')['price']
```
Finally, we use the function `f_oneway()` to obtain the F-test score and P-value.
```
# ANOVA
f_val, p_val = stats.f_oneway(grouped_anova.get_group('fwd')['price'], grouped_anova.get_group('rwd')['price'], grouped_anova.get_group('4wd')['price'])
print( "ANOVA results: F=", f_val, ", P =", p_val)
```
From the result, we can see that we have a large F-test score and a very small p-value. Still, we need to check if all three tested groups are highly correlated?
#### Separately: fwd and rwd
```
f_val, p_val = stats.f_oneway(grouped_anova.get_group('fwd')['price'], grouped_anova.get_group('rwd')['price'])
print( "ANOVA results: F=", f_val, ", P =", p_val )
```
Seems like the result is significant and they are correlated. Let's examine the other groups
#### 4wd and rwd
```
f_val, p_val = stats.f_oneway(grouped_anova.get_group('4wd')['price'], grouped_anova.get_group('rwd')['price'])
print( "ANOVA results: F=", f_val, ", P =", p_val)
```
<h4>4wd and fwd</h4>
```
f_val, p_val = stats.f_oneway(grouped_anova.get_group('4wd')['price'], grouped_anova.get_group('fwd')['price'])
print("ANOVA results: F=", f_val, ", P =", p_val)
```
## Relationship between Categorical Data: Corrected Cramer's V
A good way to test relation between two categorical variable is Corrected Cramer's V.
**Note:** A p-value close to zero means that our variables are very unlikely to be completely unassociated in some population. However, this does not mean the variables are strongly associated; a weak association in a large sample size may also result in p = 0.000.
**General Rule of Thumb:**
* V ∈ [0.1,0.3]: weak association
* V ∈ [0.4,0.5]: medium association
* V > 0.5: strong association
Here's how to do it in python:
```python
import scipy.stats as ss
import pandas as pd
import numpy as np
def cramers_corrected_stat(x, y):
""" calculate Cramers V statistic for categorial-categorial association.
uses correction from Bergsma and Wicher,
Journal of the Korean Statistical Society 42 (2013): 323-328
"""
result = -1
if len(x.value_counts()) == 1:
print("First variable is constant")
elif len(y.value_counts()) == 1:
print("Second variable is constant")
else:
conf_matrix = pd.crosstab(x, y)
if conf_matrix.shape[0] == 2:
correct = False
else:
correct = True
chi2, p = ss.chi2_contingency(conf_matrix, correction=correct)[0:2]
n = sum(conf_matrix.sum())
phi2 = chi2/n
r, k = conf_matrix.shape
phi2corr = max(0, phi2 - ((k-1)*(r-1))/(n-1))
rcorr = r - ((r-1)**2)/(n-1)
kcorr = k - ((k-1)**2)/(n-1)
result = np.sqrt(phi2corr / min((kcorr-1), (rcorr-1)))
return round(result, 6), round(p, 6)
```
## Descriptive Statistical Analysis
Although the insights gained above are significant, it's clear we need more work.
Since we are exploring the data, performing some common and useful descriptive statistical analysis would be nice. However, there are a lot of them and would require a lot of work to do them by scratch. Fortunately, `pandas` library has a neat method that computes all of them for us.
The `describe()` method, when invoked on a dataframe automatically computes basic statistics for all continuous variables. Do note that any NaN values are automatically skipped in these statistics. By default, it will show stats for numerical data.
Here's what it will show:
* Count of that variable
* Mean
* Standard Deviation (std)
* Minimum Value
* IQR (Interquartile Range: 25%, 50% and 75%)
* Maximum Value
If you want, you can change the percentiles too. Check out the docs for that.
Here's how to do it in our dataframe:
```
df.describe()
```
To get the information about categorical variables, we need to specifically tell it to pandas to include them.
For categorical variables, it shows:
* Count
* Unique values
* The most common value or 'top'
* Frequency of the 'top'
```
df.describe(include=['object'])
```
### Value Counts
Sometimes, we need to understand the distribution of the categorical data. This could mean understanding how many units of each characteristic/variable we have. `value_counts()` is a method in pandas that can help with it. If we use it with a series, it will give us the unique values and how many of them exist.
_Caution:_ Using it with DataFrame works like count of unique rows by combination of all columns (like in SQL). This may or may not be what you want. For example, using it with drive-wheels and engine-location would give you the number of rows with unique pair of values.
Here's an example of doing it with the drive-wheels column.
```
df['drive-wheels'].value_counts().to_frame()
```
`.to_frame()` method is added to make it into a dataframe, hence making it look better.
You can play around and rename the column and index name if you want.
We can repeat the above process for the variable 'engine-location'.
```
df['engine-location'].value_counts().to_frame()
```
Examining the value counts of the engine location would not be a good predictor variable for the price. This is because we only have three cars with a rear engine and 198 with an engine in the front, this result is skewed. Thus, we are not able to draw any conclusions about the engine location.
## Grouping
Grouping is a useful technique to explore the data. With grouping, we can split data and apply various transforms. For example, we can find out the mean of different body styles. This would help us to have more insight into whether there's a relationsip between our target variable and the variable we are using grouping on.
Although oftenly used on categorical data, grouping can also be used with numerical data by seperating them into categories. For example we might seperate car by prices into affordable and luxury groups.
In pandas, we can use the `groupby()` method.
Let's try it with the 'drive-wheels' variable. First we will find out how many unique values there are. We do that by `unique()` method.
```
df['drive-wheels'].unique()
```
If we want to know, on average, which type of drive wheel is most valuable, we can group "drive-wheels" and then average them.
```
df[['drive-wheels','body-style','price']].groupby(['drive-wheels']).mean()
```
From our data, it seems rear-wheel drive vehicles are, on average, the most expensive, while 4-wheel and front-wheel are approximately the same in price.
It's also possible to group with multiple variables. For example, let's group by both 'drive-wheels' and 'body-style'. This groups the dataframe by the unique combinations 'drive-wheels' and 'body-style'.
Let's store it in the variable `grouped_by_wheels_and_body`.
```
grouped_by_wheels_and_body = df[['drive-wheels','body-style','price']].groupby(['drive-wheels','body-style']).mean()
grouped_by_wheels_and_body
```
Although incredibly useful, it's a little hard to read. It's better to convert it to a pivot table.
A pivot table is like an Excel spreadsheet, with one variable along the column and another along the row. There are various ways to do so. A way to do that is to use the method `pivot()`. However, with groups like the one above (multi-index), one can simply call the `unstack()` method.
```
grouped_by_wheels_and_body = grouped_by_wheels_and_body.unstack()
grouped_by_wheels_and_body
```
Often, we won't have data for some of the pivot cells. Often, it's filled with the value 0, but any other value could potentially be used as well. This could be mean or some other flag.
```
grouped_by_wheels_and_body.fillna(0)
```
Let's do the same for body-style only
```
df[['price', 'body-style']].groupby('body-style').mean()
```
### Visualizing Groups
Heatmaps are a great way to visualize groups. They can show relationships clearly in this case.
Do note that you need to be careful with the color schemes. Since chosing appropriate colorscheme is not only appropriate for your 'story' of the data, it is also important since it can impact the perception of the data.
[This resource](https://matplotlib.org/tutorials/colors/colormaps.html) gives a great idea on what to choose as a color scheme and when it's appropriate. It also has samples of the scheme below too for a quick preview along with when should one use them.
Here's an example of using it with the pivot table we created with the `seaborn` package.
```
sns.heatmap(grouped_by_wheels_and_body, cmap="Blues");
```
This heatmap plots the target variable (price) proportional to colour with respect to the variables 'drive-wheel' and 'body-style' in the vertical and horizontal axis respectively. This allows us to visualize how the price is related to 'drive-wheel' and 'body-style'.
## Correlation and Causation
Correlation and causation are terms that are used often and confused with each other--or worst considered to imply the other. Here's a quick overview of them:
__Correlation__: The degree of association (or resemblance) of variables with each other.
__Causation__: A relationship of cause and effect between variables.
It is important to know the difference between these two.
Note that correlation does __not__ imply causation.
Determining correlation is much simpler. We can almost always use methods such as Pearson Correlation, ANOVA method, and graphs. Determining causation may require independent experimentation.
### Pearson Correlation
Described earlier, Pearson Correlation is great way to measure linear dependence between two variables. It's also the default method in the method corr.
```
df.corr()
```
### Cramer's V
Cramer's V is a great method to calculate the relationship between two categorical variables. Read above about Cramer's V to get a better estimate.
**General Rule of Thumb:**
* V ∈ [0.1,0.3]: weak association
* V ∈ [0.4,0.5]: medium association
* V > 0.5: strong association
### ANOVA Method
As discussed previously, ANOVA method is great to conduct analysis to determine whether there's a significant realtionship between categorical and continous variables. Check out the ANOVA section above for more details.
Now, just knowing the correlation statistics is not enough. We also need to know whether the relationship is statistically significant or not. We can use p-value for that.
### P-value
In very simple terms, p-value checks the probability whether the result we have could be just a random chance. For example, for a p-value of 0.05, we are certain that our results are insignificant about 5% of time and are significant 95% of the time.
It's recommended to define a tolerance level of the p-value beforehand. Here's some common interpretations of p-value:
* The p-value is $<$ 0.001: A strong evidence that the correlation is significant.
* The p-value is $<$ 0.05: A moderate evidence that the correlation is significant.
* The p-value is $<$ 0.1: A weak evidence that the correlation is significant.
* The p-value is $>$ 0.1: No evidence that the correlation is significant.
We can obtain this information using `stats` module in the `scipy` library.
Let's calculate it for wheel-base vs price
```
pearson_coef, p_value = stats.pearsonr(df['wheel-base'], df['price'])
print("The Pearson Correlation Coefficient is", pearson_coef, " with a P-value of P =", p_value)
```
Since the p-value is $<$ 0.001, the correlation between wheel-base and price is statistically significant, although the linear relationship isn't extremely strong (~0.585)
Let's try one more example: horsepower vs price.
```
pearson_coef, p_value = stats.pearsonr(df['horsepower'], df['price'])
print("The Pearson Correlation Coefficient is", pearson_coef, " with a P-value of P = ", p_value)
```
Since the p-value is $<$ 0.001, the correlation between horsepower and price is statistically significant, and the linear relationship is quite strong (~0.809, close to 1).
### Conclusion: Important Variables
We now have a better idea of what our data looks like and which variables are important to take into account when predicting the car price. Some more analysis later, we can find that the important variables are:
Continuous numerical variables:
* Length
* Width
* Curb-weight
* Engine-size
* Horsepower
* City-mpg
* Highway-mpg
* Wheel-base
* Bore
Categorical variables:
* Drive-wheels
If needed, we can now mone onto into building machine learning models as we now know what to feed our model.
P.S. [This medium article](https://medium.com/@outside2SDs/an-overview-of-correlation-measures-between-categorical-and-continuous-variables-4c7f85610365#:~:text=A%20simple%20approach%20could%20be,variance%20of%20the%20continuous%20variable.&text=If%20the%20variables%20have%20no,similar%20to%20the%20original%20variance) is a great resource that talks about various ways of correlation between categorical and continous variables.
## Author
By Abhinav Garg
| true | code | 0.33231 | null | null | null | null |
|
# Deep learning for Natural Language Processing
* Simple text representations, bag of words
* Word embedding and... not just another word2vec this time
* 1-dimensional convolutions for text
* Aggregating several data sources "the hard way"
* Solving ~somewhat~ real ML problem with ~almost~ end-to-end deep learning
Special thanks to Irina Golzmann for help with technical part.
# NLTK
You will require nltk v3.2 to solve this assignment
__It is really important that the version is 3.2, otherwize russian tokenizer might not work__
Install/update
* `sudo pip install --upgrade nltk==3.2`
* If you don't remember when was the last pip upgrade, `sudo pip install --upgrade pip`
If for some reason you can't or won't switch to nltk v3.2, just make sure that russian words are tokenized properly with RegeExpTokenizer.
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
# Dataset
Ex-kaggle-competition on job salary prediction

Original conest - https://www.kaggle.com/c/job-salary-prediction
### Download
Go [here](https://www.kaggle.com/c/job-salary-prediction) and download as usual
CSC cloud: data should already be here somewhere, just poke the nearest instructor.
# What's inside
Different kinds of features:
* 2 text fields - title and description
* Categorical fields - contract type, location
Only 1 binary target whether or not such advertisement contains prohibited materials
* criminal, misleading, human reproduction-related, etc
* diving into the data may result in prolonged sleep disorders
```
df = pd.read_csv("./Train_rev1.csv",sep=',')
print df.shape, df.SalaryNormalized.mean()
df[:5]
```
# Tokenizing
First, we create a dictionary of all existing words.
Assign each word a number - it's Id
```
from nltk.tokenize import RegexpTokenizer
from collections import Counter,defaultdict
tokenizer = RegexpTokenizer(r"\w+")
#Dictionary of tokens
token_counts = Counter()
#All texts
all_texts = np.hstack([df.FullDescription.values,df.Title.values])
#Compute token frequencies
for s in all_texts:
if type(s) is not str:
continue
s = s.decode('utf8').lower()
tokens = tokenizer.tokenize(s)
for token in tokens:
token_counts[token] +=1
```
### Remove rare tokens
We are unlikely to make use of words that are only seen a few times throughout the corpora.
Again, if you want to beat Kaggle competition metrics, consider doing something better.
```
#Word frequency distribution, just for kicks
_=plt.hist(token_counts.values(),range=[0,50],bins=50)
#Select only the tokens that had at least 10 occurences in the corpora.
#Use token_counts.
min_count = 5
tokens = <tokens from token_counts keys that had at least min_count occurences throughout the dataset>
token_to_id = {t:i+1 for i,t in enumerate(tokens)}
null_token = "NULL"
token_to_id[null_token] = 0
print "# Tokens:",len(token_to_id)
if len(token_to_id) < 10000:
print "Alarm! It seems like there are too few tokens. Make sure you updated NLTK and applied correct thresholds -- unless you now what you're doing, ofc"
if len(token_to_id) > 100000:
print "Alarm! Too many tokens. You might have messed up when pruning rare ones -- unless you know what you're doin' ofc"
```
### Replace words with IDs
Set a maximum length for titles and descriptions.
* If string is longer that that limit - crop it, if less - pad with zeros.
* Thus we obtain a matrix of size [n_samples]x[max_length]
* Element at i,j - is an identifier of word j within sample i
```
def vectorize(strings, token_to_id, max_len=150):
token_matrix = []
for s in strings:
if type(s) is not str:
token_matrix.append([0]*max_len)
continue
s = s.decode('utf8').lower()
tokens = tokenizer.tokenize(s)
token_ids = map(lambda token: token_to_id.get(token,0), tokens)[:max_len]
token_ids += [0]*(max_len - len(token_ids))
token_matrix.append(token_ids)
return np.array(token_matrix)
desc_tokens = vectorize(df.FullDescription.values,token_to_id,max_len = 500)
title_tokens = vectorize(df.Title.values,token_to_id,max_len = 15)
```
### Data format examples
```
print "Matrix size:",title_tokens.shape
for title, tokens in zip(df.Title.values[:3],title_tokens[:3]):
print title,'->', tokens[:10],'...'
```
__ As you can see, our preprocessing is somewhat crude. Let us see if that is enough for our network __
# Non-sequences
Some data features are categorical data. E.g. location, contract type, company
They require a separate preprocessing step.
```
#One-hot-encoded category and subcategory
from sklearn.feature_extraction import DictVectorizer
categories = []
data_cat = df[["Category","LocationNormalized","ContractType","ContractTime"]]
categories = [A list of dictionaries {"category":category_name, "subcategory":subcategory_name} for each data sample]
vectorizer = DictVectorizer(sparse=False)
df_non_text = vectorizer.fit_transform(categories)
df_non_text = pd.DataFrame(df_non_text,columns=vectorizer.feature_names_)
```
# Split data into training and test
```
#Target variable - whether or not sample contains prohibited material
target = df.is_blocked.values.astype('int32')
#Preprocessed titles
title_tokens = title_tokens.astype('int32')
#Preprocessed tokens
desc_tokens = desc_tokens.astype('int32')
#Non-sequences
df_non_text = df_non_text.astype('float32')
#Split into training and test set.
#Difficulty selector:
#Easy: split randomly
#Medium: split by companies, make sure no company is in both train and test set
#Hard: do whatever you want, but score yourself using kaggle private leaderboard
title_tr,title_ts,desc_tr,desc_ts,nontext_tr,nontext_ts,target_tr,target_ts = <define_these_variables>
```
## Save preprocessed data [optional]
* The next tab can be used to stash all the essential data matrices and get rid of the rest of the data.
* Highly recommended if you have less than 1.5GB RAM left
* To do that, you need to first run it with save_prepared_data=True, then restart the notebook and only run this tab with read_prepared_data=True.
```
save_prepared_data = True #save
read_prepared_data = False #load
#but not both at once
assert not (save_prepared_data and read_prepared_data)
if save_prepared_data:
print "Saving preprocessed data (may take up to 3 minutes)"
import pickle
with open("preprocessed_data.pcl",'w') as fout:
pickle.dump(data_tuple,fout)
with open("token_to_id.pcl",'w') as fout:
pickle.dump(token_to_id,fout)
print "done"
elif read_prepared_data:
print "Reading saved data..."
import pickle
with open("preprocessed_data.pcl",'r') as fin:
data_tuple = pickle.load(fin)
title_tr,title_ts,desc_tr,desc_ts,nontext_tr,nontext_ts,target_tr,target_ts = data_tuple
with open("token_to_id.pcl",'r') as fin:
token_to_id = pickle.load(fin)
#Re-importing libraries to allow staring noteboook from here
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
print "done"
```
# Train the monster
Since we have several data sources, our neural network may differ from what you used to work with.
* Separate input for titles
* cnn+global max or RNN
* Separate input for description
* cnn+global max or RNN
* Separate input for categorical features
* Few dense layers + some black magic if you want
These three inputs must be blended somehow - concatenated or added.
* Output: a simple regression task
```
#libraries
import lasagne
from theano import tensor as T
import theano
#3 inputs and a refere output
title_token_ids = T.matrix("title_token_ids",dtype='int32')
desc_token_ids = T.matrix("desc_token_ids",dtype='int32')
categories = T.matrix("categories",dtype='float32')
target_y = T.vector("is_blocked",dtype='float32')
```
# NN architecture
```
title_inp = lasagne.layers.InputLayer((None,title_tr.shape[1]),input_var=title_token_ids)
descr_inp = lasagne.layers.InputLayer((None,desc_tr.shape[1]),input_var=desc_token_ids)
cat_inp = lasagne.layers.InputLayer((None,nontext_tr.shape[1]), input_var=categories)
# Descriptions
#word-wise embedding. We recommend to start from some 64 and improving after you are certain it works.
descr_nn = lasagne.layers.EmbeddingLayer(descr_inp,
input_size=len(token_to_id)+1,
output_size=?)
#reshape from [batch, time, unit] to [batch,unit,time] to allow 1d convolution over time
descr_nn = lasagne.layers.DimshuffleLayer(descr_nn, [0,2,1])
descr_nn = 1D convolution over embedding, maybe several ones in a stack
#pool over time
descr_nn = lasagne.layers.GlobalPoolLayer(descr_nn,T.max)
#Possible improvements here are adding several parallel convs with different filter sizes or stacking them the usual way
#1dconv -> 1d max pool ->1dconv and finally global pool
# Titles
title_nn = <Process titles somehow (title_inp)>
# Non-sequences
cat_nn = <Process non-sequences(cat_inp)>
nn = <merge three layers into one (e.g. lasagne.layers.concat) >
nn = lasagne.layers.DenseLayer(nn,your_lucky_number)
nn = lasagne.layers.DropoutLayer(nn,p=maybe_use_me)
nn = lasagne.layers.DenseLayer(nn,1,nonlinearity=lasagne.nonlinearities.linear)
```
# Loss function
* The standard way:
* prediction
* loss
* updates
* training and evaluation functions
```
#All trainable params
weights = lasagne.layers.get_all_params(nn,trainable=True)
#Simple NN prediction
prediction = lasagne.layers.get_output(nn)[:,0]
#loss function
loss = lasagne.objectives.squared_error(prediction,target_y).mean()
#Weight optimization step
updates = <your favorite optimizer>
```
### Determinitic prediction
* In case we use stochastic elements, e.g. dropout or noize
* Compile a separate set of functions with deterministic prediction (deterministic = True)
* Unless you think there's no neet for dropout there ofc. Btw is there?
```
#deterministic version
det_prediction = lasagne.layers.get_output(nn,deterministic=True)[:,0]
#equivalent loss function
det_loss = <an excercise in copy-pasting and editing>
```
### Coffee-lation
```
train_fun = theano.function([desc_token_ids,title_token_ids,categories,target_y],[loss,prediction],updates = updates)
eval_fun = theano.function([desc_token_ids,title_token_ids,categories,target_y],[det_loss,det_prediction])
```
# Training loop
* The regular way with loops over minibatches
* Since the dataset is huge, we define epoch as some fixed amount of samples isntead of all dataset
```
# Out good old minibatch iterator now supports arbitrary amount of arrays (X,y,z)
def iterate_minibatches(*arrays,**kwargs):
batchsize=kwargs.get("batchsize",100)
shuffle = kwargs.get("shuffle",True)
if shuffle:
indices = np.arange(len(arrays[0]))
np.random.shuffle(indices)
for start_idx in range(0, len(arrays[0]) - batchsize + 1, batchsize):
if shuffle:
excerpt = indices[start_idx:start_idx + batchsize]
else:
excerpt = slice(start_idx, start_idx + batchsize)
yield [arr[excerpt] for arr in arrays]
```
### Tweaking guide
* batch_size - how many samples are processed per function call
* optimization gets slower, but more stable, as you increase it.
* May consider increasing it halfway through training
* minibatches_per_epoch - max amount of minibatches per epoch
* Does not affect training. Lesser value means more frequent and less stable printing
* Setting it to less than 10 is only meaningfull if you want to make sure your NN does not break down after one epoch
* n_epochs - total amount of epochs to train for
* `n_epochs = 10**10` and manual interrupting is still an option
Tips:
* With small minibatches_per_epoch, network quality may jump up and down for several epochs
* Plotting metrics over training time may be a good way to analyze which architectures work better.
* Once you are sure your network aint gonna crash, it's worth letting it train for a few hours of an average laptop's time to see it's true potential
```
from sklearn.metrics import mean_squared_error,mean_absolute_error
n_epochs = 100
batch_size = 100
minibatches_per_epoch = 100
for i in range(n_epochs):
#training
epoch_y_true = []
epoch_y_pred = []
b_c = b_loss = 0
for j, (b_desc,b_title,b_cat, b_y) in enumerate(
iterate_minibatches(desc_tr,title_tr,nontext_tr,target_tr,batchsize=batch_size,shuffle=True)):
if j > minibatches_per_epoch:break
loss,pred_probas = train_fun(b_desc,b_title,b_cat,b_y)
b_loss += loss
b_c +=1
epoch_y_true.append(b_y)
epoch_y_pred.append(pred_probas)
epoch_y_true = np.concatenate(epoch_y_true)
epoch_y_pred = np.concatenate(epoch_y_pred)
print "Train:"
print '\tloss:',b_loss/b_c
print '\trmse:',mean_squared_error(epoch_y_true,epoch_y_pred)**.5
print '\tmae:',mean_absolute_error(epoch_y_true,epoch_y_pred)
#evaluation
epoch_y_true = []
epoch_y_pred = []
b_c = b_loss = 0
for j, (b_desc,b_title,b_cat, b_y) in enumerate(
iterate_minibatches(desc_ts,title_ts,nontext_ts,target_ts,batchsize=batch_size,shuffle=True)):
if j > minibatches_per_epoch: break
loss,pred_probas = eval_fun(b_desc,b_title,b_cat,b_y)
b_loss += loss
b_c +=1
epoch_y_true.append(b_y)
epoch_y_pred.append(pred_probas)
epoch_y_true = np.concatenate(epoch_y_true)
epoch_y_pred = np.concatenate(epoch_y_pred)
print "Val:"
print '\tloss:',b_loss/b_c
print '\trmse:',mean_squared_error(epoch_y_true,epoch_y_pred)**.5
print '\tmae:',mean_absolute_error(epoch_y_true,epoch_y_pred)
print "If you are seeing this, it's time to backup your notebook. No, really, 'tis too easy to mess up everything without noticing. "
```
# Final evaluation
Evaluate network over the entire test set
```
#evaluation
epoch_y_true = []
epoch_y_pred = []
b_c = b_loss = 0
for j, (b_desc,b_title,b_cat, b_y) in enumerate(
iterate_minibatches(desc_ts,title_ts,nontext_ts,target_ts,batchsize=batch_size,shuffle=True)):
loss,pred_probas = eval_fun(b_desc,b_title,b_cat,b_y)
b_loss += loss
b_c +=1
epoch_y_true.append(b_y)
epoch_y_pred.append(pred_probas)
epoch_y_true = np.concatenate(epoch_y_true)
epoch_y_pred = np.concatenate(epoch_y_pred)
print "Scores:"
print '\tloss:',b_loss/b_c
print '\trmse:',mean_squared_error(epoch_y_true,epoch_y_pred)**.5
print '\tmae:',mean_absolute_error(epoch_y_true,epoch_y_pred)
```
Now tune the monster for least MSE you can get!
# Next time in our show
* Recurrent neural networks
* How to apply them to practical problems?
* What else can they do?
* Why so much hype around LSTM?
* Stay tuned!
| true | code | 0.343094 | null | null | null | null |
|
# Regarding this Notebook
This is a replication of the original analysis performed in the paper by [Waade & Enevoldsen 2020](missing). This replication script will not be updated as it is intended for reproducibility. Any deviations from the paper is marked with bold for transparency.
Footnotes and internal documentation references are removed from this example to avoid confusion.
---
# 2.2 Using tomsup
One of the advantages of computational models of cognitive processes is that the implications of the model can be worked out by simulating the model’s behavior in a variety of situations. tomsup, in particular, allows the user to test the k-ToM model as it plays a wide set of game-theoretical situations (e.g. Matching Pennies or Prisoner’s Dilemma), in interaction with a variety of different agents (e.g. other k-ToM or less sophisticated agents), within different possible settings (e.g. repeated interactions with the same opponent, or round-robin tournaments). In order to better understand the setup of the tomsup package, we start with the case of two simple agents interacting, followed by a simple example using k-ToM agents, which will also illustrate how one might implement tomsup in an experiment. Lastly, we will show how to run a simulation using multiple agents as well as how to plot the evolving internal states of a k-ToM agent.

In this simple scenario two agents are playing the Matching Pennies game. One agent hides a penny in one hand: let’s say it chooses 0 for hiding it in the left hand, and 1 for the right. The other agent has to guess where the penny is. If the second agent guesses correctly (chooses the same hand as the first), it wins and the first loses. In other words, the first agent wants to choose the hand that the second will not choose, and the second wants to choose the hand that the first chooses. In this example, one of the agents implements the Random Bias strategy (e.g. has a 60 percent probability of choosing right over left), while the other implements a classic Q-learning strategy (a model-free reinforcement learning mechanism updating the expected reward of choosing a specific option on a trial-by-trial basis). The full list of strategies already implemented in tomsup is accessible using the function `valid_agents()`.

The user first has to install the tomsup package, developed using Python 3.6 (Van Rossum & Drake, 2009). The package can be downloaded and installed using pip:
```pip3 install tomsup```
**However, in this notebook we will assume the user has simply cloned the GitHub repository. Feel free to skip the next code chunk if that is not the case.**
```
# assuming you are in the github folder, change the path - not relevant if tomsup is installed via pip
import os
os.chdir("..") # go out of the tutorials folder
```
Both approaches will also install the required dependencies. Now tomsup can be imported into Python as follows:
```
import tomsup as ts
```
We will also set an arbitrary seed to ensure reproducibility:
```
import random
import numpy as np
np.random.seed(1995)
random.seed(1995) # The year of birth of the first author
```
First we need to set up the Matching Pennies game. As different games are defined by different payoff matrices, we set up the game by creating the appropriate payoff matrix using the ```PayoffMatrix``` class.
```
# initiate the competitive matching pennies game
penny = ts.PayoffMatrix(name="penny_competitive")
# print the payoff matrix
print(penny)
```
The Matching Pennies game is a zero-sum game, meaning that for one agent to get a reward, the opponent has to lose. Agents thus have to predict their opponents' behavior, which is ideal for investigating theory of mind (ToM). Note that to explore other payoff matrices included in the package, or to learn how to specify a custom payoff matrix, the user can type the `help(ts.PayoffMatrix)` command.
Then we create the first of the two competing agents:
```
# define the random bias agent, which chooses 1 70 percent of the time, and call the agent "jung"
jung = ts.RB(bias=0.7)
# Examine Agent
print(f"jung is a class of type: {type(jung)}")
if isinstance(jung, ts.Agent):
print(f"but jung is also an instance of the parent class ts.Agent")
# let us have Jung make a choice
choice = jung.compete()
print(f"jung chose {choice} and its probability for choosing 1 was {jung.get_bias()}.")
```
Note that it is possible to create one or more agents simultaneously using the convenient `create_agents()` and passing any starting parameters to it in the form of a dictionary.
```
# create a reinforcement learning agent
skinner = ts.create_agents(agents="QL", start_params={"save_history": True})
```
Now that both agents are created, we have them play against each other.
```
# have the agents compete for 30 rounds
results = ts.compete(jung, skinner, p_matrix=penny, n_rounds=30)
# examine results
print(results.head()) # inspect the first 5 rows of the dataframe
```
**Note:** you can remove the `print()` to get a nicer printout of the dataframe.
```
results.head() # inspect the first 5 rows of the dataframe
```
The data frame stores the choice of each agent as well as their resulting payoff. Simply summing the payoff columns would determine the winner.
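For instance, a minimal sketch of tallying the outcome is shown below; it assumes the results data frame uses the default tomsup column names `payoff_agent0` and `payoff_agent1` for the two agents' payoffs (an assumption on our part; check `results.columns` if in doubt).
```
# Tally the total payoff per agent; the column names are assumed to be the
# tomsup defaults 'payoff_agent0' (jung, RB) and 'payoff_agent1' (skinner, QL).
print("jung (RB) total payoff:", results["payoff_agent0"].sum())
print("skinner (QL) total payoff:", results["payoff_agent1"].sum())
```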
## k-ToM
Here we will present some simple examples of the k-ToM agent. For a more in-depth description we recommend checking the expanded introduction on the [Github repository](https://github.com/KennethEnevoldsen/tomsup/blob/master/tutorials/introduction_to_tom.ipynb).
We will start off by creating a 1-ToM with default priors and `save_history=True` to examine its inner workings. Notice that `save_history` is turned off by default to save memory, which is especially a concern for ToM agents with a high sophistication level.
```
# Creating a simple 1-ToM with default parameters
tom_1 = ts.TOM(level=1, dilution=None, save_history=True)
# Extract the parameters
tom_1.print_parameters()
```
Note that k-ToM agents by default use agnostic starting beliefs. These can be shown in detail and specified as desired, as shown in the **appendix of the paper**.
To increase the agent's tendency to choose one, we could simply increase its bias. Similarly, if we want the agent to behave in a more deterministic fashion, we can decrease the behavioural temperature. When the parameter values are set, we can play the agent against an opponent using the `.compete()` method, where `agent` denotes the agent's role in the payoff matrix (0 or 1) and `op_choice` denotes the choice of the opponent during the previous round.
```
tom_2 = ts.TOM(
level=2,
volatility=-2,
b_temp=-2, # more deterministic
bias=0,
dilution=None,
save_history=True,
)
choice = tom_2.compete(p_matrix=penny, agent=0, op_choice=None)
print("tom_2 chose:", choice)
```
The user is recommended to have the 1-ToM and the 2-ToM agents compete using the previously presented `ts.compete()` function for simplicity. However, to make the process more transparent, in the following we create a simple for-loop:
```
tom_2.reset() # reset before start
prev_choice_1tom = None
prev_choice_2tom = None
for trial in range(1, 4):
# note that op_choice is choice on previous turn
# and that agent is the agent you respond to in the payoff matrix
choice_1 = tom_1.compete(p_matrix=penny, agent=0, op_choice=prev_choice_1tom)
choice_2 = tom_2.compete(p_matrix=penny, agent=1, op_choice=prev_choice_2tom)
# update previous choice
prev_choice_1tom = choice_1
prev_choice_2tom = choice_2
print(
f"Round {trial}",
f" 1-ToM choose {choice_1}",
f" 2-ToM choose {choice_2}",
sep="\n",
)
```
A for-loop like this can be used to implement k-ToM in an experimental setting by replacing one agent with the behavior of a participant. Examples of such implementations (interfacing with PsychoPy) are available in the [documentation](https://github.com/KennethEnevoldsen/tomsup/tree/master/tutorials/psychopy_experiment).
```
tom_2.print_internal(
keys=["p_k", "p_op"], level=[0, 1] # print these two states
) # for the agent simulated opponents 0-ToM and 1-ToM
```
For instance, we can note that the estimate of the opponent's sophistication level (`p_k`) slightly favors a 1-ToM as opposed to a 0-ToM and that the average probability of the opponent choosing one (`p_op`) slightly favors 1 (which was indeed the option the opponent chose). These estimates are quite uncertain due to the few rounds played. More information on how to interpret the internal states of the ToM agent is available in the documentation of the package, e.g. by using the help function `help(tom_2.print_internal)`.
## Multiple Agents and Visualizing Results
The above syntax is useful for small setups. However, the user might want to build larger simulations involving several agents to simulate data for experimental setup or test underlying assumptions. The package provides syntax for quickly iterating over multiple agents, rounds and even simulations. We will here show a quick example along with how to visualize the results and internal states of ToM agents.
```
# Create a list of agents
agents = ["RB", "QL", "WSLS", "1-TOM", "2-TOM"]
# And set their starting parameters. An empty dictionary denotes default values
start_params = [{"bias": 0.7}, {"learning_rate": 0.5}, {}, {}, {}]
group = ts.create_agents(agents, start_params) # create a group of agents
# Specify the environment
# round_robin e.g. each agent will play against all other agents
group.set_env(env="round_robin")
# Finally, we make the group compete 20 simulations of 30 rounds
results = group.compete(p_matrix=penny, n_rounds=30, n_sim=20, save_history=True)
```
Following the simulation, a data frame can be extracted as before, with additional columns reporting simulation number, competing agent pair (`agent0` and `agent1`) and if `save_history=True` it will also add two columns denoting the internal states of each agent, e.g. estimates and expectations at each trial.
```
res = group.get_results()
print(res.head(1)) # print the first row
```
**Again, removing the print statement gives you a more readable output**
```
res.head(1)
```
**To allow other authors to examine these results, we have also saved them to a newline-delimited .ndjson file.**
```
res.to_json("tutorials/paper.ndjson", orient="records", lines=True)
```
The package also provides convenient functions for plotting the agent's choices and performance.
> for nicer plots we will increase the figure size using the following code. This is excluded from the paper for simplicity
```
import matplotlib.pyplot as plt
# Set figure size
plt.rcParams["figure.figsize"] = [10, 10]
# plot a heatmap of the rewards for all agent in the tournament
group.plot_heatmap(cmap="RdBu_r")
plt.rcParams["figure.figsize"] = [5, 5]
# plot the choices of the 1-ToM agent when competing against the WSLS agent
group.plot_choice(agent0="WSLS", agent1="1-TOM", agent=1)
# plot the choices of the 1-ToM agent when competing against the WSLS agent
group.plot_choice(agent0="RB", agent1="1-TOM", agent=1)
# plot the score of the 1-ToM agent when competing against the WSLS agent
group.plot_score(agent0="WSLS", agent1="1-TOM", agent=1)
# plot the score of the 2-ToM agent when competing against the WSLS agent
group.plot_score(agent0="WSLS", agent1="2-TOM", agent=1)
```
As seen in the heatmap, the k-ToM models compare favorably against simpler agents such as the QL.
Furthermore, notice that the 1-ToM and 2-ToM compare especially favorably against the WSLS agent,
as this agent acts as a deterministic 0-ToM. Similarly, we see that the 2-ToM agent incurs a cost
for being more complex, being less able to take advantage of the deterministic nature of WSLS.
We can examine this further in the figures, where we see that the 1-ToM is almost perfectly able
to predict the behaviour of the WSLS agent after about turn 5 across simulations, while the 2-ToM
takes longer to estimate the behaviour. The figures also show that the 1-ToM displays different
behavioural patterns depending on its opponent: when playing against an RB agent it shows
bias-estimation behaviour, while when playing against the WSLS it shows an oscillating
choice pattern. Ultimately these plots are meant for initial investigation, and more elaborate plots
can be constructed from the results data frame.
> here we just refer to the figures, for more exact references please see the paper
Besides these general plots, the package also contains a series of shortcuts for plotting $k$-ToM's internal states, such as its estimate of its opponent's sophistication level. Here we see that the 2-ToM correctly estimates its opponent as having a sophistication level of 1 on average.
```
# plot 2-ToM estimate of its opponent sophistication level
group.plot_p_k(agent0="1-TOM", agent1="2-TOM", agent=1, level=0)
group.plot_p_k(agent0="1-TOM", agent1="2-TOM", agent=1, level=1)
```
It is also easy to plot k-ToM's estimates of its opponent's model parameters. As an example, the following code plots the 2-ToM's estimate of 1-ToM's volatility and bias. We see that the ToM agent approaches a correct estimate of the default volatility of -2, as well as correctly estimating that its opponent has no inherent bias.
```
# plot 2-ToM estimate of its opponent's volatility while believing the opponent to be level 1.
group.plot_tom_op_estimate(
agent0="1-TOM", agent1="2-TOM", agent=1, estimate="volatility", level=1, plot="mean"
)
# plot 2-ToM estimate of its opponent's bias while believing the opponent to be level 1.
group.plot_tom_op_estimate(
agent0="1-TOM", agent1="2-TOM", agent=1, estimate="bias", level=1, plot="mean"
)
```
Use `help(ts.AgentGroup.plot_tom_op_estimate)` for information on how to plot the other estimated parameters or k-ToM's uncertainty in these parameters.
Additional information can be found in the history column of the results data frame, if needed. This includes all of k-ToM's internal states (the changing variables in the model), such as choice probability, gradient and estimate uncertainties, as well as k-ToM's estimates of its opponent's internal states. Documentation, examples and further tutorials can be found on the Github repository, which also includes a more in-depth description of the dynamics of **the k-ToM model implementation**.
---
## Are you left with any questions?
Feel free to open a github issue with questions and or bug reports.
Best,
*Enevoldsen and Waade*
| true | code | 0.830457 | null | null | null | null |
|
# Building the dataset
In this notebook, I'm going to be working with three datasets to create the dataset that the chatbot will be trained on.
```
import pandas as pd
files_path = 'D:/Sarcastic Chatbot/Input/'
```
# First dataset
**The Wordball Joke Dataset**, [link](https://www.kaggle.com/bfinan/jokes-question-and-answer/).
This dataset consists of three files, namely:
1. <i>qajokes1.1.2.csv</i>: with <i>75,114</i> pairs.
2. <i>t_lightbulbs.csv</i>: with <i>2,640</i> pairs.
3. <i>t_nosubject.csv</i>: with <i>32,120</i> pairs.
However, I'm not going to incorporate <i>t_lightbulbs.csv</i> in my dataset because I don't want that many examples of one topic. Besides, all the examples are similar in structure (they all start with <i>how many</i>).
Read the data files into pandas dataframes:
```
wordball_qajokes = pd.read_csv(files_path + 'qajokes1.1.2.csv', usecols=['Question', 'Answer'])
wordball_nosubj = pd.read_csv(files_path + 't_nosubject.csv', usecols=['Question', 'Answer'])
print(len(wordball_qajokes))
print(len(wordball_nosubj))
wordball_qajokes.head()
wordball_nosubj.head()
```
Concatenate both dataframes into one:
```
wordball = pd.concat([wordball_qajokes, wordball_nosubj], ignore_index=True)
wordball.head()
print(f"Number of question-answer pairs in the Wordball dataset: {len(wordball)}")
```
## Text Preprocessing
It turns out that not all cells are of type string. So, we can just apply the *str* function to make sure that all of them are of the same desired type.
```
wordball = wordball.applymap(str)
```
Let's look at the characters used in this dataset:
```
def distinct_chars(data, cols):
"""
This method takes in a pandas dataframe and prints all distinct characters.
data: a pandas dataframe.
cols: a Python list, representing names of columns for questions and answers. First item of the list should be the name
of the questions column and the second item should be the name of the column corresponding to answers.
"""
if cols is None:
cols = list(data.columns)
# join all questions into one string
questions = ' '.join(data[cols[0]])
# join all answers into one string
answers = ' '.join(data[cols[1]])
# get distinct characters used in the data (all questions and answers)
dis_chars = set(questions+answers)
# print the distinct characters that are used in the data
print(f"Number of distinct characters used in the dataset: {len(dis_chars)}")
# print(dis_chars)
dis_chars = list(dis_chars)
# Now let's print those characters in an organized way
digits = [char for char in dis_chars if char.isdigit()]
alphabets = [char for char in dis_chars if char.isalpha()]
special = [char for char in dis_chars if not (char.isdigit() | char.isalpha())]
# sort them to make them easier to read
digits = sorted(digits)
alphabets = sorted(alphabets)
special = sorted(special)
print(f"Digits: {digits}")
print(f"Alphabets: {alphabets}")
print(f"Special characters: {special}")
distinct_chars(wordball, ['Question', 'Answer'])
```
The following function replaces some characters with others, removes unwanted characters and gets rid of extra whitespaces from the data.
```
def clean_text(text):
"""
This method takes a string, applies different text preprocessing (characters replacement, removal of unwanted characters,
removal of extra whitespaces) operations and returns a string.
text: a string.
"""
import re
text = str(text)
# REPLACEMENT
# replace " with ' (because they basically mean the same thing)
# text = text.replace('\"','\'')
text = re.sub('\"', '\'', text)
# replace “ and ” with '
# text = text.replace("“",'\'').replace("”",'\'')
text = re.sub("“", '\'', text)
text = re.sub("”", '\'', text)
# replace ’ with '
# text = text.replace('’','\'')
text = re.sub('’', '\'', text)
# replace [] and {} with ()
#text = text.replace('[','(').replace(']',')').replace('{','(').replace('}',')')
text = re.sub('\[','(', text)
text = re.sub('\]',')', text)
text = re.sub('\{','(', text)
text = re.sub('\}',')', text)
# replace ? with itself and a whitespace preceding it
# ex. what's your name? (we want the word name and question mark to be separate tokens)
# text = re.sub('\?', ' ?', text)
# creating a space between a word and the punctuation following it
# punctuation we're using: . , : ; ' ? ! + - * / = % $ @ & ( )
text = re.sub("([?.!,:;'?!+\-*/=%$@&()])", r" \1 ", text)
# REMOVAL OF UNWANTED CHARACTERS
# accept only alphanumeric and some special characters and remove all others
# a-zA-Z0-9 : matches any alphanumeric character and the underscore.
# \. : matches .
# \, : matches ,
# \: : matches :
# \; : matches ;
# \' : matches '
# \? : matches ?
# \! : matches !
# \+ : matches +
# \- : matches -
# \* : matches *
# \/ : matches /
# \= : matches =
# \% : matches %
# \$ : matches $
# \@ : matches @
# \& : matches &
# ^ is added to the beginning of the set to express that we want the regex to recognize all other characters except
# these that are explicitly specified, so that we can omit them.
# define the pattern
pattern = re.compile('[^a-zA-Z0-9_\.\,\:\;\'\?\!\+\-\*\/\=\%\$\@\&\(\)]')
# remove unwanted characters
text = re.sub(pattern, ' ', text)
# lower case the characters in the string
text = text.lower()
# REMOVAL OF EXTRA WHITESPACES
# remove duplicated spaces
text = re.sub(' +', ' ', text)
# remove leading and trailing spaces
text = text.strip()
return text
```
Let's try it out:
```
clean_text("A nice quote I read today: “Everything that you are going through is preparing you for what you asked for”. @hi % & =+-*/")
```
The following method prints a question-answer pair from the dataset, it will be helpful to give us a sense of what the *clean_text* function results in:
```
def print_question_answer(df, index, cols):
print(f"Question: ({index})")
print(df.loc[index][cols[0]])
print(f"Answer: ({index})")
print(df.loc[index][cols[1]])
print("Before applying text preprocessing:")
print_question_answer(wordball, 102, ['Question', 'Answer'])
print_question_answer(wordball, 200, ['Question', 'Answer'])
print_question_answer(wordball, 88376, ['Question', 'Answer'])
print_question_answer(wordball, 94351, ['Question', 'Answer'])
```
Apply text preprocessing (characters replacement, removal of unwanted characters, removal of extra whitespaces):
```
wordball = wordball.applymap(clean_text)
print("After applying text preprocessing:")
print_question_answer(wordball, 102, ['Question', 'Answer'])
print_question_answer(wordball, 200, ['Question', 'Answer'])
print_question_answer(wordball, 88376, ['Question', 'Answer'])
print_question_answer(wordball, 94351, ['Question', 'Answer'])
```
The following function applies some preprocessing operations on the data, concretely:
1. Drops unnecessary duplicate pairs (rows) but keeps only one instance of all duplicates. *(For example, if the dataset contains three duplicates of the same question-answer pair, then two of them would be removed and one kept.)*
2. Drops rows with an empty question/answer. *(These may appear because of the previous step or because they happen to be empty in the original dataset.)*
3. Drops rows with more than 30 words in either the question or the answer, or if the answer has fewer than two characters. *(Note: this is a hyperparameter and you can try other values.)*
```
def preprocess_data(data, cols):
"""
This method preprocess data and does the following:
    1. drops unnecessary duplicate pairs.
    2. drops rows with empty strings.
    3. drops rows with more than 30 words in either the question or the answer,
    or if the answer has less than two characters.
Arguments:
data: a pandas dataframe.
cols: a Python list, representing names of columns for questions and answers. First item of the list should be the name
of the questions column and the second item should be the name of the column corresponding to answers.
Returns:
a pandas dataframe.
"""
# (1) Remove unecessary duplicate pairs but keep only one instance of all duplicates.
    print('Removing unnecessary duplicate pairs:')
data_len_before = len(data) # len of data before removing duplicates
print(f"# of examples before removing duplicates: {data_len_before}")
# drop duplicates
data = data.drop_duplicates(keep='first')
data_len_after = len(data) # len of data after removing duplicates
print(f"# of examples after removing duplicates: {data_len_after}")
print(f"# of removed duplicates: {data_len_before-data_len_after}")
# (2) Drop rows with empty strings.
print('Removing empty string rows:')
if cols is None:
cols = list(data.columns)
data_len_before = len(data) # len of data before removing empty strings
print(f"# of examples before removing rows with empty question/answers: {data_len_before}")
# I am going to use boolean masking to filter out rows with an empty question or answer
data = data[(data[cols[0]] != '') & (data[cols[1]] != '')]
# also, the following row results in the same as the above.
# data = data.query('Answer != "" and Question != ""')
data_len_after = len(data) # len of data after removing empty strings
print(f"# of examples after removing with empty question/answers: {data_len_after}")
print(f"# of removed empty string rows: {data_len_before-data_len_after}")
# (3) Drop rows with more than 30 words in either the question or the answer
    # or if the answer has less than two characters.
def accepted_length(qa_pair):
q_len = len(qa_pair[0].split(' '))
a_len = len(qa_pair[1].split(' '))
if (q_len <= 30) & ((a_len <= 30) & (len(qa_pair[1]) > 1)):
return True
return False
print('Removing rows with more than 30 words in either the question or the answer:')
data_len_before = len(data) # len of data before dropping those rows (30+ words)
print(f"# of examples before removing rows with more than 30 words: {data_len_before}")
# filter out rows with more than 30 words
accepted_mask = data.apply(accepted_length, axis=1)
data = data[accepted_mask]
    data_len_after = len(data) # len of data after dropping those rows (30+ words)
    print(f"# of examples after removing rows with more than 30 words: {data_len_after}")
    print(f"# of removed rows with more than 30 words: {data_len_before-data_len_after}")
print("Data preprocessing is done.")
return data
wordball = preprocess_data(wordball, ['Question', 'Answer'])
print(f"# of question-answer pairs we have left in the Wordball dataset: {len(wordball)}")
```
Let's look at the characters after cleaning the data:
```
distinct_chars(wordball, ['Question', 'Answer'])
```
# Second Dataset
**reddit /r/Jokes**, [here](https://www.kaggle.com/cuddlefish/reddit-rjokes#jokes_score_name_clean.csv).
This dataset consists of two files, namely:
1. <i>jokes_score_name_clean.csv</i>: with <i>133,992</i> pairs.
2. <i>all_jokes.csv</i>
However, I'm not going to incorporate <i>all_jokes.csv</i> in the dataset because it's so messy.
```
reddit_jokes = pd.read_csv(files_path + 'jokes_score_name_clean.csv', usecols=['q', 'a'])
```
Let's rename the columns to have them aligned with the previous dataset:
```
reddit_jokes.rename(columns={'q':'Question', 'a':'Answer'}, inplace=True)
reddit_jokes.head()
print(len(reddit_jokes))
distinct_chars(reddit_jokes, ['Question', 'Answer'])
```
## Text Preprocessing
```
reddit_jokes = reddit_jokes.applymap(str)
```
Reddit data has some special tags like <i>[removed]</i> or <i>[deleted]</i> (these two mean that the comment has been removed/deleted). Also, they're written in an inconsistent way, i.e. you may find the tag <i>[removed]</i> capitalized or lowercased.<br>
The next function will address reddit tags as follows:
1. Drops rows with deleted, removed or censored tags.
2. Replaces other tags found in text with a whitespace. *(i.e. some comments have tags like <i>[censored], [gaming], [long], [request] and [dirty]</i> and we want to omit these tags from the text)*
```
def clean_reddit_tags(data, cols):
"""
This function removes reddit-related tags from the data and does the following:
1. drops rows with deleted, removed or censored tags.
2. replaces other tags found in text with a whitespace.
Arguments:
data: a pandas dataframe.
cols: a Python list, representing names of columns for questions and answers. First item of the list should be the name
of the questions column and the second item should be the name of the column corresponding to answers.
Returns:
a pandas dataframe.
"""
import re
if cols is None:
cols = list(data.columns)
# First, I'm going to lowercase all the text to address these tags
# however, I'm not going to alter the original dataframe because I don't want text to be lowercased.
data_copy = data.copy()
data_copy[cols[0]] = data_copy[cols[0]].str.lower()
data_copy[cols[1]] = data_copy[cols[1]].str.lower()
# drop rows with deleted, removed or censored tags.
# qa_pair[0] is the question, qa_pair[1] is the answer
mask = data_copy.apply(lambda qa_pair:
False if (qa_pair[0]=='[removed]') | (qa_pair[0]=='[deleted]') | (qa_pair[0]=='[censored]') |
(qa_pair[1]=='[removed]') | (qa_pair[1]=='[deleted]') | (qa_pair[1]=='[censored]')
else True, axis=1)
# drop the rows, notice we're using the mask to filter out those rows
# in the original dataframe 'data', because we don't need it anymore
data = data[mask]
print(f"# of rows dropped with [deleted], [removed] or [censored] tags: {mask.sum()}")
# replaces other tags found in text with a whitespace.
def sub_tag(pair):
"""
This method substitute tags (square brackets with words inside) with whitespace.
Arguments:
pair: a Pandas Series, where the first item is the question and the second is the answer.
Returns:
pair: a Pandas Series.
"""
# \[(.*?)\] is a regex to recognize square brackets [] with anything in between
p=re.compile("\[(.*?)\]")
pair[0] = re.sub(p, ' ', pair[0])
pair[1] = re.sub(p, ' ', pair[1])
return pair
# substitute tags with whitespaces.
data = data.apply(sub_tag, axis=1)
return data
print("Before addressing tags:")
print_question_answer(reddit_jokes, 1825, ['Question', 'Answer'])
print_question_answer(reddit_jokes, 52906, ['Question', 'Answer'])
print_question_answer(reddit_jokes, 59924, ['Question', 'Answer'])
print_question_answer(reddit_jokes, 1489, ['Question', 'Answer'])
```
**Note:** the following cell may take multiple seconds to finish.
```
reddit_jokes = clean_reddit_tags(reddit_jokes, ['Question', 'Answer'])
reddit_jokes
print("After addressing tags:")
# because rows with [removed], [deleted] and [censored] tags have been dropped
# we're not going to print the rows (index=1825, index=59924) since they contain
# those tags, or we're going to have a KeyError
print_question_answer(reddit_jokes, 52906, ['Question', 'Answer'])
print_question_answer(reddit_jokes, 1489, ['Question', 'Answer'])
```
**Note:** notice that the question whose index is 52906 has some leading whitespace. That's because it had the <i>[Corny]</i> tag and the function replaced it with whitespace. Also, the question whose index is 1489 has an empty answer; that's because the original answer was just square brackets with some whitespace in between. We're going to address all of that next!
Now, let's apply the *clean_text* function on the reddit data.<br>
**Remember:** the *clean_text* function replaces some characters with others, removes unwanted characters and gets rid of extra whitespaces from the data.
```
reddit_jokes = reddit_jokes.applymap(clean_text)
print_question_answer(reddit_jokes, 52906, ['Question', 'Answer'])
print_question_answer(reddit_jokes, 1489, ['Question', 'Answer'])
```
Everything looks good!<br>
Now, let's apply the *preprocess_data* function on the data.<br>
**Remember:** the *preprocess_data* function applies the following preprocessing operations:
1. Drops unnecessary duplicate pairs (rows) but keeps only one instance of all duplicates. *(For example, if the dataset contains three duplicates of the same question-answer pair, then two of them would be removed and one kept.)*
2. Drops rows with an empty question/answer. *(These may appear because of the previous step or because they happen to be empty in the original dataset.)*
3. Drops rows with more than 30 words in either the question or the answer, or if the answer has fewer than two characters. *(Note: this is a hyperparameter and you can try other values.)*
```
reddit_jokes = preprocess_data(reddit_jokes, ['Question', 'Answer'])
print(f"Number of question answer pairs in the reddit /r/Jokes dataset: {len(reddit_jokes)}")
distinct_chars(reddit_jokes, ['Question', 'Answer'])
```
# Third Dataset
**Question-Answer Jokes**, [here](https://www.kaggle.com/jiriroz/qa-jokes).
This dataset consists of one file, namely:
* <i>jokes.csv</i>: with <i>38,269</i> pairs.
```
qa_jokes = pd.read_csv(files_path + 'jokes.csv', usecols=['Question', 'Answer'])
qa_jokes
print(len(qa_jokes))
distinct_chars(qa_jokes, ['Question', 'Answer'])
```
## Text Preprocessing
If you look at some examples in the dataset, you notice that some examples have 'Q:' at the beginning of the question and 'A:' at the beginning of the answer, so we need to get rid of these prefixes because they don't convey useful information.<br>
You also notice some examples where both 'Q:' and 'A:' are found in either the question or the answer; I'm not going to omit these because they probably convey information and are part of the answer. However, some of them have 'Q:' in the question and an answer of the form 'Q: question A: answer' where the question repeated in the answer is the same question, so we need to fix that.
```
def clean_qa_prefixes(data, cols):
"""
This function removes special prefixes ('Q:' and 'A:') found in the data.
i.e. input="Q: how's your day?" --> output=" how's your day?"
Arguments:
data: a pandas dataframe.
cols: a Python list, representing names of columns for questions and answers. First item of the list should be the name
of the questions column and the second item should be the name of the column corresponding to answers.
Returns:
a pandas dataframe.
"""
def removes_prefixes(pair):
"""
This function removes prefixes ('Q:' and 'A:') from the question and answer.
Examples:
Input: qusetion="Q: what is your favorite Space movie?", answer='A: Interstellar!'
Output: qusetion=' what is your favorite Space movie?', answer=' Interstellar!'
Input: question="Q: how\'s your day?", answer='Q: how\'s your day? A: good, thanks.'
Output: qusetion=" how's your day?", answer='good, thanks.'
Input: qusetion='How old are you?', answer='old enough'
Output: qusetion='How old are you?', answer='old enough'
Arguments:
pair: a Pandas Series, where the first item is the question and the second is the answer.
Returns:
pair: a Pandas Series.
"""
# pair[0] corresponds to the question
# pair[1] corresponds to the answer
# if the question contains 'Q:' and the answer contains 'A:' but doesn't contain 'Q:'
if ('Q:' in pair[0]) and ('A:' in pair[1]) and ('Q:' not in pair[1]):
pair[0] = pair[0].replace('Q:','')
pair[1] = pair[1].replace('A:','')
# if the answer contains both 'Q:' and 'A:'
elif ('A:' in pair[1]) and ('Q:' in pair[1]):
pair[0] = pair[0].replace('Q:','')
# now we should check if the text between 'Q:' and 'A:' is the same text in the question (pair[0])
# because if they are, this means that the question is repeated in the answer and we should address that.
q_start = pair[1].find('Q:') + 2 # index of the start of the text that we want to extract
q_end = pair[1].find('A:') # index of the end of the text that we want to extract
q_txt = pair[1][q_start:q_end].strip()
# if the question is repeated in the answer
if q_txt == pair[0].strip():
# in case the question is repeated in the answer, removes it from the answer
pair[1] = pair[1][q_end+2:].strip()
return pair
return data.apply(removes_prefixes, axis=1)
print("Before removing unnecessary prefixes:")
print_question_answer(qa_jokes, 44, ['Question', 'Answer'])
print_question_answer(qa_jokes, 22, ['Question', 'Answer'])
print_question_answer(qa_jokes, 31867, ['Question', 'Answer'])
qa_jokes = clean_qa_prefixes(qa_jokes, ['Question', 'Answer'])
print("After removing unnecessary prefixes:")
print_question_answer(qa_jokes, 44, ['Question', 'Answer'])
print_question_answer(qa_jokes, 22, ['Question', 'Answer'])
print_question_answer(qa_jokes, 31867, ['Question', 'Answer'])
```
Notice that in the third example, both 'Q:' and 'A:' are part of the answer and convey information.
Now, let's apply the *clean_text* function on the Question-Answer Jokes data.<br>
**Remember:** the *clean_text* function replaces some characters with others, removes unwanted characters and gets rid of extra whitespaces from the data.
```
qa_jokes = qa_jokes.applymap(clean_text)
```
Now, let's apply the *preprocess_data* function on the data.<br>
**Remember:** the *preprocess_data* function applies the following preprocessing operations:
1. Drops unnecessary duplicate pairs (rows) but keep only one instance of all duplicates. *(For example, if the dataset contains three duplicates of the same question-answer pair, then two of them would be removed and one kept.)*
2. Drops rows with an empty question/answer. *(These may appear because of the previous step or because they happen to be empty in the original dataset.)*
3. Drops rows with more than 30 words in either the question or the answer, or if the answer has fewer than two characters. *(Note: this is a hyperparameter and you can try other values.)*
```
qa_jokes = preprocess_data(qa_jokes, ['Question', 'Answer'])
print(f"Number of question-answer pairs in the Question-Answer Jokes dataset: {len(qa_jokes)}")
distinct_chars(qa_jokes, ['Question', 'Answer'])
```
# Putting it together
Let's concatenate all the data we have to create our final dataset.
```
dataset = pd.concat([wordball, reddit_jokes, qa_jokes], ignore_index=True)
dataset.head()
print(f"Number of question-answer pairs in the dataset: {len(dataset)}")
```
There may be duplicate examples in the data so let's drop them:
```
data_len_before = len(dataset) # len of data before removing duplicates
print(f"# of examples before removing duplicates: {data_len_before}")
# drop duplicates
dataset = dataset.drop_duplicates(keep='first')
data_len_after = len(dataset) # len of data after removing duplicates
print(f"# of examples after removing duplicates: {data_len_after}")
print(f"# of removed duplicates: {data_len_before-data_len_after}")
```
Let's drop rows with NaN values if there's any:
```
dataset.dropna(inplace=True)
dataset
```
Let's make sure that all our cells are of the same type:
```
dataset = dataset.applymap(str)
print(f"Number of question-answer pairs in the dataset: {len(dataset)}")
distinct_chars(dataset, ['Question', 'Answer'])
```
Finally, let's save the dataset:
```
dataset.to_csv(files_path + 'dataset.csv')
```
| true | code | 0.390243 | null | null | null | null |
|
<a href="https://colab.research.google.com/github/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_10_3_text_generation.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# T81-558: Applications of Deep Neural Networks
**Module 10: Time Series in Keras**
* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)
* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).
# Module 10 Material
* Part 10.1: Time Series Data Encoding for Deep Learning [[Video]](https://www.youtube.com/watch?v=dMUmHsktl04&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_10_1_timeseries.ipynb)
* Part 10.2: Programming LSTM with Keras and TensorFlow [[Video]](https://www.youtube.com/watch?v=wY0dyFgNCgY&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_10_2_lstm.ipynb)
* **Part 10.3: Text Generation with Keras and TensorFlow** [[Video]](https://www.youtube.com/watch?v=6ORnRAz3gnA&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_10_3_text_generation.ipynb)
* Part 10.4: Image Captioning with Keras and TensorFlow [[Video]](https://www.youtube.com/watch?v=NmoW_AYWkb4&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_10_4_captioning.ipynb)
* Part 10.5: Temporal CNN in Keras and TensorFlow [[Video]](https://www.youtube.com/watch?v=i390g8acZwk&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_10_5_temporal_cnn.ipynb)
# Google CoLab Instructions
The following code ensures that Google CoLab is running the correct version of TensorFlow.
```
try:
%tensorflow_version 2.x
COLAB = True
print("Note: using Google CoLab")
except:
print("Note: not using Google CoLab")
COLAB = False
```
# Part 10.3: Text Generation with LSTM
Recurrent neural networks are also known for their ability to generate text. As a result, the output of the neural network can be free-form text. In this section, we will see how an LSTM can be trained on a textual document, such as classic literature, and learn to output new text that appears to be of the same form as the training material. If you train your LSTM on [Shakespeare](https://en.wikipedia.org/wiki/William_Shakespeare), it will learn to crank out new prose similar to what Shakespeare had written.
Don't get your hopes up. You are not going to teach your deep neural network to write the next [Pulitzer Prize for Fiction](https://en.wikipedia.org/wiki/Pulitzer_Prize_for_Fiction). The prose generated by your neural network will be nonsensical. However, it will usually be nearly grammatically correct and of a similar style to the source training documents.
A neural network generating nonsensical text based on literature may not seem useful at first glance. However, this technology gets so much interest because it forms the foundation for many more advanced technologies. The fact that the LSTM will typically learn human grammar from the source document opens a wide range of possibilities. You can use similar technology to complete sentences when a user is entering text. Simply the ability to output free-form text becomes the foundation of many other technologies. In the next part, we will use this technique to create a neural network that can write captions for images to describe what is going on in the picture.
### Additional Information
The following are some of the articles that I found useful in putting this section together.
* [The Unreasonable Effectiveness of Recurrent Neural Networks](http://karpathy.github.io/2015/05/21/rnn-effectiveness/)
* [Keras LSTM Generation Example](https://keras.io/examples/lstm_text_generation/)
### Character-Level Text Generation
There are several different approaches to teaching a neural network to output free-form text. The most basic question is if you wish the neural network to learn at the word or character level. In many ways, learning at the character level is the more interesting of the two. The LSTM is learning to construct its own words without even being shown what a word is. We will begin with character-level text generation. In the next module, we will see how we can use nearly the same technique to operate at the word level. We will implement word-level automatic captioning in the next module.
We begin by importing the needed Python packages and defining the sequence length, named **maxlen**. Time-series neural networks always accept their input as a fixed-length array. Because you might not use all of the sequence elements, it is common to fill extra elements with zeros. You will divide the text into sequences of this length, and the neural network will train to predict what comes after this sequence.
```
from tensorflow.keras.callbacks import LambdaCallback
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import LSTM
from tensorflow.keras.optimizers import RMSprop
from tensorflow.keras.utils import get_file
import numpy as np
import random
import sys
import io
import requests
import re
```
For this simple example, we will train the neural network on the classic children's book [Treasure Island](https://en.wikipedia.org/wiki/Treasure_Island). We begin by loading this text into a Python string and displaying the first 1,000 characters.
```
r = requests.get("https://data.heatonresearch.com/data/t81-558/text/"\
"treasure_island.txt")
raw_text = r.text
print(raw_text[0:1000])
```
We will extract all unique characters from the text and sort them. This technique allows us to assign a unique ID to each character. Because we sorted the characters, these IDs should remain the same. If we add new characters to the original text, then the IDs would change. We build two dictionaries. The first, **char_indices**, is used to convert a character into its ID. The second, **indices_char**, converts an ID back into its character.
```
processed_text = raw_text.lower()
processed_text = re.sub(r'[^\x00-\x7f]',r'', processed_text)
print('corpus length:', len(processed_text))
chars = sorted(list(set(processed_text)))
print('total chars:', len(chars))
char_indices = dict((c, i) for i, c in enumerate(chars))
indices_char = dict((i, c) for i, c in enumerate(chars))
```
We are now ready to build the actual sequences. Just like previous neural networks, there will be an $x$ and $y$. However, for the LSTM, $x$ and $y$ will both be sequences. The $x$ input will specify the sequences where $y$ are the expected output. The following code generates all possible sequences.
```
# cut the text in semi-redundant sequences of maxlen characters
maxlen = 40
step = 3
sentences = []
next_chars = []
for i in range(0, len(processed_text) - maxlen, step):
sentences.append(processed_text[i: i + maxlen])
next_chars.append(processed_text[i + maxlen])
print('nb sequences:', len(sentences))
sentences
print('Vectorization...')
x = np.zeros((len(sentences), maxlen, len(chars)), dtype=bool)
y = np.zeros((len(sentences), len(chars)), dtype=bool)
for i, sentence in enumerate(sentences):
for t, char in enumerate(sentence):
x[i, t, char_indices[char]] = 1
y[i, char_indices[next_chars[i]]] = 1
x.shape
y.shape
```
The dummy variables for $y$ are shown below.
```
y[0:10]
```
Next, we create the neural network. This neural network's primary feature is the LSTM layer, which allows the sequences to be processed.
```
# build the model: a single LSTM
print('Build model...')
model = Sequential()
model.add(LSTM(128, input_shape=(maxlen, len(chars))))
model.add(Dense(len(chars), activation='softmax'))
optimizer = RMSprop(lr=0.01)
model.compile(loss='categorical_crossentropy', optimizer=optimizer)
model.summary()
```
The LSTM will produce new text character by character. We will need to sample the correct letter from the LSTM predictions each time. The **sample** function accepts the following two parameters:
* **preds** - The output neurons.
* **temperature** - 1.0 is the most conservative, 0.0 is the most confident (willing to make spelling and other errors).
The sample function below essentially applies a temperature-scaled softmax to the neural network predictions. This causes each output neuron to become a probability of its particular letter, from which the next character is sampled.
```
def sample(preds, temperature=1.0):
# helper function to sample an index from a probability array
preds = np.asarray(preds).astype('float64')
preds = np.log(preds) / temperature
exp_preds = np.exp(preds)
preds = exp_preds / np.sum(exp_preds)
probas = np.random.multinomial(1, preds, 1)
return np.argmax(probas)
```
Keras calls the following function at the end of each training epoch. The code generates sample text that visually demonstrates how the neural network gets better at text generation. As the neural network trains, the generations should look more realistic.
```
def on_epoch_end(epoch, _):
# Function invoked at end of each epoch. Prints generated text.
print("******************************************************")
print('----- Generating text after Epoch: %d' % epoch)
start_index = random.randint(0, len(processed_text) - maxlen - 1)
for temperature in [0.2, 0.5, 1.0, 1.2]:
print('----- temperature:', temperature)
generated = ''
sentence = processed_text[start_index: start_index + maxlen]
generated += sentence
print('----- Generating with seed: "' + sentence + '"')
sys.stdout.write(generated)
for i in range(400):
x_pred = np.zeros((1, maxlen, len(chars)))
for t, char in enumerate(sentence):
x_pred[0, t, char_indices[char]] = 1.
preds = model.predict(x_pred, verbose=0)[0]
next_index = sample(preds, temperature)
next_char = indices_char[next_index]
generated += next_char
sentence = sentence[1:] + next_char
sys.stdout.write(next_char)
sys.stdout.flush()
print()
```
We are now ready to train. It can take up to an hour to train this network, depending on how fast your computer is. If you have a GPU available, please make sure to use it.
```
# Ignore useless W0819 warnings generated by TensorFlow 2.0. Hopefully can remove this ignore in the future.
# See https://github.com/tensorflow/tensorflow/issues/31308
import logging, os
logging.disable(logging.WARNING)
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "3"
# Fit the model
print_callback = LambdaCallback(on_epoch_end=on_epoch_end)
model.fit(x, y,
batch_size=128,
epochs=60,
callbacks=[print_callback])
```
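Once training has finished, you can reuse the same sampling loop outside the callback to generate text from any seed you like. The following is a minimal sketch (not part of the original notebook) that relies only on objects already defined above: `model`, `maxlen`, `chars`, `char_indices`, `indices_char`, and `sample`.
```
# Generate text from the trained model at a chosen temperature.
# This is a sketch that reuses the objects defined earlier in this notebook.
def generate_text(seed, length=400, temperature=0.5):
    seed = seed.lower()[-maxlen:].rjust(maxlen)  # trim/pad the seed to maxlen
    generated = seed
    sentence = seed
    for _ in range(length):
        x_pred = np.zeros((1, maxlen, len(chars)))
        for t, char in enumerate(sentence):
            if char in char_indices:  # skip characters unseen during training
                x_pred[0, t, char_indices[char]] = 1.
        preds = model.predict(x_pred, verbose=0)[0]
        next_char = indices_char[sample(preds, temperature)]
        generated += next_char
        sentence = sentence[1:] + next_char
    return generated

print(generate_text("the old sea captain looked at the map and", temperature=0.5))
```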
| true | code | 0.582491 | null | null | null | null |
|
# Test For The Best Machine Learning Algorithm For Prediction
This notebook takes about 40 minutes to run, but we've already run it and saved the data for you. Please read through it, though, so that you understand how we came to the conclusions we'll use moving forward.
## Six Algorithms
We're going to compare six different algorithms to determine the best one to produce an accurate model for our predictions.
### Logistic Regression
Logistic Regression (LR) is a technique borrowed from the field of statistics. It is the go-to method for binary classification problems (problems with two class values).

Logistic Regression is named for the function used at the core of the method: the logistic function. The logistic function is a probabilistic method used to determine whether or not the driver will be the winner. Logistic Regression predicts probabilities.
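To make the "predicts probabilities" point concrete, here is a minimal, self-contained sketch on synthetic data (not the F1 data loaded later); it simply shows that scikit-learn's `LogisticRegression` exposes per-class probabilities through `predict_proba`.
```
# Minimal sketch: logistic regression outputs class probabilities (synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
X_demo = rng.normal(size=(100, 3))                      # 100 samples, 3 features
y_demo = (X_demo[:, 0] + X_demo[:, 1] > 0).astype(int)  # binary target

clf = LogisticRegression().fit(X_demo, y_demo)
print(clf.predict_proba(X_demo[:3]))  # per-class probabilities, each row sums to 1
print(clf.predict(X_demo[:3]))        # hard class labels derived from them
```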
### Decision Tree
A tree has many analogies in real life, and it turns out that it has influenced a wide area of machine learning, covering both classification and regression. In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making.

This methodology is more commonly known as learning a decision tree from data, and the above tree is called a classification tree because the goal is to classify a driver as the winner or not.
### Random Forest
Random forest is a supervised learning algorithm. The "forest" it builds is an **ensemble of decision trees**, usually trained with the “bagging” method, a combination of learning models which increases the accuracy of the result.
A random forest mitigates the limitations of a single decision tree. It reduces the overfitting of datasets and increases precision. It generates predictions without requiring many configurations.

Here's the difference between the Decision Tree and Random Forest methods:

### Support Vector Machine Algorithm (SVC)
Support Vector Machines (SVMs) are a set of supervised learning methods used for classification, regression and detection of outliers.
The advantages of support vector machines are:
- Effective in high dimensional spaces
- Still effective in cases where number of dimensions is greater than the number of samples
- Uses a subset of training points in the decision function (called support vectors), so it is also memory efficient
- Versatile: different kernel functions can be specified for the decision function. Common kernels are provided, but it is also possible to specify custom kernels
The objective of an SVC (Support Vector Classifier) is to fit the data you provide, returning a "best fit" hyperplane that divides, or categorizes, your data.
### Gaussian Naive Bayes Algorithm
Naive Bayes is a classification algorithm for binary (two-class) and multi-class classification problems. The technique is easiest to understand when described using binary or categorical input values. The representation used for naive Bayes is probabilities.
A list of probabilities is stored to a file for a learned Naive Bayes model. This includes:
- **Class Probabilities:** The probabilities of each class in the training dataset.
- **Conditional Probabilities:** The conditional probabilities of each input value given each class value.
Naive Bayes can be extended to real-value attributes, most commonly by assuming a Gaussian distribution. This extension of Naive Bayes is called Gaussian Naive Bayes. Other functions can be used to estimate the distribution of the data, but the Gaussian (or normal distribution) is the easiest to work with because you only need to estimate the mean and the standard deviation from your training data.
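A minimal sketch on synthetic data shows the two ingredients mentioned above: the learned class probabilities and the per-class feature means (scikit-learn stores these as `class_prior_` and `theta_`).
```
# Minimal sketch: Gaussian Naive Bayes on synthetic data.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.RandomState(1)
X_demo = np.vstack([rng.normal(0, 1, (50, 2)),   # class 0 centred at 0
                    rng.normal(3, 1, (50, 2))])  # class 1 centred at 3
y_demo = np.array([0] * 50 + [1] * 50)

gnb = GaussianNB().fit(X_demo, y_demo)
print(gnb.class_prior_)               # class probabilities learned from the data
print(gnb.theta_)                     # per-class mean of each feature
print(gnb.predict_proba(X_demo[:2]))  # resulting class probabilities
```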
### k Nearest Neighbor Algorithm (kNN)
The k-Nearest Neighbors (KNN) algorithm is a simple, supervised machine learning algorithm that can be used to solve both classification and regression problems.
kNN works by finding the distances between a query and all of the examples in the data, selecting the specified number of examples (k) closest to the query, and then voting for the most frequent label (in the case of classification) or averaging the labels (in the case of regression).
The kNN algorithm assumes the similarity between the new case/data and available cases, and puts the new case into the category that is most similar to the available categories.

## Analyzing the Data
### Feature Importance
Another great quality of the random forest algorithm is that it's easy to measure the relative importance of each feature to the prediction.
The Scikit-learn Python library provides a great tool for this, which measures a feature's importance by looking at how much the tree nodes that use that feature reduce impurity across all trees in the forest. It computes this score automatically for each feature after training and scales the results so that the sum of all importances is equal to one.
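A minimal sketch of what that looks like in code (again with synthetic data; the actual F1 features are built further down):
```
# Minimal sketch: feature importances from a random forest (synthetic data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.RandomState(2)
X_demo = rng.normal(size=(200, 4))
y_demo = (X_demo[:, 0] > 0).astype(int)  # only the first feature matters

rf = RandomForestClassifier(n_estimators=100, random_state=2).fit(X_demo, y_demo)
print(rf.feature_importances_)           # importances sum to 1.0
```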
### Data Visualization When Building a Model
How do you visualize the influence of the data? How do you frame the problem?
An important tool in the data scientist's toolkit is the power to visualize data using several excellent libraries such as Seaborn or MatPlotLib. Representing your data visually might allow you to uncover hidden correlations that you can leverage. Your visualizations might also help you to uncover bias or unbalanced data.

### Splitting the Dataset
Prior to training, you need to split your dataset into two or more parts of unequal size that still represent the data well; a minimal scikit-learn sketch follows the list below.
1. Training. This part of the dataset is fit to your model to train it. This set constitutes the majority of the original dataset.
2. Testing. A test dataset is an independent group of data, often a subset of the original data, that you use to confirm the performance of the model you built.
3. Validating. A validation set is a smaller independent group of examples that you use to tune the model's hyperparameters, or architecture, to improve the model. Depending on your data's size and the question you are asking, you might not need to build this third set.
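A minimal sketch of the hold-out split with scikit-learn's `train_test_split` (synthetic data, purely illustrative):
```
# Minimal sketch: hold out 20% of the data as a test set.
import numpy as np
from sklearn.model_selection import train_test_split

X_demo = np.arange(100).reshape(50, 2)
y_demo = np.array([0, 1] * 25)

X_train, X_test, y_train, y_test = train_test_split(
    X_demo, y_demo, test_size=0.2, random_state=42, stratify=y_demo)
print(X_train.shape, X_test.shape)  # (40, 2) (10, 2)
```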
## Building the Model
Using your training data, your goal is to build a model, or a statistical representation of your data, using various algorithms to train it. Training a model exposes it to data and allows it to make assumptions about perceived patterns it discovers, validates, and accepts or rejects.
### Decide on a Training Method
Depending on your question and the nature of your data, you will choose a method to train it. Stepping through Scikit-learn's documentation, you can explore many ways to train a model. Depending on the results you get, you might have to try several different methods to build the best model. You are likely to go through a process whereby data scientists evaluate the performance of a model by feeding it unseen data, checking for accuracy, bias, and other quality-degrading issues, and selecting the most appropriate training method for the task at hand.
### Train a Model
Armed with your training data, you are ready to "fit" it to create a model. In many ML libraries you will find the code 'model.fit' - it is at this time that you send in your data as an array of values (usually 'X') and a feature variable (usually 'y').
### Evaluate the Model
Once the training process is complete, you will be able to evaluate the model's quality by using test data to gauge its performance. This data is a subset of the original data that the model has not previously analyzed. You can print out a table of metrics about your model's quality.
#### Model Fitting
In the Machine Learning context, model fitting refers to the accuracy of the model's underlying function as it attempts to analyze data with which it is not familiar.
#### Underfitting and Overfitting
Underfitting and overfitting are common problems that degrade the quality of the model, as the model either doesn't fit well enough, or it fits too well. This causes the model to make predictions either too closely aligned or too loosely aligned with its training data. An overfit model predicts training data too well because it has learned the data's details and noise too well. An underfit model is not accurate as it can neither accurately analyze its training data nor data it has not yet 'seen'.

Let's test out some algorithms to choose our path for modelling our predictions.
```
import warnings
warnings.filterwarnings("ignore")
import time
start = time.time()
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import pickle
from sklearn.metrics import confusion_matrix, precision_score
from sklearn.metrics import accuracy_score
from sklearn.preprocessing import StandardScaler,LabelEncoder,OneHotEncoder
from sklearn.model_selection import cross_val_score,StratifiedKFold,RandomizedSearchCV
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import confusion_matrix,precision_score,f1_score,recall_score
from sklearn.neural_network import MLPClassifier, MLPRegressor
plt.style.use('seaborn')
np.set_printoptions(precision=4)
data = pd.read_csv('./data_f1/data_filtered.csv')
data.head()
len(data)
dnf_by_driver = data.groupby('driver').sum()['driver_dnf']
driver_race_entered = data.groupby('driver').count()['driver_dnf']
driver_dnf_ratio = (dnf_by_driver/driver_race_entered)
driver_confidence = 1-driver_dnf_ratio
driver_confidence_dict = dict(zip(driver_confidence.index,driver_confidence))
driver_confidence_dict
dnf_by_constructor = data.groupby('constructor').sum()['constructor_dnf']
constructor_race_entered = data.groupby('constructor').count()['constructor_dnf']
constructor_dnf_ratio = (dnf_by_constructor/constructor_race_entered)
constructor_reliability = 1-constructor_dnf_ratio
constructor_reliability_dict = dict(zip(constructor_reliability.index,constructor_reliability))
constructor_reliability_dict
data['driver_confidence'] = data['driver'].apply(lambda x:driver_confidence_dict[x])
data['constructor_reliability'] = data['constructor'].apply(lambda x:constructor_reliability_dict[x])
#removing retired drivers and constructors
active_constructors = ['Alpine F1', 'Williams', 'McLaren', 'Ferrari', 'Mercedes',
'AlphaTauri', 'Aston Martin', 'Alfa Romeo', 'Red Bull',
'Haas F1 Team']
active_drivers = ['Daniel Ricciardo', 'Mick Schumacher', 'Carlos Sainz',
'Valtteri Bottas', 'Lance Stroll', 'George Russell',
'Lando Norris', 'Sebastian Vettel', 'Kimi Räikkönen',
'Charles Leclerc', 'Lewis Hamilton', 'Yuki Tsunoda',
'Max Verstappen', 'Pierre Gasly', 'Fernando Alonso',
'Sergio Pérez', 'Esteban Ocon', 'Antonio Giovinazzi',
'Nikita Mazepin','Nicholas Latifi']
data['active_driver'] = data['driver'].apply(lambda x: int(x in active_drivers))
data['active_constructor'] = data['constructor'].apply(lambda x: int(x in active_constructors))
data.head()
data.columns
```
## Directory to store Models
```
import os
if not os.path.exists('./models'):
os.mkdir('./models')
def position_index(x):
if x<4:
return 1
if x>10:
return 3
else :
return 2
```
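A quick illustration of what `position_index` does to the target: it maps a finishing position to one of three clusters (1 = positions 1-3, 2 = positions 4-10, 3 = outside the top ten). The class counts printed below depend on your data.
```
# Quick check of the target clustering produced by position_index.
print(position_index(1), position_index(7), position_index(15))  # 1 2 3
print(data['position'].apply(position_index).value_counts())
```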
## Model considering only Drivers
```
x_d= data[['GP_name','quali_pos','driver','age_at_gp_in_days','position','driver_confidence','active_driver']]
x_d = x_d[x_d['active_driver']==1]
sc = StandardScaler()
le = LabelEncoder()
x_d['GP_name'] = le.fit_transform(x_d['GP_name'])
x_d['driver'] = le.fit_transform(x_d['driver'])
x_d['age_at_gp_in_days'] = sc.fit_transform(x_d[['age_at_gp_in_days']])
X_d = x_d.drop(['position','active_driver'],1)
y_d = x_d['position'].apply(lambda x: position_index(x))
# cross validation for different models
models = [LogisticRegression(),DecisionTreeClassifier(),RandomForestClassifier(),SVC(),GaussianNB(),KNeighborsClassifier()]
names = ['LogisticRegression','DecisionTreeClassifier','RandomForestClassifier','SVC','GaussianNB','KNeighborsClassifier']
model_dict = dict(zip(models,names))
mean_results_dri = []
results_dri = []
name = []
for model in models:
cv = StratifiedKFold(n_splits=10,random_state=1,shuffle=True)
result = cross_val_score(model,X_d,y_d,cv=cv,scoring='accuracy')
mean_results_dri.append(result.mean())
results_dri.append(result)
name.append(model_dict[model])
print(f'{model_dict[model]} : {result.mean()}')
plt.figure(figsize=(15,10))
plt.boxplot(x=results_dri,labels=name)
plt.xlabel('Models')
plt.ylabel('Accuracy')
plt.title('Model performance comparison (drivers only)')
plt.show()
```
## Model considering only Constructors
```
x_c = data[['GP_name','quali_pos','constructor','position','constructor_reliability','active_constructor']]
x_c = x_c[x_c['active_constructor']==1]
sc = StandardScaler()
le = LabelEncoder()
x_c['GP_name'] = le.fit_transform(x_c['GP_name'])
x_c['constructor'] = le.fit_transform(x_c['constructor'])
X_c = x_c.drop(['position','active_constructor'], axis=1)
y_c = x_c['position'].apply(lambda x: position_index(x))
#cross validation for different models
models = [LogisticRegression(),DecisionTreeClassifier(),RandomForestClassifier(),SVC(),GaussianNB(),KNeighborsClassifier()]
names = ['LogisticRegression','DecisionTreeClassifier','RandomForestClassifier','SVC','GaussianNB','KNeighborsClassifier']
model_dict = dict(zip(models,names))
mean_results_const = []
results_const = []
name = []
for model in models:
cv = StratifiedKFold(n_splits=10,random_state=1,shuffle=True)
result = cross_val_score(model,X_c,y_c,cv=cv,scoring='accuracy')
mean_results_const.append(result.mean())
results_const.append(result)
name.append(model_dict[model])
print(f'{model_dict[model]} : {result.mean()}')
plt.figure(figsize=(15,10))
plt.boxplot(x=results_const,labels=name)
plt.xlabel('Models')
plt.ylabel('Accuracy')
plt.title('Model performance comparison (Teams only)')
plt.show()
```
# Model considering both Drivers and Constructors
```
cleaned_data = data[['GP_name','quali_pos','constructor','driver','position','driver_confidence','constructor_reliability','active_driver','active_constructor']]
cleaned_data = cleaned_data[(cleaned_data['active_driver']==1)&(cleaned_data['active_constructor']==1)]
cleaned_data.to_csv('./data_f1/cleaned_data.csv',index=False)
```
### Build your X dataset with the following columns:
- GP_name
- quali_pos (used to predict the final-position cluster: 1, 2, 3)
- constructor
- driver
- position
- driver confidence
- constructor_reliability
- active_driver
- active_constructor
### Filter the dataset for this "Driver + Constructor" model to keep only active drivers and constructors
### Create a StandardScaler and a LabelEncoder for the different features so that all features are on a similar scale
### Prepare X (the features dataset) and y (the predicted value).
In our case, we want to calculate the final-position cluster for each driver using the "position_index" function
```
# Implement X, y
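# One possible implementation (illustrative sketch, not the official solution):
# follow the same pattern as the drivers-only and constructors-only models,
# starting from the 'cleaned_data' frame built above.
x = cleaned_data.copy()
le = LabelEncoder()
x['GP_name'] = le.fit_transform(x['GP_name'])
x['constructor'] = le.fit_transform(x['constructor'])
x['driver'] = le.fit_transform(x['driver'])
# a StandardScaler could also be applied to continuous columns (e.g. quali_pos),
# as done for 'age_at_gp_in_days' earlier in the notebook
X = x.drop(['position', 'active_driver', 'active_constructor'], axis=1)
y = x['position'].apply(lambda pos: position_index(pos))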
```
### Apply the same list of ML algorithms for cross-validation of the different models
Store the mean accuracy values so they can be compared with the previous ML models.
```
mean_results = []
results = []
name = []
# cross validation for different models
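# Illustrative sketch mirroring the previous two sections; it reuses the
# 'models', 'names' and 'model_dict' objects defined earlier and the X, y
# prepared in the cell above.
for model in models:
    cv = StratifiedKFold(n_splits=10, random_state=1, shuffle=True)
    result = cross_val_score(model, X, y, cv=cv, scoring='accuracy')
    mean_results.append(result.mean())
    results.append(result)
    name.append(model_dict[model])
    print(f'{model_dict[model]} : {result.mean()}')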
```
### Use the same boxplot plotter used in the previous Models
```
# Implement boxplot
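# Illustrative sketch using the same plotting pattern as the previous models.
plt.figure(figsize=(15,10))
plt.boxplot(x=results, labels=name)
plt.xlabel('Models')
plt.ylabel('Accuracy')
plt.title('Model performance comparison (drivers + constructors)')
plt.show()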
```
# Comparing The 3 ML Models
Let's look at the mean scores of our three approaches.
```
lr = [mean_results[0],mean_results_dri[0],mean_results_const[0]]
dtc = [mean_results[1],mean_results_dri[1],mean_results_const[1]]
rfc = [mean_results[2],mean_results_dri[2],mean_results_const[2]]
svc = [mean_results[3],mean_results_dri[3],mean_results_const[3]]
gnb = [mean_results[4],mean_results_dri[4],mean_results_const[4]]
knn = [mean_results[5],mean_results_dri[5],mean_results_const[5]]
font1 = {
'family':'serif',
'color':'black',
'weight':'normal',
'size':18
}
font2 = {
'family':'serif',
'color':'black',
'weight':'bold',
'size':12
}
x_ax = np.arange(3)
plt.figure(figsize=(30,15))
bar1 = plt.bar(x_ax,lr,width=0.1,align='center', label="Logistic Regression")
bar2 = plt.bar(x_ax+0.1,dtc,width=0.1,align='center', label="DecisionTree")
bar3 = plt.bar(x_ax+0.2,rfc,width=0.1,align='center', label="RandomForest")
bar4 = plt.bar(x_ax+0.3,svc,width=0.1,align='center', label="SVC")
bar5 = plt.bar(x_ax+0.4,gnb,width=0.1,align='center', label="GaussianNB")
bar6 = plt.bar(x_ax+0.5,knn,width=0.1,align='center', label="KNN")
plt.text(0.05,1,'CV score for combined data',fontdict=font1)
plt.text(1.04,1,'CV score only driver data',fontdict=font1)
plt.text(2,1,'CV score only team data',fontdict=font1)
for bar in bar1.patches:
yval = bar.get_height()
plt.text(bar.get_x()+0.01,yval+0.01,f'{round(yval*100,2)}%',fontdict=font2)
for bar in bar2.patches:
yval = bar.get_height()
plt.text(bar.get_x()+0.01,yval+0.01,f'{round(yval*100,2)}%',fontdict=font2)
for bar in bar3.patches:
yval = bar.get_height()
plt.text(bar.get_x()+0.01,yval+0.01,f'{round(yval*100,2)}%',fontdict=font2)
for bar in bar4.patches:
yval = bar.get_height()
plt.text(bar.get_x()+0.01,yval+0.01,f'{round(yval*100,2)}%',fontdict=font2)
for bar in bar5.patches:
yval = bar.get_height()
plt.text(bar.get_x()+0.01,yval+0.01,f'{round(yval*100,2)}%',fontdict=font2)
for bar in bar6.patches:
yval = bar.get_height()
plt.text(bar.get_x()+0.01,yval+0.01,f'{round(yval*100,2)}%',fontdict=font2)
plt.legend(loc='center', bbox_to_anchor=(0.5, -0.10), shadow=False, ncol=6)
plt.show()
end = time.time()
import datetime
str(datetime.timedelta(seconds=(end - start)))
print(str(end - start)+" seconds")
```
* Compare the performance of different portfolio optimizers on problems of different sizes;
* The results below mainly compare the performance of ``alphamind`` against other optimizers available in ``python``; we use the optimizers from ``cvxopt`` wherever possible, and fall back to ``scipy`` otherwise;
* Since ``scipy`` performs far too slowly on ``ashare_ex``, its results on that universe are generally ignored;
* All timings are reported in milliseconds.
* Please set the environment variable `DB_URI` to point to the database.
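All of the timing numbers below are collected with the same pattern: each builder/solver call is wrapped in `timeit.timeit` with `number = 1` and converted to milliseconds, roughly as in this sketch (`some_builder` is a placeholder standing in for `linear_builder`, `mean_variance_builder` or `prob.solve(...)`):
```
import timeit

def some_builder():
    # stand-in for the actual optimizer call being benchmarked
    return sum(range(1000))

number = 1
elapsed_ms = timeit.timeit("some_builder()", number=number,
                           globals=globals()) / number * 1000
print(f"{elapsed_ms:.3f} ms")
```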
```
import os
import timeit
import numpy as np
import pandas as pd
import cvxpy
from alphamind.api import *
from alphamind.portfolio.linearbuilder import linear_builder
from alphamind.portfolio.meanvariancebuilder import mean_variance_builder
from alphamind.portfolio.meanvariancebuilder import target_vol_builder
pd.options.display.float_format = '{:,.2f}'.format
```
## 0. Data Preparation
------------------
```
ref_date = '2018-02-08'
u_names = ['sh50', 'hs300', 'zz500', 'zz800', 'zz1000', 'ashare_ex']
b_codes = [16, 300, 905, 906, 852, None]
risk_model = 'short'
factor = 'EPS'
lb = 0.0
ub = 0.1
data_source = os.environ['DB_URI']
engine = SqlEngine(data_source)
universes = [Universe(u_name) for u_name in u_names]
codes_set = [engine.fetch_codes(ref_date, universe=universe) for universe in universes]
data_set = [engine.fetch_data(ref_date, factor, codes, benchmark=b_code, risk_model=risk_model) for codes, b_code in zip(codes_set, b_codes)]
```
## 1. Linear Optimization (with linear constraints)
---------------------------------
```
df = pd.DataFrame(columns=u_names, index=['cvxpy', 'alphamind'])
number = 1
for u_name, sample_data in zip(u_names, data_set):
factor_data = sample_data['factor']
er = factor_data[factor].values
n = len(er)
lbound = np.ones(n) * lb
ubound = np.ones(n) * ub
risk_constraints = np.ones((n, 1))
risk_target = (np.array([1.]), np.array([1.]))
status, y, x1 = linear_builder(er, lbound, ubound, risk_constraints, risk_target)
elasped_time1 = timeit.timeit("linear_builder(er, lbound, ubound, risk_constraints, risk_target)", number=number, globals=globals()) / number * 1000
A_eq = risk_constraints.T
b_eq = np.array([1.])
w = cvxpy.Variable(n)
curr_risk_exposure = w * risk_constraints
constraints = [w >= lbound,
w <= ubound,
curr_risk_exposure == risk_target[0]]
objective = cvxpy.Minimize(-w.T * er)
prob = cvxpy.Problem(objective, constraints)
prob.solve(solver='ECOS')
elasped_time2 = timeit.timeit("prob.solve(solver='ECOS')",
number=number, globals=globals()) / number * 1000
np.testing.assert_almost_equal(x1 @ er, np.array(w.value).flatten() @ er, 4)
df.loc['alphamind', u_name] = elasped_time1
df.loc['cvxpy', u_name] = elasped_time2
alpha_logger.info(f"{u_name} is finished")
df
prob.value
```
## 2. Linear Optimization (with L1 constraints)
-----------------------
```
from cvxpy import pnorm
df = pd.DataFrame(columns=u_names, index=['cvxpy', 'alphamind (clp simplex)', 'alphamind (clp interior)', 'alphamind (ecos)'])
turn_over_target = 0.5
number = 1
for u_name, sample_data in zip(u_names, data_set):
factor_data = sample_data['factor']
er = factor_data[factor].values
n = len(er)
lbound = np.ones(n) * lb
ubound = np.ones(n) * ub
if 'weight' in factor_data:
current_position = factor_data.weight.values
else:
current_position = np.ones_like(er) / len(er)
risk_constraints = np.ones((len(er), 1))
risk_target = (np.array([1.]), np.array([1.]))
status, y, x1 = linear_builder(er,
lbound,
ubound,
risk_constraints,
risk_target,
turn_over_target=turn_over_target,
current_position=current_position,
method='interior')
elasped_time1 = timeit.timeit("""linear_builder(er,
lbound,
ubound,
risk_constraints,
risk_target,
turn_over_target=turn_over_target,
current_position=current_position,
method='interior')""", number=number, globals=globals()) / number * 1000
w = cvxpy.Variable(n)
curr_risk_exposure = risk_constraints.T @ w
constraints = [w >= lbound,
w <= ubound,
curr_risk_exposure == risk_target[0],
pnorm(w - current_position, 1) <= turn_over_target]
objective = cvxpy.Minimize(-w.T * er)
prob = cvxpy.Problem(objective, constraints)
prob.solve(solver='ECOS')
elasped_time2 = timeit.timeit("prob.solve(solver='ECOS')",
number=number, globals=globals()) / number * 1000
status, y, x2 = linear_builder(er,
lbound,
ubound,
risk_constraints,
risk_target,
turn_over_target=turn_over_target,
current_position=current_position,
method='simplex')
elasped_time3 = timeit.timeit("""linear_builder(er,
lbound,
ubound,
risk_constraints,
risk_target,
turn_over_target=turn_over_target,
current_position=current_position,
method='simplex')""", number=number, globals=globals()) / number * 1000
status, y, x3 = linear_builder(er,
lbound,
ubound,
risk_constraints,
risk_target,
turn_over_target=turn_over_target,
current_position=current_position,
method='ecos')
elasped_time4 = timeit.timeit("""linear_builder(er,
lbound,
ubound,
risk_constraints,
risk_target,
turn_over_target=turn_over_target,
current_position=current_position,
method='ecos')""", number=number, globals=globals()) / number * 1000
np.testing.assert_almost_equal(x1 @ er, np.array(w.value).flatten() @ er, 4)
np.testing.assert_almost_equal(x2 @ er, np.array(w.value).flatten() @ er, 4)
np.testing.assert_almost_equal(x3 @ er, np.array(w.value).flatten() @ er, 4)
df.loc['alphamind (clp interior)', u_name] = elasped_time1
df.loc['alphamind (clp simplex)', u_name] = elasped_time3
df.loc['alphamind (ecos)', u_name] = elasped_time4
df.loc['cvxpy', u_name] = elasped_time2
alpha_logger.info(f"{u_name} is finished")
df
```
## 3. Mean-Variance Optimization (unconstrained)
-----------------------
```
from cvxpy import *
df = pd.DataFrame(columns=u_names, index=['cvxpy', 'alphamind'])
number = 1
for u_name, sample_data in zip(u_names, data_set):
all_styles = risk_styles + industry_styles + ['COUNTRY']
factor_data = sample_data['factor']
risk_cov = sample_data['risk_cov'][all_styles].values
risk_exposure = factor_data[all_styles].values
special_risk = factor_data.srisk.values
sec_cov = risk_exposure @ risk_cov @ risk_exposure.T / 10000 + np.diag(special_risk ** 2) / 10000
er = factor_data[factor].values
n = len(er)
bm = np.zeros(n)
lbound = -np.ones(n) * np.inf
ubound = np.ones(n) * np.inf
risk_model = dict(cov=None, factor_cov=risk_cov/10000., factor_loading=risk_exposure, idsync=(special_risk**2)/10000.)
status, y, x1 = mean_variance_builder(er,
risk_model,
bm,
lbound,
ubound,
None,
None,
lam=1)
elasped_time1 = timeit.timeit("""mean_variance_builder(er,
risk_model,
bm,
lbound,
ubound,
None,
None,
lam=1)""",
number=number, globals=globals()) / number * 1000
w = cvxpy.Variable(n)
risk = sum_squares(multiply(special_risk / 100., w)) + quad_form((w.T * risk_exposure).T, risk_cov / 10000.)
objective = cvxpy.Minimize(-w.T * er + 0.5 * risk)
prob = cvxpy.Problem(objective)
prob.solve(solver='ECOS')
elasped_time2 = timeit.timeit("prob.solve(solver='ECOS')",
number=number, globals=globals()) / number * 1000
u1 = -x1 @ er + 0.5 * x1 @ sec_cov @ x1
x2 = np.array(w.value).flatten()
u2 = -x2 @ er + 0.5 * x2 @ sec_cov @ x2
np.testing.assert_array_almost_equal(u1, u2, 4)
df.loc['alphamind', u_name] = elasped_time1
df.loc['cvxpy', u_name] = elasped_time2
alpha_logger.info(f"{u_name} is finished")
df
```
## 4. Mean-Variance Optimization (with box constraints)
---------------
```
df = pd.DataFrame(columns=u_names, index=['cvxpy', 'alphamind'])
number = 1
for u_name, sample_data in zip(u_names, data_set):
all_styles = risk_styles + industry_styles + ['COUNTRY']
factor_data = sample_data['factor']
risk_cov = sample_data['risk_cov'][all_styles].values
risk_exposure = factor_data[all_styles].values
special_risk = factor_data.srisk.values
sec_cov = risk_exposure @ risk_cov @ risk_exposure.T / 10000 + np.diag(special_risk ** 2) / 10000
er = factor_data[factor].values
n = len(er)
bm = np.zeros(n)
lbound = np.zeros(n)
ubound = np.ones(n) * 0.1
risk_model = dict(cov=None, factor_cov=risk_cov/10000., factor_loading=risk_exposure, idsync=(special_risk**2)/10000.)
status, y, x1 = mean_variance_builder(er,
risk_model,
bm,
lbound,
ubound,
None,
None)
elasped_time1 = timeit.timeit("""mean_variance_builder(er,
risk_model,
bm,
lbound,
ubound,
None,
None)""",
number=number, globals=globals()) / number * 1000
w = cvxpy.Variable(n)
risk = sum_squares(multiply(special_risk / 100., w)) + quad_form((w.T * risk_exposure).T, risk_cov / 10000.)
objective = cvxpy.Minimize(-w.T * er + 0.5 * risk)
constraints = [w >= lbound,
w <= ubound]
prob = cvxpy.Problem(objective, constraints)
prob.solve(solver='ECOS')
elasped_time2 = timeit.timeit("prob.solve(solver='ECOS')",
number=number, globals=globals()) / number * 1000
u1 = -x1 @ er + 0.5 * x1 @ sec_cov @ x1
x2 = np.array(w.value).flatten()
u2 = -x2 @ er + 0.5 * x2 @ sec_cov @ x2
np.testing.assert_array_almost_equal(u1, u2, 4)
df.loc['alphamind', u_name] = elasped_time1
df.loc['cvxpy', u_name] = elasped_time2
alpha_logger.info(f"{u_name} is finished")
df
```
## 5. Mean-Variance Optimization (with box and linear constraints)
----------------
```
df = pd.DataFrame(columns=u_names, index=['cvxpy', 'alphamind'])
number = 1
for u_name, sample_data in zip(u_names, data_set):
all_styles = risk_styles + industry_styles + ['COUNTRY']
factor_data = sample_data['factor']
risk_cov = sample_data['risk_cov'][all_styles].values
risk_exposure = factor_data[all_styles].values
special_risk = factor_data.srisk.values
sec_cov = risk_exposure @ risk_cov @ risk_exposure.T / 10000 + np.diag(special_risk ** 2) / 10000
er = factor_data[factor].values
n = len(er)
bm = np.zeros(n)
lbound = np.zeros(n)
ubound = np.ones(n) * 0.1
risk_constraints = np.ones((len(er), 1))
risk_target = (np.array([1.]), np.array([1.]))
risk_model = dict(cov=None, factor_cov=risk_cov/10000., factor_loading=risk_exposure, idsync=(special_risk**2)/10000.)
status, y, x1 = mean_variance_builder(er,
risk_model,
bm,
lbound,
ubound,
risk_constraints,
risk_target)
elasped_time1 = timeit.timeit("""mean_variance_builder(er,
risk_model,
bm,
lbound,
ubound,
risk_constraints,
risk_target)""",
number=number, globals=globals()) / number * 1000
w = cvxpy.Variable(n)
risk = sum_squares(multiply(special_risk / 100., w)) + quad_form((w.T * risk_exposure).T, risk_cov / 10000.)
objective = cvxpy.Minimize(-w.T * er + 0.5 * risk)
curr_risk_exposure = risk_constraints.T @ w
constraints = [w >= lbound,
w <= ubound,
curr_risk_exposure == risk_target[0]]
prob = cvxpy.Problem(objective, constraints)
prob.solve(solver='ECOS')
elasped_time2 = timeit.timeit("prob.solve(solver='ECOS')",
number=number, globals=globals()) / number * 1000
u1 = -x1 @ er + 0.5 * x1 @ sec_cov @ x1
x2 = np.array(w.value).flatten()
u2 = -x2 @ er + 0.5 * x2 @ sec_cov @ x2
np.testing.assert_array_almost_equal(u1, u2, 4)
df.loc['alphamind', u_name] = elasped_time1
df.loc['cvxpy', u_name] = elasped_time2
alpha_logger.info(f"{u_name} is finished")
df
```
## 6. Linear Optimization (with quadratic constraints)
-------------------------
```
df = pd.DataFrame(columns=u_names, index=['cvxpy', 'alphamind'])
number = 1
target_vol = 0.5
for u_name, sample_data in zip(u_names, data_set):
all_styles = risk_styles + industry_styles + ['COUNTRY']
factor_data = sample_data['factor']
risk_cov = sample_data['risk_cov'][all_styles].values
risk_exposure = factor_data[all_styles].values
special_risk = factor_data.srisk.values
sec_cov = risk_exposure @ risk_cov @ risk_exposure.T / 10000 + np.diag(special_risk ** 2) / 10000
er = factor_data[factor].values
n = len(er)
if 'weight' in factor_data:
bm = factor_data.weight.values
else:
bm = np.ones_like(er) / n
lbound = np.zeros(n)
ubound = np.ones(n) * 0.1
risk_constraints = np.ones((n, 1))
risk_target = (np.array([bm.sum()]), np.array([bm.sum()]))
risk_model = dict(cov=None, factor_cov=risk_cov/10000., factor_loading=risk_exposure, idsync=(special_risk**2)/10000.)
status, y, x1 = target_vol_builder(er,
risk_model,
bm,
lbound,
ubound,
risk_constraints,
risk_target,
vol_target=target_vol)
elasped_time1 = timeit.timeit("""target_vol_builder(er,
risk_model,
bm,
lbound,
ubound,
risk_constraints,
risk_target,
vol_target=target_vol)""",
number=number, globals=globals()) / number * 1000
w = cvxpy.Variable(n)
risk = sum_squares(multiply(special_risk / 100., w)) + quad_form((w.T * risk_exposure).T, risk_cov / 10000.)
objective = cvxpy.Minimize(-w.T * er)
curr_risk_exposure = risk_constraints.T @ w
constraints = [w >= lbound,
w <= ubound,
curr_risk_exposure == risk_target[0],
risk <= target_vol * target_vol]
prob = cvxpy.Problem(objective, constraints)
prob.solve(solver='ECOS')
elasped_time2 = timeit.timeit("prob.solve(solver='ECOS')",
number=number, globals=globals()) / number * 1000
u1 = -x1 @ er
x2 = np.array(w.value).flatten()
u2 = -x2 @ er
np.testing.assert_array_almost_equal(u1, u2, 4)
df.loc['alphamind', u_name] = elasped_time1
df.loc['cvxpy', u_name] = elasped_time2
alpha_logger.info(f"{u_name} is finished")
df
```
<a href="https://colab.research.google.com/github/Abhishekauti21/dsmp-pre-work/blob/master/practice_project.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
class test:
def __init__(self,a):
self.a=a
def display(self):
print(self.a)
obj=test(10)
obj.display()
def f1():
x=100
print(x)
x=+1
f1()
area = { 'living' : [400, 450], 'living' : [650, 800], 'kitchen' : [300, 250], 'garage' : [250, 0]}
print (area['living'])
List_1=[2,6,7,8]
List_2=[2,6,7,8]
print(List_1[-2] + List_2[2])
d = {0: 'a', 1: 'b', 2: 'c'}
for x, y in d.items():
print(x, y)
Numbers=[10,5,7,8,9,5]
print(max(Numbers)-min(Numbers))
fo = open("foo.txt", "r+")
print("Name of the file: ", fo.name)
# Assuming file has following 5 lines
# This is 1st line
# This is 2nd line
# This is 3rd line
# This is 4th line
# This is 5th line
for index in range(5):
line = fo.readline()
print("Line No {} - {}".format(index, line))
#Close opened file
fo.close()
x = "abcdef"
for i in x:
    print(i, end=" ")
def cube(x):
return x * x * x
x = cube(3)
print (x)
print(((True) or (False) and (False) or (False)))
x1=int('16')
x2=8 + 8
x3= (4**2)
print(x1 is x2 is x3)
Word = 'warrior knights'
A = Word[9:14]
B = Word[-13:-16:-1]
B+A
def to_upper(k):
return k.upper()
x = ['ab', 'cd']
print(list(map(to_upper, x)))
my_string = "hello world"
k = [(i.upper(), len(i)) for i in my_string]
print(k)
from csv import reader
def explore_data(dataset, start, end, rows_and_columns=False):
"""Explore the elements of a list.
    Print the elements of a list starting from the index 'start' (included) up to the index 'end' (excluded).
Keyword arguments:
dataset -- list of which we want to see the elements
start -- index of the first element we want to see, this is included
end -- index of the stopping element, this is excluded
    rows_and_columns -- this parameter is optional while calling the function. It takes binary values, either True or False. If True, print the dimensions of the list; otherwise don't.
"""
dataset_slice = dataset[start:end]
for row in dataset_slice:
print(row)
print('\n') # adds a new (empty) line between rows
if rows_and_columns:
print('Number of rows:', len(dataset))
print('Number of columns:', len(dataset[0]))
def duplicate_and_unique_movies(dataset, index_):
"""Check the duplicate and unique entries.
    We have a nested list. This function checks whether the rows in the list are unique or duplicated based on the element at index 'index_'.
    It prints the number of duplicate entries, along with some examples of duplicated entries.
Keyword arguments:
dataset -- two dimensional list which we want to explore
    index_ -- column index at which the element in each row would be checked for duplicates
"""
duplicate = []
unique = []
for movie in dataset:
name = movie[index_]
if name in unique:
duplicate.append(name)
else:
unique.append(name)
print('Number of duplicate Movies:', len(duplicate))
print('\n')
print('Examples of duplicate Movies:', duplicate[:15])
def movies_lang(dataset, index_, lang_):
"""Extract the movies of a particular language.
    Of all the movies available in all languages, this function extracts all the movies in a particular language.
    Once you have extracted the movies, call explore_data() to print the first few rows.
Keyword arguments:
dataset -- list containing the details of the movie
index_ -- index which is to be compared for langauges
lang_ -- desired language for which we want to filter out the movies
Returns:
movies_ -- list with details of the movies in selected language
"""
movies_ = []
    for movie in dataset:
lang = movie[index_]
if lang == lang_:
movies_.append(movie)
print("Examples of Movies in English Language:")
explore_data(movies_, 0, 3, True)
return movies_
def rate_bucket(dataset, rate_low, rate_high):
"""Extract the movies within the specified ratings.
    This function extracts all the movies that have a rating between rate_low and rate_high.
    Once you have extracted the movies, call explore_data() to print the first few rows.
Keyword arguments:
dataset -- list containing the details of the movie
rate_low -- lower range of rating
rate_high -- higher range of rating
Returns:
rated_movies -- list of the details of the movies with required ratings
"""
rated_movies = []
for movie in dataset:
vote_avg = float(movie[-4])
if ((vote_avg >= rate_low) & (vote_avg <= rate_high)):
rated_movies.append(movie)
print("Examples of Movies in required rating bucket:")
explore_data(rated_movies, 0, 3, True)
return rated_movies
# Read the data file and store it as a list 'movies'
opened_file = open(path, encoding="utf8")  # 'path' is assumed to point to the movies CSV file
read_file = reader(opened_file)
movies = list(read_file)
# The first row is header. Extract and store it in 'movies_header'.
movies_header = movies[0]
print("Movies Header:\n", movies_header)
# Subset the movies dataset such that the header is removed from the list and store it back in movies
movies = movies[1:]
# Delete wrong data
# Explore the row #4553. You will see that as apart from the id, description, status and title, no other information is available.
# Hence drop this row.
print("Entry at index 4553:")
explore_data(movies, 4553, 4554)
del movies[4553]
# Using explore_data() with appropriate parameters, view the details of the first 5 movies.
print("First 5 Entries:")
explore_data(movies, 0, 5, True)
# Our dataset might have more than one entry for a movie. Call duplicate_and_unique_movies() with index of the name to check the same.
duplicate_and_unique_movies(movies, 13)
# We saw that there are 3 movies for which there are multiple entries.
# Create a dictionary, 'reviews_max' that will have the name of the movie as key, and the maximum number of reviews as values.
reviews_max = {}
for movie in movies:
name = movie[13]
n_reviews = float(movie[12])
if name in reviews_max and reviews_max[name] < n_reviews:
reviews_max[name] = n_reviews
elif name not in reviews_max:
reviews_max[name] = n_reviews
len(reviews_max)
# Create a list 'movies_clean', which will filter out the duplicate movies and contain the rows with the maximum number of reviews for duplicate movies, as stored in 'reviews_max'.
movies_clean = []
already_added = []
for movie in movies:
name = movie[13]
n_reviews = float(movie[12])
if (reviews_max[name] == n_reviews) and (name not in already_added):
movies_clean.append(movie)
already_added.append(name)
len(movies_clean)
# Calling movies_lang(), extract all the english movies and store it in movies_en.
movies_en = movies_lang(movies_clean, 3, 'en')
# Call the rate_bucket function to see the movies with rating higher than 8.
high_rated_movies = rate_bucket(movies_en, 8, 10)
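# Illustrative follow-up (not part of the original script): peek at the names of
# the high-rated English movies; index 13 holds the movie name in this dataset.
high_rated_titles = [movie[13] for movie in high_rated_movies]
print(high_rated_titles[:10])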
```
# Detecting COVID-19 with Chest X Ray using PyTorch
Image classification of chest X-rays into one of two classes: Non-Covid and Covid
Dataset from [COVID-19 Radiography Dataset](https://www.kaggle.com/tawsifurrahman/covid19-radiography-database) on Kaggle
# Importing Libraries
```
from google.colab import drive
drive.mount('/gdrive')
%matplotlib inline
import os
import shutil
import copy
import random
import torch
import torch.nn as nn
import torchvision
import torch.optim as optim
from torch.optim import lr_scheduler
import numpy as np
import seaborn as sns
import time
from sklearn.metrics import confusion_matrix
from PIL import Image
import matplotlib.pyplot as plt
torch.manual_seed(0)
print('Using PyTorch version', torch.__version__)
```
# Preparing Training and Test Sets
```
class_names = ['Non-Covid', 'Covid']
root_dir = '/gdrive/My Drive/Research_Documents_completed/Data/Data/'
source_dirs = ['non', 'covid']
```
# Creating Custom Dataset
```
class ChestXRayDataset(torch.utils.data.Dataset):
def __init__(self, image_dirs, transform):
def get_images(class_name):
images = [x for x in os.listdir(image_dirs[class_name]) if x.lower().endswith('png') or x.lower().endswith('jpg')]
print(f'Found {len(images)} {class_name} examples')
return images
self.images = {}
self.class_names = ['Non-Covid', 'Covid']
for class_name in self.class_names:
self.images[class_name] = get_images(class_name)
self.image_dirs = image_dirs
self.transform = transform
def __len__(self):
return sum([len(self.images[class_name]) for class_name in self.class_names])
def __getitem__(self, index):
class_name = random.choice(self.class_names)
index = index % len(self.images[class_name])
image_name = self.images[class_name][index]
image_path = os.path.join(self.image_dirs[class_name], image_name)
image = Image.open(image_path).convert('RGB')
return self.transform(image), self.class_names.index(class_name)
```
# Image Transformations
```
train_transform = torchvision.transforms.Compose([
torchvision.transforms.Resize(size=(224, 224)),
torchvision.transforms.RandomHorizontalFlip(),
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
])
test_transform = torchvision.transforms.Compose([
torchvision.transforms.Resize(size=(224, 224)),
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])
```
# Prepare DataLoader
```
train_dirs = {
'Non-Covid': '/gdrive/My Drive/Research_Documents_completed/Data/Data/non/',
'Covid': '/gdrive/My Drive/Research_Documents_completed/Data/Data/covid/'
}
#train_dirs = {
# 'Non-Covid': '/gdrive/My Drive/Data/Data/non/',
# 'Covid': '/gdrive/My Drive/Data/Data/covid/'
#}
train_dataset = ChestXRayDataset(train_dirs, train_transform)
test_dirs = {
'Non-Covid': '/gdrive/My Drive/Research_Documents_completed/Data/Data/test/non/',
'Covid': '/gdrive/My Drive/Research_Documents_completed/Data/Data/test/covid/'
}
test_dataset = ChestXRayDataset(test_dirs, test_transform)
batch_size = 25
dl_train = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
dl_test = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size, shuffle=True)
print(dl_train)
print('Number of training batches', len(dl_train))
print('Number of test batches', len(dl_test))
```
# Data Visualization
```
class_names = train_dataset.class_names
def show_images(images, labels, preds):
plt.figure(figsize=(30, 20))
for i, image in enumerate(images):
plt.subplot(1, 25, i + 1, xticks=[], yticks=[])
image = image.numpy().transpose((1, 2, 0))
mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])
image = image * std + mean
image = np.clip(image, 0., 1.)
plt.imshow(image)
col = 'green'
if preds[i] != labels[i]:
col = 'red'
plt.xlabel(f'{class_names[int(labels[i].numpy())]}')
plt.ylabel(f'{class_names[int(preds[i].numpy())]}', color=col)
plt.tight_layout()
plt.show()
images, labels = next(iter(dl_train))
show_images(images, labels, labels)
images, labels = next(iter(dl_test))
show_images(images, labels, labels)
```
# Creating the Model
```
resnet18 = torchvision.models.resnet18(pretrained=True)
print(resnet18)
resnet18.fc = torch.nn.Linear(in_features=512, out_features=2)
loss_fn = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(resnet18.parameters(), lr=3e-5)
print(resnet18)
def show_preds():
resnet18.eval()
images, labels = next(iter(dl_test))
outputs = resnet18(images)
_, preds = torch.max(outputs, 1)
show_images(images, labels, preds)
show_preds()
```
# Training the Model
```
def train(epochs):
best_model_wts = copy.deepcopy(resnet18.state_dict())
b_acc = 0.0
t_loss = []
t_acc = []
avg_t_loss=[]
avg_t_acc=[]
v_loss = []
v_acc=[]
avg_v_loss = []
avg_v_acc = []
ep = []
print('Starting training..')
for e in range(0, epochs):
ep.append(e+1)
print('='*20)
print(f'Starting epoch {e + 1}/{epochs}')
print('='*20)
train_loss = 0.
val_loss = 0.
train_accuracy = 0
total_train = 0
correct_train = 0
resnet18.train() # set model to training phase
for train_step, (images, labels) in enumerate(dl_train):
optimizer.zero_grad()
outputs = resnet18(images)
_, pred = torch.max(outputs, 1)
loss = loss_fn(outputs, labels)
loss.backward()
optimizer.step()
            train_loss += loss.item()
            # report a running average instead of repeatedly dividing the accumulator
            avg_train_loss = train_loss / (train_step + 1)
            _, predicted = torch.max(outputs, 1)
            total_train += labels.nelement()
            correct_train += sum((predicted == labels).numpy())
            train_accuracy = correct_train / total_train
            t_loss.append(avg_train_loss)
            t_acc.append(train_accuracy)
            if train_step % 20 == 0:
                print('Evaluating at step', train_step)
                print(f'Training Loss: {avg_train_loss:.4f}, Training Accuracy: {train_accuracy:.4f}')
accuracy = 0.
resnet18.eval() # set model to eval phase
for val_step, (images, labels) in enumerate(dl_test):
outputs = resnet18(images)
loss = loss_fn(outputs, labels)
val_loss += loss.item()
_, preds = torch.max(outputs, 1)
accuracy += sum((preds == labels).numpy())
val_loss /= (val_step + 1)
accuracy = accuracy/len(test_dataset)
print(f'Validation Loss: {val_loss:.4f}, Validation Accuracy: {accuracy:.4f}')
v_loss.append(val_loss)
v_acc.append(accuracy)
show_preds()
resnet18.train()
if accuracy > b_acc:
b_acc = accuracy
avg_t_loss.append(sum(t_loss)/len(t_loss))
avg_v_loss.append(sum(v_loss)/len(v_loss))
avg_t_acc.append(sum(t_acc)/len(t_acc))
avg_v_acc.append(sum(v_acc)/len(v_acc))
best_model_wts = copy.deepcopy(resnet18.state_dict())
print('Best validation Accuracy: {:4f}'.format(b_acc))
print('Training complete..')
plt.plot(ep, avg_t_loss, 'g', label='Training loss')
plt.plot(ep, avg_v_loss, 'b', label='validation loss')
plt.title('Training and Validation loss for each epoch')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.savefig('/gdrive/My Drive/Research_Documents_completed/Resnet18_completed/resnet18_loss.png')
plt.show()
plt.plot(ep, avg_t_acc, 'g', label='Training accuracy')
plt.plot(ep, avg_v_acc, 'b', label='validation accuracy')
plt.title('Training and Validation Accuracy for each epoch')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.savefig('/gdrive/My Drive/Research_Documents_completed/Resnet18_completed/resnet18_accuarcy.png')
plt.show()
torch.save(resnet18.state_dict(),'/gdrive/My Drive/Research_Documents_completed/Resnet18_completed/resnet18.pt')
%%time
train(epochs=5)
```
# Final Results
- Validation loss and training loss vs. epoch
- Validation accuracy and training accuracy vs. epoch
- Best validation accuracy achieved during training
```
show_preds()
```
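If you later want to reuse the trained network without retraining it, a minimal sketch like the following should work; it assumes the checkpoint path used in `train()` above and the existing `dl_test` loader:
```
# Rebuild the architecture, load the saved weights, and run one inference batch.
model = torchvision.models.resnet18(pretrained=False)
model.fc = torch.nn.Linear(in_features=512, out_features=2)
state_dict = torch.load('/gdrive/My Drive/Research_Documents_completed/Resnet18_completed/resnet18.pt')
model.load_state_dict(state_dict)
model.eval()

with torch.no_grad():
    images, labels = next(iter(dl_test))
    _, preds = torch.max(model(images), 1)
print(preds[:10])
```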
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/geemap/tree/master/examples/notebooks/geemap_and_ipyleaflet.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/geemap/blob/master/examples/notebooks/geemap_and_ipyleaflet.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/geemap/blob/master/examples/notebooks/geemap_and_ipyleaflet.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API and geemap
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.
The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.
**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
```
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as emap
except:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
```
## Create an interactive map
```
import geemap
Map = geemap.Map(center=(40, -100), zoom=4)
Map.add_minimap(position='bottomright')
Map
```
## Add tile layers
For example, you can add a Google Maps tile layer:
```
url = 'https://mt1.google.com/vt/lyrs=m&x={x}&y={y}&z={z}'
Map.add_tile_layer(url, name='Google Map', attribution='Google')
```
Add Google Terrain tile layer:
```
url = 'https://mt1.google.com/vt/lyrs=p&x={x}&y={y}&z={z}'
Map.add_tile_layer(url, name='Google Terrain', attribution='Google')
```
## Add WMS layers
More WMS layers can be found at <https://viewer.nationalmap.gov/services/>.
For example, you can add NAIP imagery.
```
url = 'https://services.nationalmap.gov/arcgis/services/USGSNAIPImagery/ImageServer/WMSServer?'
Map.add_wms_layer(url=url, layers='0', name='NAIP Imagery', format='image/png')
```
Add USGS 3DEP Elevation Dataset
```
url = 'https://elevation.nationalmap.gov/arcgis/services/3DEPElevation/ImageServer/WMSServer?'
Map.add_wms_layer(url=url, layers='3DEPElevation:None', name='3DEP Elevation', format='image/png')
```
## Capture user inputs
```
import geemap
from ipywidgets import Label
from ipyleaflet import Marker
Map = geemap.Map(center=(40, -100), zoom=4)
label = Label()
display(label)
coordinates = []
def handle_interaction(**kwargs):
latlon = kwargs.get('coordinates')
if kwargs.get('type') == 'mousemove':
label.value = str(latlon)
elif kwargs.get('type') == 'click':
coordinates.append(latlon)
Map.add_layer(Marker(location=latlon))
Map.on_interaction(handle_interaction)
Map
print(coordinates)
```
## A simpler way for capturing user inputs
```
import geemap
Map = geemap.Map(center=(40, -100), zoom=4)
cluster = Map.listening(event='click', add_marker=True)
Map
# Get the last mouse clicked coordinates
Map.last_click
# Get all the mouse clicked coordinates
Map.all_clicks
```
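As a small follow-up sketch (not in the original notebook), the captured coordinates can be turned into an Earth Engine geometry; this assumes `Map.last_click` holds a `(lat, lon)` pair as captured above and that `ee` has already been initialized:
```
if Map.last_click:
    lat, lon = Map.last_click
    point = ee.Geometry.Point([lon, lat])  # Earth Engine expects [lon, lat]
    print(point.getInfo())
```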
## SplitMap control
```
import geemap
from ipyleaflet import *
Map = geemap.Map(center=(47.50, -101), zoom=7)
right_layer = WMSLayer(
url = 'https://ndgishub.nd.gov/arcgis/services/Imagery/AerialImage_ND_2017_CIR/ImageServer/WMSServer?',
layers = 'AerialImage_ND_2017_CIR',
name = 'AerialImage_ND_2017_CIR',
format = 'image/png'
)
left_layer = WMSLayer(
url = 'https://ndgishub.nd.gov/arcgis/services/Imagery/AerialImage_ND_2018_CIR/ImageServer/WMSServer?',
layers = 'AerialImage_ND_2018_CIR',
name = 'AerialImage_ND_2018_CIR',
format = 'image/png'
)
control = SplitMapControl(left_layer=left_layer, right_layer=right_layer)
Map.add_control(control)
Map.add_control(LayersControl(position='topright'))
Map.add_control(FullScreenControl())
Map
import geemap
Map = geemap.Map()
Map.split_map(left_layer='HYBRID', right_layer='ESRI')
Map
```
## **Nigerian Music scraped from Spotify - an analysis**
Clustering is a type of [Unsupervised Learning](https://wikipedia.org/wiki/Unsupervised_learning) that presumes that a dataset is unlabelled or that its inputs are not matched with predefined outputs. It uses various algorithms to sort through unlabeled data and provide groupings according to patterns it discerns in the data.
[**Pre-lecture quiz**](https://white-water-09ec41f0f.azurestaticapps.net/quiz/27/)
### **Introduction**
[Clustering](https://link.springer.com/referenceworkentry/10.1007%2F978-0-387-30164-8_124) is very useful for data exploration. Let's see if it can help discover trends and patterns in the way Nigerian audiences consume music.
> ✅ Take a minute to think about the uses of clustering. In real life, clustering happens whenever you have a pile of laundry and need to sort out your family members' clothes 🧦👕👖🩲. In data science, clustering happens when trying to analyze a user's preferences, or determine the characteristics of any unlabeled dataset. Clustering, in a way, helps make sense of chaos, like a sock drawer.
In a professional setting, clustering can be used to determine things like market segmentation, determining what age groups buy what items, for example. Another use would be anomaly detection, perhaps to detect fraud from a dataset of credit card transactions. Or you might use clustering to determine tumors in a batch of medical scans.
✅ Think a minute about how you might have encountered clustering 'in the wild', in a banking, e-commerce, or business setting.
> 🎓 Interestingly, cluster analysis originated in the fields of Anthropology and Psychology in the 1930s. Can you imagine how it might have been used?
Alternately, you could use it for grouping search results - by shopping links, images, or reviews, for example. Clustering is useful when you have a large dataset that you want to reduce and on which you want to perform more granular analysis, so the technique can be used to learn about data before other models are constructed.
✅ Once your data is organized in clusters, you assign it a cluster Id, and this technique can be useful when preserving a dataset's privacy; you can instead refer to a data point by its cluster id, rather than by more revealing identifiable data. Can you think of other reasons why you'd refer to a cluster Id rather than other elements of the cluster to identify it?
### Getting started with clustering
> 🎓 How we create clusters has a lot to do with how we gather up the data points into groups. Let's unpack some vocabulary:
>
> 🎓 ['Transductive' vs. 'inductive'](https://wikipedia.org/wiki/Transduction_(machine_learning))
>
> Transductive inference is derived from observed training cases that map to specific test cases. Inductive inference is derived from training cases that map to general rules which are only then applied to test cases.
>
> An example: Imagine you have a dataset that is only partially labelled. Some things are 'records', some 'cds', and some are blank. Your job is to provide labels for the blanks. If you choose an inductive approach, you'd train a model looking for 'records' and 'cds', and apply those labels to your unlabeled data. This approach will have trouble classifying things that are actually 'cassettes'. A transductive approach, on the other hand, handles this unknown data more effectively as it works to group similar items together and then applies a label to a group. In this case, clusters might reflect 'round musical things' and 'square musical things'.
>
> 🎓 ['Non-flat' vs. 'flat' geometry](https://datascience.stackexchange.com/questions/52260/terminology-flat-geometry-in-the-context-of-clustering)
>
> Derived from mathematical terminology, non-flat vs. flat geometry refers to the measure of distances between points by either 'flat' ([Euclidean](https://wikipedia.org/wiki/Euclidean_geometry)) or 'non-flat' (non-Euclidean) geometrical methods.
>
> 'Flat' in this context refers to Euclidean geometry (parts of which are taught as 'plane' geometry), and non-flat refers to non-Euclidean geometry. What does geometry have to do with machine learning? Well, as two fields that are rooted in mathematics, there must be a common way to measure distances between points in clusters, and that can be done in a 'flat' or 'non-flat' way, depending on the nature of the data. [Euclidean distances](https://wikipedia.org/wiki/Euclidean_distance) are measured as the length of a line segment between two points. [Non-Euclidean distances](https://wikipedia.org/wiki/Non-Euclidean_geometry) are measured along a curve. If your data, visualized, seems to not exist on a plane, you might need to use a specialized algorithm to handle it.
<p >
<img src="../../images/flat-nonflat.png"
width="600"/>
<figcaption>Infographic by Dasani Madipalli</figcaption>
> 🎓 ['Distances'](https://web.stanford.edu/class/cs345a/slides/12-clustering.pdf)
>
> Clusters are defined by their distance matrix, e.g. the distances between points. This distance can be measured a few ways. Euclidean clusters are defined by the average of the point values, and contain a 'centroid' or center point. Distances are thus measured by the distance to that centroid. Non-Euclidean distances refer to 'clustroids', the point closest to other points. Clustroids in turn can be defined in various ways.
>
> 🎓 ['Constrained'](https://wikipedia.org/wiki/Constrained_clustering)
>
> [Constrained Clustering](https://web.cs.ucdavis.edu/~davidson/Publications/ICDMTutorial.pdf) introduces 'semi-supervised' learning into this unsupervised method. The relationships between points are flagged as 'cannot link' or 'must-link' so some rules are forced on the dataset.
>
> An example: If an algorithm is set free on a batch of unlabelled or semi-labelled data, the clusters it produces may be of poor quality. In the example above, the clusters might group 'round music things' and 'square music things' and 'triangular things' and 'cookies'. If given some constraints, or rules to follow ("the item must be made of plastic", "the item needs to be able to produce music") this can help 'constrain' the algorithm to make better choices.
>
> 🎓 'Density'
>
> Data that is 'noisy' is considered to be 'dense'. The distances between points in each of its clusters may prove, on examination, to be more or less dense, or 'crowded' and thus this data needs to be analyzed with the appropriate clustering method. [This article](https://www.kdnuggets.com/2020/02/understanding-density-based-clustering.html) demonstrates the difference between using K-Means clustering vs. HDBSCAN algorithms to explore a noisy dataset with uneven cluster density.
Deepen your understanding of clustering techniques in this [Learn module](https://docs.microsoft.com/learn/modules/train-evaluate-cluster-models?WT.mc_id=academic-15963-cxa)
### **Clustering algorithms**
There are over 100 clustering algorithms, and their use depends on the nature of the data at hand. Let's discuss some of the major ones:
- **Hierarchical clustering**. If an object is classified by its proximity to a nearby object, rather than to one farther away, clusters are formed based on their members' distance to and from other objects. Hierarchical clustering is characterized by repeatedly combining two clusters.
<p >
<img src="../../images/hierarchical.png"
width="600"/>
<figcaption>Infographic by Dasani Madipalli</figcaption>
- **Centroid clustering**. This popular algorithm requires the choice of 'k', or the number of clusters to form, after which the algorithm determines the center point of a cluster and gathers data around that point. [K-means clustering](https://wikipedia.org/wiki/K-means_clustering) is a popular version of centroid clustering which separates a data set into pre-defined K groups. The center is determined by the nearest mean, thus the name. The squared distance from the cluster is minimized.
<p >
<img src="../../images/centroid.png"
width="600"/>
<figcaption>Infographic by Dasani Madipalli</figcaption>
- **Distribution-based clustering**. Based in statistical modeling, distribution-based clustering centers on determining the probability that a data point belongs to a cluster, and assigning it accordingly. Gaussian mixture methods belong to this type.
- **Density-based clustering**. Data points are assigned to clusters based on their density, or their grouping around each other. Data points far from the group are considered outliers or noise. DBSCAN, Mean-shift and OPTICS belong to this type of clustering.
- **Grid-based clustering**. For multi-dimensional datasets, a grid is created and the data is divided amongst the grid's cells, thereby creating clusters.
The best way to learn about clustering is to try it for yourself, so that's what you'll do in this exercise.
We'll require some packages to complete this module. You can install them with: `install.packages(c('tidyverse', 'tidymodels', 'DataExplorer', 'summarytools', 'plotly', 'paletteer', 'corrplot', 'patchwork'))`
Alternatively, the script below checks whether you have the packages required to complete this module and installs them for you in case some are missing.
```
suppressWarnings(if(!require("pacman")) install.packages("pacman"))
pacman::p_load('tidyverse', 'tidymodels', 'DataExplorer', 'summarytools', 'plotly', 'paletteer', 'corrplot', 'patchwork')
```
## Exercise - cluster your data
Clustering as a technique is greatly aided by proper visualization, so let's get started by visualizing our music data. This exercise will help us decide which of the methods of clustering we should most effectively use for the nature of this data.
Let's hit the ground running by importing the data.
```
# Load the core tidyverse and make it available in your current R session
library(tidyverse)
# Import the data into a tibble
df <- read_csv(file = "https://raw.githubusercontent.com/microsoft/ML-For-Beginners/main/5-Clustering/data/nigerian-songs.csv")
# View the first 5 rows of the data set
df %>%
slice_head(n = 5)
```
Sometimes, we may want a little more information on our data. We can have a look at the `data` and `its structure` by using the [*glimpse()*](https://pillar.r-lib.org/reference/glimpse.html) function:
```
# Glimpse into the data set
df %>%
glimpse()
```
Good job!💪
We can observe that `glimpse()` gives you the total number of rows (observations) and columns (variables), followed by the first few entries of each variable in a row after the variable name. In addition, the *data type* of each variable is given immediately after its name inside `< >`.
`DataExplorer::introduce()` can summarize this information neatly:
```
# Describe basic information for our data
df %>%
introduce()
# A visual display of the same
df %>%
plot_intro()
```
Awesome! We have just learnt that our data has no missing values.
While we are at it, we can explore common central tendency statistics (e.g [mean](https://en.wikipedia.org/wiki/Arithmetic_mean) and [median](https://en.wikipedia.org/wiki/Median)) and measures of dispersion (e.g [standard deviation](https://en.wikipedia.org/wiki/Standard_deviation)) using `summarytools::descr()`
```
# Describe common statistics
df %>%
descr(stats = "common")
```
Let's look at the general values of the data. Note that popularity can be `0`, which indicates songs that have no ranking. We'll remove those shortly.
> 🤔 If we are working with clustering, an unsupervised method that does not require labeled data, why are we showing this data with labels? In the data exploration phase, they come in handy, but they are not necessary for the clustering algorithms to work.
### 1. Explore popular genres
Let's go ahead and find out the most popular genres 🎶 by making a count of the instances it appears.
```
# Popular genres
top_genres <- df %>%
count(artist_top_genre, sort = TRUE) %>%
# Encode to categorical and reorder the according to count
mutate(artist_top_genre = factor(artist_top_genre) %>% fct_inorder())
# Print the top genres
top_genres
```
That went well! They say a picture is worth a thousand rows of a data frame (actually nobody ever says that 😅). But you get the gist of it, right?
One way to visualize categorical data (character or factor variables) is using barplots. Let's make a barplot of the top 10 genres:
```
# Change the default gray theme
theme_set(theme_light())
# Visualize popular genres
top_genres %>%
slice(1:10) %>%
ggplot(mapping = aes(x = artist_top_genre, y = n,
fill = artist_top_genre)) +
geom_col(alpha = 0.8) +
paletteer::scale_fill_paletteer_d("rcartocolor::Vivid") +
ggtitle("Top genres") +
theme(plot.title = element_text(hjust = 0.5),
# Rotates the X markers (so we can read them)
axis.text.x = element_text(angle = 90))
```
Now it's way easier to identify that we have `missing` genres 🧐!
> A good visualisation will show you things that you did not expect, or raise new questions about the data - Hadley Wickham and Garrett Grolemund, [R For Data Science](https://r4ds.had.co.nz/introduction.html)
Note, when the top genre is described as `Missing`, that means that Spotify did not classify it, so let's get rid of it.
```
# Visualize popular genres
top_genres %>%
filter(artist_top_genre != "Missing") %>%
slice(1:10) %>%
ggplot(mapping = aes(x = artist_top_genre, y = n,
fill = artist_top_genre)) +
geom_col(alpha = 0.8) +
paletteer::scale_fill_paletteer_d("rcartocolor::Vivid") +
ggtitle("Top genres") +
theme(plot.title = element_text(hjust = 0.5),
# Rotates the X markers (so we can read them)
axis.text.x = element_text(angle = 90))
```
From the little data exploration, we learn that the top three genres dominate this dataset. Let's concentrate on `afro dancehall`, `afropop`, and `nigerian pop`, additionally filter the dataset to remove anything with a 0 popularity value (meaning it was not classified with a popularity in the dataset and can be considered noise for our purposes):
```
nigerian_songs <- df %>%
# Concentrate on top 3 genres
filter(artist_top_genre %in% c("afro dancehall", "afropop","nigerian pop")) %>%
# Remove unclassified observations
filter(popularity != 0)
# Visualize popular genres
nigerian_songs %>%
count(artist_top_genre) %>%
ggplot(mapping = aes(x = artist_top_genre, y = n,
fill = artist_top_genre)) +
geom_col(alpha = 0.8) +
paletteer::scale_fill_paletteer_d("ggsci::category10_d3") +
ggtitle("Top genres") +
theme(plot.title = element_text(hjust = 0.5))
```
Let's see whether there is any apparent linear relationship among the numerical variables in our data set. This relationship is quantified mathematically by the [correlation statistic](https://en.wikipedia.org/wiki/Correlation).
The correlation statistic is a value between -1 and 1 that indicates the strength of a relationship. Values above 0 indicate a *positive* correlation (high values of one variable tend to coincide with high values of the other), while values below 0 indicate a *negative* correlation (high values of one variable tend to coincide with low values of the other).
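For two numeric variables $x$ and $y$ with $n$ observations, the (Pearson) correlation computed by R's `cor()` by default is

$$r = \frac{\sum_{i=1}^{n}(x_i-\bar{x})(y_i-\bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i-\bar{x})^2}\,\sqrt{\sum_{i=1}^{n}(y_i-\bar{y})^2}}$$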
```
# Narrow down to numeric variables and find correlation
corr_mat <- nigerian_songs %>%
select(where(is.numeric)) %>%
cor()
# Visualize correlation matrix
corrplot(corr_mat, order = 'AOE', col = c('white', 'black'), bg = 'gold2')
```
The data is not strongly correlated except between `energy` and `loudness`, which makes sense, given that loud music is usually pretty energetic. `Popularity` has a correspondence to `release date`, which also makes sense, as more recent songs are probably more popular. Length and energy seem to have a correlation too.
It will be interesting to see what a clustering algorithm can make of this data!
> 🎓 Note that correlation does not imply causation! We have proof of correlation but no proof of causation. An [amusing web site](https://tylervigen.com/spurious-correlations) has some visuals that emphasize this point.
### 2. Explore data distribution
Let's ask some more subtle questions. Are the genres significantly different in the perception of their danceability, based on their popularity? Let's examine our top three genres data distribution for popularity and danceability along a given x and y axis using [density plots](https://www.khanacademy.org/math/ap-statistics/density-curves-normal-distribution-ap/density-curves/v/density-curves).
```
# Perform 2D kernel density estimation
density_estimate_2d <- nigerian_songs %>%
ggplot(mapping = aes(x = popularity, y = danceability, color = artist_top_genre)) +
geom_density_2d(bins = 5, size = 1) +
paletteer::scale_color_paletteer_d("RSkittleBrewer::wildberry") +
xlim(-20, 80) +
ylim(0, 1.2)
# Density plot based on the popularity
density_estimate_pop <- nigerian_songs %>%
ggplot(mapping = aes(x = popularity, fill = artist_top_genre, color = artist_top_genre)) +
geom_density(size = 1, alpha = 0.5) +
paletteer::scale_fill_paletteer_d("RSkittleBrewer::wildberry") +
paletteer::scale_color_paletteer_d("RSkittleBrewer::wildberry") +
theme(legend.position = "none")
# Density plot based on the danceability
density_estimate_dance <- nigerian_songs %>%
ggplot(mapping = aes(x = danceability, fill = artist_top_genre, color = artist_top_genre)) +
geom_density(size = 1, alpha = 0.5) +
paletteer::scale_fill_paletteer_d("RSkittleBrewer::wildberry") +
paletteer::scale_color_paletteer_d("RSkittleBrewer::wildberry")
# Patch everything together
library(patchwork)
density_estimate_2d / (density_estimate_pop + density_estimate_dance)
```
We see that there are concentric circles that line up, regardless of genre. Could it be that Nigerian tastes converge at a certain level of danceability for this genre?
In general, the three genres align in terms of their popularity and danceability. Determining clusters in this loosely-aligned data will be a challenge. Let's see whether a scatter plot can support this.
```
# A scatter plot of popularity and danceability
scatter_plot <- nigerian_songs %>%
ggplot(mapping = aes(x = popularity, y = danceability, color = artist_top_genre, shape = artist_top_genre)) +
geom_point(size = 2, alpha = 0.8) +
paletteer::scale_color_paletteer_d("futurevisions::mars")
# Add a touch of interactivity
ggplotly(scatter_plot)
```
A scatterplot of the same axes shows a similar pattern of convergence.
In general, for clustering, you can use scatterplots to show clusters of data, so mastering this type of visualization is very useful. In the next lesson, we will take this filtered data and use k-means clustering to discover groups in this data that seem to overlap in interesting ways.
## **🚀 Challenge**
In preparation for the next lesson, make a chart about the various clustering algorithms you might discover and use in a production environment. What kinds of problems is the clustering trying to address?
## [**Post-lecture quiz**](https://white-water-09ec41f0f.azurestaticapps.net/quiz/28/)
## **Review & Self Study**
Before you apply clustering algorithms, as we have learned, it's a good idea to understand the nature of your dataset. Read more on this topic [here](https://www.kdnuggets.com/2019/10/right-clustering-algorithm.html)
Deepen your understanding of clustering techniques:
- [Train and Evaluate Clustering Models using Tidymodels and friends](https://rpubs.com/eR_ic/clustering)
- Bradley Boehmke & Brandon Greenwell, [*Hands-On Machine Learning with R*](https://bradleyboehmke.github.io/HOML/)*.*
## **Assignment**
[Research other visualizations for clustering](https://github.com/microsoft/ML-For-Beginners/blob/main/5-Clustering/1-Visualize/assignment.md)
## THANK YOU TO:
[Jen Looper](https://www.twitter.com/jenlooper) for creating the original Python version of this module ♥️
[`Dasani Madipalli`](https://twitter.com/dasani_decoded) for creating the amazing illustrations that make machine learning concepts more interpretable and easier to understand.
Happy Learning,
[Eric](https://twitter.com/ericntay), Gold Microsoft Learn Student Ambassador.
# B - A Closer Look at Word Embeddings
We have very briefly covered how word embeddings (also known as word vectors) are used in the tutorials. In this appendix we'll have a closer look at these embeddings and find some (hopefully) interesting results.
Embeddings transform a one-hot encoded vector (a vector that is 0 in every element except one, which is 1) into a much smaller dimension vector of real numbers. The one-hot encoded vector is also known as a *sparse vector*, whilst the real valued vector is known as a *dense vector*.
The key concept in these word embeddings is that words that appear in similar _contexts_ appear nearby in the vector space, i.e. the Euclidean distance between these two word vectors is small. By context here, we mean the surrounding words. For example in the sentences "I purchased some items at the shop" and "I purchased some items at the store" the words 'shop' and 'store' appear in the same context and thus should be close together in vector space.
You may have also heard about *word2vec*. *word2vec* is an algorithm (actually a bunch of algorithms) that calculates word vectors from a corpus. In this appendix we use *GloVe* vectors, *GloVe* being another algorithm to calculate word vectors. If you want to know how *word2vec* works, check out a two part series [here](http://mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model/) and [here](http://mccormickml.com/2017/01/11/word2vec-tutorial-part-2-negative-sampling/), and if you want to find out more about *GloVe*, check the website [here](https://nlp.stanford.edu/projects/glove/).
In PyTorch, we use word vectors with the `nn.Embedding` layer, which takes a _**[sentence length, batch size]**_ tensor and transforms it into a _**[sentence length, batch size, embedding dimensions]**_ tensor.
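As a quick illustration of those shapes, here is a minimal sketch (with made-up sizes, not taken from the tutorials) of an `nn.Embedding` layer in action:
```
import torch
import torch.nn as nn

# hypothetical sizes: a vocabulary of 100 words and 20-dimensional embeddings
embedding = nn.Embedding(num_embeddings = 100, embedding_dim = 20)

# a fake batch of token indices with shape [sentence length, batch size]
tokens = torch.randint(0, 100, (5, 3))

embedded = embedding(tokens)
embedded.shape # torch.Size([5, 3, 20])
```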
In tutorial 2 onwards, we also used pre-trained word embeddings (specifically the GloVe vectors) provided by TorchText. These embeddings have been trained on a gigantic corpus. We can use these pre-trained vectors within any of our models, with the idea that as they have already learned the context of each word they will give us a better starting point for our word vectors. This usually leads to faster training time and/or improved accuracy.
In this appendix we won't be training any models, instead we'll be looking at the word embeddings and finding a few interesting things about them.
A lot of the code from the first half of this appendix is taken from [here](https://github.com/spro/practical-pytorch/blob/master/glove-word-vectors/glove-word-vectors.ipynb). For more information about word embeddings, go [here](https://monkeylearn.com/blog/word-embeddings-transform-text-numbers/).
## Loading the GloVe vectors
First, we'll load the GloVe vectors. The `name` field specifies what the vectors have been trained on, here the `6B` means a corpus of 6 billion words. The `dim` argument specifies the dimensionality of the word vectors. GloVe vectors are available in 50, 100, 200 and 300 dimensions. There are also `42B` and `840B` GloVe vectors, however they are only available in 300 dimensions.
```
import torchtext.vocab
glove = torchtext.vocab.GloVe(name = '6B', dim = 100)
print(f'There are {len(glove.itos)} words in the vocabulary')
```
As shown above, there are 400,000 unique words in the GloVe vocabulary. These are the most common words found in the corpus the vectors were trained on. **In this set of GloVe vectors, every single word is lower-case only.**
`glove.vectors` is the actual tensor containing the values of the embeddings.
```
glove.vectors.shape
```
We can see what word is associated with each row by checking the `itos` (int to string) list.
Below implies that row 0 is the vector associated with the word 'the', row 1 for ',' (comma), row 2 for '.' (period), etc.
```
glove.itos[:10]
```
We can also use the `stoi` (string to int) dictionary, in which we input a word and receive the associated integer/index. If you try to get the index of a word that is not in the vocabulary, you receive an error.
```
glove.stoi['the']
```
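Since every word in this vocabulary is lower-case, a quick throwaway check confirms that only the lower-case form is present:
```
# 'the' is in the vocabulary, 'The' is not
'the' in glove.stoi, 'The' in glove.stoi
```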
We can get the vector of a word by first getting the integer associated with it and then indexing into the word embedding tensor with that index.
```
glove.vectors[glove.stoi['the']].shape
```
We'll be doing this a lot, so we'll create a function that takes in word embeddings and a word then returns the associated vector. It'll also throw an error if the word doesn't exist in the vocabulary.
```
def get_vector(embeddings, word):
assert word in embeddings.stoi, f'*{word}* is not in the vocab!'
return embeddings.vectors[embeddings.stoi[word]]
```
As before, we use a word to get the associated vector.
```
get_vector(glove, 'the').shape
```
## Similar Contexts
Now to start looking at the context of different words.
If we want to find the words similar to a certain input word, we first find the vector of this input word, then we scan through our vocabulary calculating the distance between the vector of each word and our input word vector. We then sort these from closest to furthest away.
The function below returns the closest 10 words to an input word vector:
```
import torch
def closest_words(embeddings, vector, n = 10):
distances = [(word, torch.dist(vector, get_vector(embeddings, word)).item())
for word in embeddings.itos]
return sorted(distances, key = lambda w: w[1])[:n]
```
Let's try it out with 'korea'. The closest word is the word 'korea' itself (not very interesting), however all of the words are related in some way. Pyongyang is the capital of North Korea, DPRK is the official name of North Korea, etc.
Interestingly, we also get 'Japan' and 'China', which implies that Korea, Japan and China are frequently talked about together in similar contexts. This makes sense as they are geographically situated near each other.
```
word_vector = get_vector(glove, 'korea')
closest_words(glove, word_vector)
```
Looking at another country, India, we also get nearby countries: Thailand, Malaysia and Sri Lanka (as two separate words). Australia is relatively close to India (geographically), but Thailand and Malaysia are closer. So why is Australia closer to India in vector space? This is most probably due to India and Australia appearing in the context of [cricket](https://en.wikipedia.org/wiki/Cricket) matches together.
```
word_vector = get_vector(glove, 'india')
closest_words(glove, word_vector)
```
We'll also create another function that will nicely print out the tuples returned by our `closest_words` function.
```
def print_tuples(tuples):
for w, d in tuples:
print(f'({d:02.04f}) {w}')
```
A final word to look at, 'sports'. As we can see, the closest words are most of the sports themselves.
```
word_vector = get_vector(glove, 'sports')
print_tuples(closest_words(glove, word_vector))
```
## Analogies
Another property of word embeddings is that they can be operated on just as any standard vector and give interesting results.
We'll show an example of this first, and then explain it:
```
def analogy(embeddings, word1, word2, word3, n=5):
#get vectors for each word
word1_vector = get_vector(embeddings, word1)
word2_vector = get_vector(embeddings, word2)
word3_vector = get_vector(embeddings, word3)
#calculate analogy vector
analogy_vector = word2_vector - word1_vector + word3_vector
#find closest words to analogy vector
candidate_words = closest_words(embeddings, analogy_vector, n+3)
#filter out words already in analogy
candidate_words = [(word, dist) for (word, dist) in candidate_words
if word not in [word1, word2, word3]][:n]
print(f'{word1} is to {word2} as {word3} is to...')
return candidate_words
print_tuples(analogy(glove, 'man', 'king', 'woman'))
```
This is the canonical example which shows off this property of word embeddings. So why does it work? Why does the vector of 'woman' added to the vector of 'king' minus the vector of 'man' give us 'queen'?
If we think about it, the vector calculated from 'king' minus 'man' gives us a "royalty vector". This is the vector associated with traveling from a man to his royal counterpart, a king. If we add this "royalty vector" to 'woman', this should travel to her royal equivalent, which is a queen!
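To make the arithmetic explicit, here is a small sketch that builds this "royalty vector" directly with the helper functions defined above and adds it to 'woman':
```
# king - man gives a "royalty vector"; adding it to woman should land near queen
royalty_vector = get_vector(glove, 'king') - get_vector(glove, 'man')
print_tuples(closest_words(glove, get_vector(glove, 'woman') + royalty_vector))
```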
We can do this with other analogies too. For example, this gets an "acting career vector":
```
print_tuples(analogy(glove, 'man', 'actor', 'woman'))
```
For a "baby animal vector":
```
print_tuples(analogy(glove, 'cat', 'kitten', 'dog'))
```
A "capital city vector":
```
print_tuples(analogy(glove, 'france', 'paris', 'england'))
```
A "musician's genre vector":
```
print_tuples(analogy(glove, 'elvis', 'rock', 'eminem'))
```
And an "ingredient vector":
```
print_tuples(analogy(glove, 'beer', 'barley', 'wine'))
```
## Correcting Spelling Mistakes
Another interesting property of word embeddings is that they can actually be used to correct spelling mistakes!
We'll put the findings from that discussion into code and briefly explain them, but to read more about this, check out the [original thread](http://forums.fast.ai/t/nlp-any-libraries-dictionaries-out-there-for-fixing-common-spelling-errors/16411) and the associated [write-up](https://blog.usejournal.com/a-simple-spell-checker-built-from-word-vectors-9f28452b6f26).
First, we need to load up the much larger vocabulary GloVe vectors, this is due to the spelling mistakes not appearing in the smaller vocabulary.
**Note**: these vectors are very large (~2GB), so watch out if you have a limited internet connection.
```
glove = torchtext.vocab.GloVe(name = '840B', dim = 300)
```
Checking the vocabulary size of these embeddings, we can see we now have over 2 million unique words in our vocabulary!
```
glove.vectors.shape
```
As the vectors were trained with a much larger vocabulary on a larger corpus of text, the words that appear are a little different. Notice how the words 'north', 'south', 'pyongyang' and 'dprk' no longer appear in the closest words to 'korea'.
```
word_vector = get_vector(glove, 'korea')
print_tuples(closest_words(glove, word_vector))
```
Our first step to correcting spelling mistakes is looking at the vector for a misspelling of the word 'reliable'.
```
word_vector = get_vector(glove, 'relieable')
print_tuples(closest_words(glove, word_vector))
```
Notice how the correct spelling, "reliable", does not appear in the top 10 closest words. Surely the misspellings of a word should appear next to the correct spelling of the word as they appear in the same context, right?
The hypothesis is that misspellings of words are all equally shifted away from their correct spelling. This is because articles of text that contain spelling mistakes are usually written in an informal manner where correct spelling doesn't matter as much (such as tweets/blog posts), thus spelling errors will appear together as they appear in context of informal articles.
Similar to how we created analogies before, we can create a "correct spelling" vector. This time, instead of using a single example to create our vector, we'll use the average of multiple examples. This will hopefully give better accuracy!
We first create a vector for the correct spelling, 'reliable', then calculate the difference between the "reliable vector" and each of the 8 misspellings of 'reliable'. As we are going to concatenate these 8 misspelling tensors together we need to unsqueeze a "batch" dimension to them.
```
reliable_vector = get_vector(glove, 'reliable')
reliable_misspellings = ['relieable', 'relyable', 'realible', 'realiable',
'relable', 'relaible', 'reliabe', 'relaiable']
diff_reliable = [(reliable_vector - get_vector(glove, s)).unsqueeze(0)
for s in reliable_misspellings]
```
We take the average of these 8 'difference from reliable' vectors to get our "misspelling vector".
```
misspelling_vector = torch.cat(diff_reliable, dim = 0).mean(dim = 0)
```
We can now correct other spelling mistakes using this "misspelling vector" by finding the closest words to the sum of the vector of a misspelled word and the "misspelling vector".
For a misspelling of "because":
```
word_vector = get_vector(glove, 'becuase')
print_tuples(closest_words(glove, word_vector + misspelling_vector))
```
For a misspelling of "definitely":
```
word_vector = get_vector(glove, 'defintiely')
print_tuples(closest_words(glove, word_vector + misspelling_vector))
```
For a misspelling of "consistent":
```
word_vector = get_vector(glove, 'consistant')
print_tuples(closest_words(glove, word_vector + misspelling_vector))
```
For a misspelling of "package":
```
word_vector = get_vector(glove, 'pakage')
print_tuples(closest_words(glove, word_vector + misspelling_vector))
```
For a more in-depth look at this, check out the [write-up](https://blog.usejournal.com/a-simple-spell-checker-built-from-word-vectors-9f28452b6f26).
```
import lifelines
import pymc as pm
from pyBMA.CoxPHFitter import CoxPHFitter
import matplotlib.pyplot as plt
import numpy as np
from numpy import log
from datetime import datetime
import pandas as pd
%matplotlib inline
```
The first step in any data analysis is acquiring and munging the data.
Our starting data set can be found here:
http://jakecoltman.com in the pyData post
It is designed to be roughly similar to the output from DCM's path to conversion
Download the file and transform it into something with the columns:
id,lifetime,age,male,event,search,brand
where lifetime is the total time for which we observed someone without a conversion, and event should be 1 if we see a conversion and 0 if we don't. Note that all values should be converted into ints
It is useful to note that end_date = datetime.datetime(2016, 5, 3, 20, 36, 8, 92165)
```
running_id = 0
output = [[0]]
with open("E:/output.txt") as file_open:
for row in file_open.read().split("\n"):
cols = row.split(",")
if cols[0] == output[-1][0]:
output[-1].append(cols[1])
output[-1].append(True)
else:
output.append(cols)
output = output[1:]
for row in output:
if len(row) == 6:
row += [datetime(2016, 5, 3, 20, 36, 8, 92165), False]
output = output[1:-1]
def convert_to_days(dt):
day_diff = dt / np.timedelta64(1, 'D')
if day_diff == 0:
return 23.0
else:
return day_diff
df = pd.DataFrame(output, columns=["id", "advert_time", "male","age","search","brand","conversion_time","event"])
df["lifetime"] = pd.to_datetime(df["conversion_time"]) - pd.to_datetime(df["advert_time"])
df["lifetime"] = df["lifetime"].apply(convert_to_days)
df["male"] = df["male"].astype(int)
df["search"] = df["search"].astype(int)
df["brand"] = df["brand"].astype(int)
df["age"] = df["age"].astype(int)
df["event"] = df["event"].astype(int)
df = df.drop('advert_time', 1)
df = df.drop('conversion_time', 1)
df = df.set_index("id")
df = df.dropna(thresh=2)
df.median()
###Parametric Bayes
#Shout out to Cam Davidson-Pilon
## Example fully worked model using toy data
## Adapted from http://blog.yhat.com/posts/estimating-user-lifetimes-with-pymc.html
## Note that we've made some corrections
N = 2500
##Generate some random data
lifetime = pm.rweibull( 2, 5, size = N )
birth = pm.runiform(0, 10, N)
censor = ((birth + lifetime) >= 10)
lifetime_ = lifetime.copy()
lifetime_[censor] = 10 - birth[censor]
alpha = pm.Uniform('alpha', 0, 20)
beta = pm.Uniform('beta', 0, 20)
@pm.observed
def survival(value=lifetime_, alpha = alpha, beta = beta ):
return sum( (1-censor)*(log( alpha/beta) + (alpha-1)*log(value/beta)) - (value/beta)**(alpha))
mcmc = pm.MCMC([alpha, beta, survival ] )
mcmc.sample(50000, 30000)
pm.Matplot.plot(mcmc)
mcmc.trace("alpha")[:]
```
Problems:
1 - Try to fit your data from section 1
2 - Use the results to plot the distribution of the median
Note that the median of a Weibull distribution is:
$$\beta(\log 2)^{1/\alpha}$$
```
censor = np.array(df["event"].apply(lambda x: 0 if x else 1).tolist())
alpha = pm.Uniform("alpha", 0,50)
beta = pm.Uniform("beta", 0,50)
@pm.observed
def survival(value=df["lifetime"], alpha = alpha, beta = beta ):
return sum( (1-censor)*(np.log( alpha/beta) + (alpha-1)*np.log(value/beta)) - (value/beta)**(alpha))
mcmc = pm.MCMC([alpha, beta, survival ] )
mcmc.sample(10000)
def weibull_median(alpha, beta):
return beta * ((log(2)) ** ( 1 / alpha))
plt.hist([weibull_median(x[0], x[1]) for x in zip(mcmc.trace("alpha"), mcmc.trace("beta"))])
```
Problems:
4 - Try adjusting the number of samples used for burn-in and thinning
5 - Try adjusting the prior and see how it affects the estimate
```
#### Adjust burn and thin, both parameters of the mcmc sample function
#### Narrow and broaden prior
```
Problems:
7 - Try testing whether the median is greater than a different value
```
#### Hypothesis testing
```
If we want to look at covariates, we need a new approach.
We'll use Cox proportional hazards, a very popular regression model.
To fit in python we use the module lifelines:
http://lifelines.readthedocs.io/en/latest/
```
### Fit a Cox proportional hazards model
```
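As a starting point, here is a minimal sketch of fitting such a model with lifelines (one possible approach, using the column names from the dataframe built above; note the alias to avoid clashing with the pyBMA import):
```
# a minimal sketch: fit a Cox proportional hazards model with lifelines
from lifelines import CoxPHFitter as LLCoxPHFitter

cph = LLCoxPHFitter()
cph.fit(df, duration_col = 'lifetime', event_col = 'event')
cph.print_summary()
```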
Once we've fit the data, we need to do something useful with it. Try to do the following things:
1 - Plot the baseline survival function
2 - Predict the functions for a particular set of features
3 - Plot the survival function for two different set of features
4 - For your results in part 3, calculate how much more likely a death event is for one than the other for a given period of time
```
#### Plot baseline hazard function
#### Predict
#### Plot survival functions for different covariates
#### Plot some odds
```
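For reference, one possible sketch of those steps, assuming the fitted `cph` model from the lifelines example above (the covariate values below are hypothetical):
```
# 1 - baseline survival function
cph.baseline_survival_.plot(title = 'Baseline survival')

# 2/3 - predicted survival curves for two hypothetical sets of features
covariates = pd.DataFrame({'age': [25, 60], 'male': [1, 0],
                           'search': [1, 0], 'brand': [0, 1]})
cph.predict_survival_function(covariates).plot()
```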
Model selection
This is difficult to do with classic tools, so here we use Bayesian model averaging (BMA) via the pyBMA package imported above.
Problem:
1 - Calculate the BMA coefficient values
2 - Try running with different priors
```
#### BMA Coefficient values
#### Different priors
```
# Probability Distributions
# Some typical stuff we'll likely use
```
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%config InlineBackend.figure_format = 'retina'
```
# [SciPy](https://scipy.org)
### [scipy.stats](https://docs.scipy.org/doc/scipy-0.14.0/reference/stats.html)
```
import scipy as sp
import scipy.stats as st
```
# Binomial Distribution
### <font color=darkred> **Example**: A couple, who are both carriers for a recessive disease, wish to have 5 children. They want to know the probability that they will have four healthy kids.</font>
In this case the random variable is the number of healthy kids.
```
# number of trials (kids)
n = 5
# probability of success on each trial
# i.e. probability that each child will be healthy = 1 - 0.5 * 0.5 = 0.75
p = 0.75
# a binomial distribution object
dist = st.binom(n, p)
# probability of four healthy kids
dist.pmf(4)
print(f"The probability of having four healthy kids is {dist.pmf(4):.3f}")
```
### <font color=darkred>Probability to have each of 0-5 healthy kids.</font>
```
# all possible # of successes out of n trials
# i.e. all possible outcomes of the random variable
# i.e. all possible number of healthy kids = 0-5
numHealthyKids = np.arange(n+1)
numHealthyKids
# probability of obtaining each possible number of successes
# i.e. probability of having each possible number of healthy children
pmf = dist.pmf(numHealthyKids)
pmf
```
### <font color=darkred>Visualize the probability to have each of 0-5 healthy kids.</font>
```
plt.bar(numHealthyKids, pmf)
plt.xlabel('# healthy children', fontsize=18)
plt.ylabel('probability', fontsize=18);
```
### <font color=darkred>Probability to have at least 4 healthy kids.</font>
```
# sum of probabilities of 4 and 5 healthy kids
pmf[-2:].sum()
# remaining probability after subtracting CDF for 3 kids
1 - dist.cdf(3)
# survival function for 3 kids
dist.sf(3)
```
### <font color=darkred>What is the expected number of healthy kids?</font>
```
print(f"The expected number of healthy kids is {dist.mean()}")
```
### <font color=darkred>How sure are we about the above estimate?</font>
```
print(f"The expected number of healthy kids is {dist.mean()} ± {dist.std():.2f}")
```
# <font color=red> Exercise</font>
Should the couple consider having six children?
1. Plot the *pmf* for the probability of each possible number of healthy children.
2. What's the probability that they will all be healthy?
# Poisson Distribution
### <font color=darkred> **Example**: Assume that the rate of deleterious mutations is ~1.2 per diploid genome. What is the probability that an individual has 8 or more spontaneous deleterious mutations?</font>
In this case the random variable is the number of deleterious mutations within an individuals genome.
```
# the rate of deleterious mutations is 1.2 per diploid genome
rate = 1.2
# poisson distribution describing the predicted number of spontaneous mutations
dist = st.poisson(rate)
# let's look at the probability for 0-10 mutations
numMutations = np.arange(11)
plt.bar(numMutations, dist.pmf(numMutations))
plt.xlabel('# mutations', fontsize=18)
plt.ylabel('probability', fontsize=18);
print(f"Probability of less than 8 mutations = {dist.cdf(7)}")
print(f"Probability of 8 or more mutations = {dist.sf(7)}")
dist.cdf(7) + dist.sf(7)
```
# <font color=red> Exercise</font>
For the above example, what is the probability that an individual has three or fewer mutations?
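One quick way to check your answer, reusing the Poisson `dist` object defined above:
```
# P(X <= 3) deleterious mutations
dist.cdf(3)
```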
# Exponential Distribution
### <font color=darkred> **Example**: Assume that a neuron spikes 1.5 times per second on average. Plot the probability density function of interspike intervals from zero to five seconds with a resolution of 0.01 seconds.</font>
In this case the random variable is the interspike interval time.
```
# spike rate per second
rate = 1.5
# exponential distribution describing the neuron's predicted interspike intervals
dist = st.expon(loc=0, scale=1/rate)
# plot interspike intervals from 0-5 seconds at 0.01 sec resolution
intervalsSec = np.linspace(0, 5, 501)
# probability density for each interval
pdf = dist.pdf(intervalsSec)
plt.plot(intervalsSec, pdf)
plt.xlabel('interspike interval (sec)', fontsize=18)
plt.ylabel('pdf', fontsize=18);
```
### <font color=darkred>What is the average interval?</font>
```
print(f"Average interspike interval = {dist.mean():.2f} seconds.")
```
### <font color=darkred>time constant = 1 / rate = mean</font>
```
tau = 1 / rate
tau
```
### <font color=darkred> What is the probability that an interval will be between 1 and 2 seconds?</font>
```
prob1to2 = dist.cdf(2) - dist.cdf(1);
print(f"Probability of an interspike interval being between 1 and 2 seconds is {prob1to2:.2f}")
```
### <font color=darkred> For what time *T* is the probability that an interval is shorter than *T* equal to 25%?</font>
```
timeAtFirst25PercentOfDist = dist.ppf(0.25) # percent point function
print(f"There is a 25% chance that an interval is shorter than {timeAtFirst25PercentOfDist:.2f} seconds.")
```
# <font color=red> Exercise</font>
For the above example, what is the probability that 3 seconds will pass without any spikes?
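A one-line sketch to check your answer, reusing the exponential `dist` object defined above (no spikes for 3 seconds means the interval exceeds 3 seconds):
```
# P(interval > 3 s)
dist.sf(3)
```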
# Normal Distribution
### <font color=darkred> **Example**: Under basal conditions the resting membrane voltage of a neuron fluctuates around -70 mV with a variance of 10 mV.</font>
In this case the random variable is the neuron's resting membrane voltage.
```
# mean resting membrane voltage (mV)
mu = -70
# standard deviation about the mean
sd = np.sqrt(10)
# normal distribution describing the neuron's predicted resting membrane voltage
dist = st.norm(mu, sd)
# membrane voltages from -85 to -55 mV
mV = np.linspace(-85, -55, 301)
# probability density for each membrane voltage in mV
pdf = dist.pdf(mV)
plt.plot(mV, pdf)
plt.xlabel('membrane voltage (mV)', fontsize=18)
plt.ylabel('pdf', fontsize=18);
```
### <font color=darkred> What range of membrane voltages (centered on the mean) account for 95% of the probability.</font>
```
low = dist.ppf(0.025) # first 2.5% of distribution
high = dist.ppf(0.975) # first 97.5% of distribution
print(f"95% of membrane voltages are expected to fall within {low :.1f} and {high :.1f} mV.")
```
# <font color=red> Exercise</font>
In a resting neuron, what's the probability that you would measure a membrane voltage greater than -65 mV?
If you measure -65 mV, is the neuron at rest?
# <font color=red> Exercise</font>
What probability distribution might best describe the number of synapses per millimeter of dendrite?
A) Binomial
B) Poisson
C) Exponential
D) Normal
# <font color=red> Exercise</font>
What probability distribution might best describe the time a protein spends in its active conformation?
A) Binomial
B) Poisson
C) Exponential
D) Normal
# <font color=red> Exercise</font>
What probability distribution might best describe the weights of adult mice in a colony?
A) Binomial
B) Poisson
C) Exponential
D) Normal
# <font color=red> Exercise</font>
What probability distribution might best describe the number of times a subject is able to identify the correct target in a series of trials?
A) Binomial
B) Poisson
C) Exponential
D) Normal
# [Module 2.1] Training on a SageMaker Cluster (Run without VPC)
This notebook performs the following tasks:
- Run training on the SageMaker Hosting Cluster
- Save the name of the training job
- The saved job name is used in the next notebook for model deployment and inference.
---
Get the SageMaker session and retrieve the role information.
- These two pieces of information are used to connect to the SageMaker Hosting Cluster.
```
import os
import sagemaker
from sagemaker import get_execution_role
sagemaker_session = sagemaker.Session()
role = get_execution_role()
```
## Uploading Local Data to S3
Upload the local data to S3 so it can be used as input during training.
```
# dataset_location = sagemaker_session.upload_data(path='data', key_prefix='data/DEMO-cifar10')
# display(dataset_location)
dataset_location = 's3://sagemaker-ap-northeast-2-057716757052/data/DEMO-cifar10'
dataset_location
# efs_dir = '/home/ec2-user/efs/data'
# ! ls {efs_dir} -al
# ! aws s3 cp {dataset_location} {efs_dir} --recursive
from sagemaker.inputs import FileSystemInput
# Specify EFS ile system id.
file_system_id = 'fs-38dc1558' # 'fs-xxxxxxxx'
print(f"EFS file-system-id: {file_system_id}")
# Specify directory path for input data on the file system.
# You need to provide normalized and absolute path below.
train_file_system_directory_path = '/data/train'
eval_file_system_directory_path = '/data/eval'
validation_file_system_directory_path = '/data/validation'
print(f'EFS file-system data input path: {train_file_system_directory_path}')
print(f'EFS file-system data input path: {eval_file_system_directory_path}')
print(f'EFS file-system data input path: {validation_file_system_directory_path}')
# Specify the access mode of the mount of the directory associated with the file system.
# Directory must be mounted 'ro'(read-only).
file_system_access_mode = 'ro'
# Specify your file system type
file_system_type = 'EFS'
train = FileSystemInput(file_system_id=file_system_id,
file_system_type=file_system_type,
directory_path=train_file_system_directory_path,
file_system_access_mode=file_system_access_mode)
eval = FileSystemInput(file_system_id=file_system_id,
file_system_type=file_system_type,
directory_path=eval_file_system_directory_path,
file_system_access_mode=file_system_access_mode)
validation = FileSystemInput(file_system_id=file_system_id,
file_system_type=file_system_type,
directory_path=validation_file_system_directory_path,
file_system_access_mode=file_system_access_mode)
aws_region = 'ap-northeast-2'# aws-region-code e.g. us-east-1
s3_bucket = 'sagemaker-ap-northeast-2-057716757052'# your-s3-bucket-name
prefix = "cifar10/efs" #prefix in your bucket
s3_output_location = f's3://{s3_bucket}/{prefix}/output'
print(f'S3 model output location: {s3_output_location}')
security_group_ids = ['sg-0192524ef63ec6138'] # ['sg-xxxxxxxx']
# subnets = ['subnet-0a84bcfa36d3981e6','subnet-0304abaaefc2b1c34','subnet-0a2204b79f378b178'] # [ 'subnet-xxxxxxx', 'subnet-xxxxxxx', 'subnet-xxxxxxx']
subnets = ['subnet-0a84bcfa36d3981e6'] # [ 'subnet-xxxxxxx', 'subnet-xxxxxxx', 'subnet-xxxxxxx']
from sagemaker.tensorflow import TensorFlow
estimator = TensorFlow(base_job_name='cifar10',
entry_point='cifar10_keras_sm_tf2.py',
source_dir='training_script',
role=role,
framework_version='2.0.0',
py_version='py3',
script_mode=True,
hyperparameters={'epochs' : 1},
train_instance_count=1,
train_instance_type='ml.p3.2xlarge',
output_path=s3_output_location,
subnets=subnets,
security_group_ids=security_group_ids,
sagemaker_session = sagemaker.Session()
)
estimator.fit({'train': train,
'validation': validation,
'eval': eval,
})
# estimator.fit({'train': 'file://data/train',
# 'validation': 'file://data/validation',
# 'eval': 'file://data/eval'})
```
# Selecting True or False for VPC_Mode
#### **[Important] Change this to True when running in VPC mode**
```
VPC_Mode = False
from sagemaker.tensorflow import TensorFlow
def retrieve_estimator(VPC_Mode):
if VPC_Mode:
# In VPC mode, specify the subnets and security groups.
estimator = TensorFlow(base_job_name='cifar10',
entry_point='cifar10_keras_sm_tf2.py',
source_dir='training_script',
role=role,
framework_version='2.0.0',
py_version='py3',
script_mode=True,
hyperparameters={'epochs': 2},
train_instance_count=1,
train_instance_type='ml.p3.8xlarge',
subnets = ['subnet-090c1fad32165b0fa','subnet-0bd7cff3909c55018'],
security_group_ids = ['sg-0f45d634d80aef27e']
)
else:
estimator = TensorFlow(base_job_name='cifar10',
entry_point='cifar10_keras_sm_tf2.py',
source_dir='training_script',
role=role,
framework_version='2.0.0',
py_version='py3',
script_mode=True,
hyperparameters={'epochs': 2},
train_instance_count=1,
train_instance_type='ml.p3.8xlarge')
return estimator
estimator = retrieve_estimator(VPC_Mode)
```
Run the training. This time, we specify the S3 data locations for each channel (`train, validation, eval`).<br>
After training completes, also check the billable seconds. Billable seconds is the time you are actually charged for while the training runs.
```
Billable seconds: <time>
```
For reference, training for 5 epochs on an `ml.p2.xlarge` instance takes about 6-7 minutes in total, of which the actual training takes about 3-4 minutes.
```
%%time
estimator.fit({'train':'{}/train'.format(dataset_location),
'validation':'{}/validation'.format(dataset_location),
'eval':'{}/eval'.format(dataset_location)})
```
## Saving the training_job_name
Save the current training_job_name.
- The training_job_name gives access to the details of the training run and the S3 path of the resulting **Model Artifact** file.
```
train_job_name = estimator._current_job_name
%store train_job_name
```
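If you later need the S3 path of the trained model artifact directly, one possible sketch using the boto3 `describe_training_job` API is:
```
# a sketch: look up the S3 path of the model artifact for this training job
desc = sagemaker_session.sagemaker_client.describe_training_job(TrainingJobName = train_job_name)
desc['ModelArtifacts']['S3ModelArtifacts']
```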
<a href="https://colab.research.google.com/github/iotanalytics/IoTTutorial/blob/main/code/preprocessing_and_decomposition/Matrix_Profile.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## Matrix Profile
## Introduction
The matrix profile (MP) is a data structure and associated algorithms that helps solve the dual problem of anomaly detection and motif discovery. It is robust, scalable and largely parameter-free.
MP can be combined with other algorithms to accomplish:
* Motif discovery
* Time series chains
* Anomaly discovery
* Joins
* Semantic segmentation
matrixprofile-ts offers 3 different algorithms to compute Matrix Profile:
* STAMP (Scalable Time Series Anytime Matrix Profile) - Each distance profile is independent of other distance profiles, the order in which they are computed can be random. It is an anytime algorithm.
* STOMP (Scalable Time Series Ordered Matrix Profile) - This algorithm is an exact ordered algorithm. It is significantly faster than STAMP.
* SCRIMP++ (Scalable Column Independent Matrix Profile) - This algorithm combines the anytime component of STAMP with the speed of STOMP.
See: https://towardsdatascience.com/introduction-to-matrix-profiles-5568f3375d90
## Code Example
```
!pip install matrixprofile-ts
import pandas as pd
## example data importing
data = pd.read_csv('https://raw.githubusercontent.com/iotanalytics/IoTTutorial/main/data/SCG_data.csv').drop('Unnamed: 0',1).to_numpy()[0:20,:1000]
import operator
import numpy as np
import matplotlib.pyplot as plt
from matrixprofile import *
import numpy as np
from datetime import datetime
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn import neighbors, datasets
# Pull a portion of the data
pattern = data[10,:] + max(abs(data[10,:]))
# Compute Matrix Profile
m = 10
mp = matrixProfile.stomp(pattern,m)
#Append np.nan to Matrix profile to enable plotting against raw data
mp_adj = np.append(mp[0],np.zeros(m-1)+np.nan)
#Plot the signal data
fig, (ax1, ax2) = plt.subplots(2,1,sharex=True,figsize=(20,10))
ax1.plot(np.arange(len(pattern)),pattern)
ax1.set_ylabel('Signal', size=22)
#Plot the Matrix Profile
ax2.plot(np.arange(len(mp_adj)),mp_adj, label="Matrix Profile", color='red')
ax2.set_ylabel('Matrix Profile', size=22)
ax2.set_xlabel('Time', size=22);
```
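Since the highest Matrix Profile value marks the most anomalous subsequence (a discord) and the lowest value marks the best-matching pair (a motif), a minimal sketch for pulling out those indices is shown below (assuming, as in matrixprofile-ts, that `stomp` returns the profile together with its index array):
```
# locate the top discord and the closest motif pair from the computed profile
profile, profile_index = mp
discord_idx = np.argmax(profile)                # start of the most anomalous subsequence
motif_idx = np.argmin(profile)                  # start of one half of the closest pair
motif_match_idx = int(profile_index[motif_idx]) # index of its nearest neighbour
discord_idx, motif_idx, motif_match_idx
```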
## Discussion
Pros:
* It is exact: For motif discovery, discord discovery, time series joins etc., the Matrix Profile based methods provide no false positives or false dismissals.
* It is simple and parameter-free: In contrast, the more general algorithms in this space
typically require building and tuning spatial access methods and/or hash functions.
* It is space efficient: Matrix Profile construction algorithms requires an inconsequential
space overhead, just linear in the time series length with a small constant factor, allowing
massive datasets to be processed in main memory (for most data mining, disk is death).
* It allows anytime algorithms: While exact MP algorithms are extremely scalable, for
extremely large datasets we can compute the Matrix Profile in an anytime fashion, allowing
ultra-fast approximate solutions and real-time data interaction.
* It is incrementally maintainable: Having computed the Matrix Profile for a dataset,
we can incrementally update it very efficiently. In many domains this means we can effectively
maintain exact joins, motifs, discords on streaming data forever.
* It can leverage hardware: Matrix Profile construction is embarrassingly parallelizable,
both on multicore processors, GPUs, distributed systems etc.
* It is free of the curse of dimensionality: That is to say, it has time complexity that is
constant in subsequence length. This is a very unusual and desirable property; virtually all
existing time series algorithms scale poorly as the subsequence length grows.
* It can be constructed in deterministic time: Almost all algorithms for time series
data mining can take radically different times to finish on two (even slightly) different datasets.
In contrast, given only the length of the time series, we can precisely predict in advance how
long it will take to compute the Matrix Profile. (this allows resource planning)
* It can handle missing data: Even in the presence of missing data, we can provide
answers which are guaranteed to have no false negatives.
* Finally, and subjectively: Simplicity and Intuitiveness: Seeing the world through
the MP lens often invites/suggests simple and elegant solutions.
Cons:
* Larger datasets can take a long time to compute. Scalability needs to be addressed.
* Cannot be used with Dynamic time Warping as of now.
* DTW is used for one-to-all matching whereas MP is used for all-to-all matching.
* DTW is used for smaller datasets rather than large.
* Need to adjust window size manually for different datasets.
*How to read the MP*:
* Where you see relatively low values, you know that the subsequence in the original time
series must have (at least one) relatively similar subsequence elsewhere in the data (such
regions are “motifs” or reoccurring patterns)
* Where you see relatively high values, you know that the subsequence in the original time
series must be unique in its shape (such areas are “discords” or anomalies). In fact, the highest point is exactly the definition of Time
Series Discord, perhaps the best anomaly detector for time series.
## References
https://www.cs.ucr.edu/~eamonn/MatrixProfile.html (powerpoints on this site - a lot of examples)
https://towardsdatascience.com/introduction-to-matrix-profiles-5568f3375d90
Python implementation: https://github.com/TDAmeritrade/stumpy
```
%matplotlib inline
```
What is `torch.nn` *really*?
============================
by Jeremy Howard, `fast.ai <https://www.fast.ai>`_. Thanks to Rachel Thomas and Francisco Ingham.
We recommend running this tutorial as a notebook, not a script. To download the notebook (.ipynb) file,
click `here <https://pytorch.org/tutorials/beginner/nn_tutorial.html#sphx-glr-download-beginner-nn-tutorial-py>`_ .
PyTorch provides the elegantly designed modules and classes `torch.nn <https://pytorch.org/docs/stable/nn.html>`_ ,
`torch.optim <https://pytorch.org/docs/stable/optim.html>`_ ,
`Dataset <https://pytorch.org/docs/stable/data.html?highlight=dataset#torch.utils.data.Dataset>`_ ,
and `DataLoader <https://pytorch.org/docs/stable/data.html?highlight=dataloader#torch.utils.data.DataLoader>`_
to help you create and train neural networks.
In order to fully utilize their power and customize
them for your problem, you need to really understand exactly what they're
doing. To develop this understanding, we will first train a basic neural net
on the MNIST data set without using any features from these models; we will
initially only use the most basic PyTorch tensor functionality. Then, we will
incrementally add one feature from ``torch.nn``, ``torch.optim``, ``Dataset``, or
``DataLoader`` at a time, showing exactly what each piece does, and how it
works to make the code either more concise, or more flexible.
**This tutorial assumes you already have PyTorch installed, and are familiar
with the basics of tensor operations.** (If you're familiar with Numpy array
operations, you'll find the PyTorch tensor operations used here nearly identical).
MNIST data setup
----------------
We will use the classic `MNIST <http://deeplearning.net/data/mnist/>`_ dataset,
which consists of black-and-white images of hand-drawn digits (between 0 and 9).
We will use `pathlib <https://docs.python.org/3/library/pathlib.html>`_
for dealing with paths (part of the Python 3 standard library), and will
download the dataset using
`requests <http://docs.python-requests.org/en/master/>`_. We will only
import modules when we use them, so you can see exactly what's being
used at each point.
```
from pathlib import Path
import requests
DATA_PATH = Path("data")
PATH = DATA_PATH / "mnist"
PATH.mkdir(parents=True, exist_ok=True)
URL = "http://deeplearning.net/data/mnist/"
FILENAME = "mnist.pkl.gz"
if not (PATH / FILENAME).exists():
content = requests.get(URL + FILENAME).content
(PATH / FILENAME).open("wb").write(content)
```
This dataset is in numpy array format, and has been stored using pickle,
a python-specific format for serializing data.
```
import pickle
import gzip
with gzip.open((PATH / FILENAME).as_posix(), "rb") as f:
((x_train, y_train), (x_valid, y_valid), _) = pickle.load(f, encoding="latin-1")
```
Each image is 28 x 28, and is being stored as a flattened row of length
784 (=28x28). Let's take a look at one; we need to reshape it to 2d
first.
```
from matplotlib import pyplot
import numpy as np
pyplot.imshow(x_train[0].reshape((28, 28)), cmap="gray")
print(x_train.shape)
```
PyTorch uses ``torch.tensor``, rather than numpy arrays, so we need to
convert our data.
```
import torch
x_train, y_train, x_valid, y_valid = map(
torch.tensor, (x_train, y_train, x_valid, y_valid)
)
n, c = x_train.shape
x_train, x_train.shape, y_train.min(), y_train.max()
print(x_train, y_train)
print(x_train.shape)
print(y_train.min(), y_train.max())
```
Neural net from scratch (no torch.nn)
---------------------------------------------
Let's first create a model using nothing but PyTorch tensor operations. We're assuming
you're already familiar with the basics of neural networks. (If you're not, you can
learn them at `course.fast.ai <https://course.fast.ai>`_).
PyTorch provides methods to create random or zero-filled tensors, which we will
use to create our weights and bias for a simple linear model. These are just regular
tensors, with one very special addition: we tell PyTorch that they require a
gradient. This causes PyTorch to record all of the operations done on the tensor,
so that it can calculate the gradient during back-propagation *automatically*!
For the weights, we set ``requires_grad`` **after** the initialization, since we
don't want that step included in the gradient. (Note that a trailing ``_`` in
PyTorch signifies that the operation is performed in-place.)
<div class="alert alert-info"><h4>Note</h4><p>We are initializing the weights here with
`Xavier initialisation <http://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf>`_
(by multiplying with 1/sqrt(n)).</p></div>
```
import math
weights = torch.randn(784, 10) / math.sqrt(784)
weights.requires_grad_()
bias = torch.zeros(10, requires_grad=True)
```
Thanks to PyTorch's ability to calculate gradients automatically, we can
use any standard Python function (or callable object) as a model! So
let's just write a plain matrix multiplication and broadcasted addition
to create a simple linear model. We also need an activation function, so
we'll write `log_softmax` and use it. Remember: although PyTorch
provides lots of pre-written loss functions, activation functions, and
so forth, you can easily write your own using plain python. PyTorch will
even create fast GPU or vectorized CPU code for your function
automatically.
```
def log_softmax(x):
return x - x.exp().sum(-1).log().unsqueeze(-1)
def model(xb):
return log_softmax(xb @ weights + bias)
```
In the above, the ``@`` stands for the matrix multiplication operation. We will call
our function on one batch of data (in this case, 64 images). This is
one *forward pass*. Note that our predictions won't be any better than
random at this stage, since we start with random weights.
```
bs = 64 # batch size
xb = x_train[0:bs] # a mini-batch from x
preds = model(xb) # predictions
preds[0], preds.shape
print(preds[0], preds.shape)
```
As you see, the ``preds`` tensor contains not only the tensor values, but also a
gradient function. We'll use this later to do backprop.
Let's implement negative log-likelihood to use as the loss function
(again, we can just use standard Python):
```
def nll(input, target):
return -input[range(target.shape[0]), target].mean()
loss_func = nll
```
Let's check our loss with our random model, so we can see if we improve
after a backprop pass later.
```
yb = y_train[0:bs]
print(loss_func(preds, yb))
```
Let's also implement a function to calculate the accuracy of our model.
For each prediction, if the index with the largest value matches the
target value, then the prediction was correct.
```
def accuracy(out, yb):
preds = torch.argmax(out, dim=1)
return (preds == yb).float().mean()
```
Let's check the accuracy of our random model, so we can see if our
accuracy improves as our loss improves.
```
print(accuracy(preds, yb))
```
We can now run a training loop. For each iteration, we will:
- select a mini-batch of data (of size ``bs``)
- use the model to make predictions
- calculate the loss
- ``loss.backward()`` updates the gradients of the model, in this case, ``weights``
and ``bias``.
We now use these gradients to update the weights and bias. We do this
within the ``torch.no_grad()`` context manager, because we do not want these
actions to be recorded for our next calculation of the gradient. You can read
more about how PyTorch's Autograd records operations
`here <https://pytorch.org/docs/stable/notes/autograd.html>`_.
We then set the
gradients to zero, so that we are ready for the next loop.
Otherwise, our gradients would record a running tally of all the operations
that had happened (i.e. ``loss.backward()`` *adds* the gradients to whatever is
already stored, rather than replacing them).
.. tip:: You can use the standard python debugger to step through PyTorch
code, allowing you to check the various variable values at each step.
Uncomment ``set_trace()`` below to try it out.
```
from IPython.core.debugger import set_trace
lr = 0.5 # learning rate
epochs = 2 # how many epochs to train for
for epoch in range(epochs):
for i in range((n - 1) // bs + 1):
# set_trace()
start_i = i * bs
end_i = start_i + bs
xb = x_train[start_i:end_i]
yb = y_train[start_i:end_i]
pred = model(xb)
loss = loss_func(pred, yb)
loss.backward()
with torch.no_grad():
weights -= weights.grad * lr
bias -= bias.grad * lr
weights.grad.zero_()
bias.grad.zero_()
```
That's it: we've created and trained a minimal neural network (in this case, a
logistic regression, since we have no hidden layers) entirely from scratch!
Let's check the loss and accuracy and compare those to what we got
earlier. We expect that the loss will have decreased and accuracy to
have increased, and they have.
```
print(loss_func(model(xb), yb), accuracy(model(xb), yb))
```
Using torch.nn.functional
------------------------------
We will now refactor our code, so that it does the same thing as before, only
we'll start taking advantage of PyTorch's ``nn`` classes to make it more concise
and flexible. At each step from here, we should be making our code one or more
of: shorter, more understandable, and/or more flexible.
The first and easiest step is to make our code shorter by replacing our
hand-written activation and loss functions with those from ``torch.nn.functional``
(which is generally imported into the namespace ``F`` by convention). This module
contains all the functions in the ``torch.nn`` library (whereas other parts of the
library contain classes). As well as a wide range of loss and activation
functions, you'll also find here some convenient functions for creating neural
nets, such as pooling functions. (There are also functions for doing convolutions,
linear layers, etc, but as we'll see, these are usually better handled using
other parts of the library.)
If you're using negative log likelihood loss and log softmax activation,
then Pytorch provides a single function ``F.cross_entropy`` that combines
the two. So we can even remove the activation function from our model.
```
import torch.nn.functional as F
loss_func = F.cross_entropy
def model(xb):
return xb @ weights + bias
```
Note that we no longer call ``log_softmax`` in the ``model`` function. Let's
confirm that our loss and accuracy are the same as before:
```
print(loss_func(model(xb), yb), accuracy(model(xb), yb))
```
Refactor using nn.Module
-----------------------------
Next up, we'll use ``nn.Module`` and ``nn.Parameter``, for a clearer and more
concise training loop. We subclass ``nn.Module`` (which itself is a class and
able to keep track of state). In this case, we want to create a class that
holds our weights, bias, and method for the forward step. ``nn.Module`` has a
number of attributes and methods (such as ``.parameters()`` and ``.zero_grad()``)
which we will be using.
<div class="alert alert-info"><h4>Note</h4><p>``nn.Module`` (uppercase M) is a PyTorch specific concept, and is a
class we'll be using a lot. ``nn.Module`` is not to be confused with the Python
concept of a (lowercase ``m``) `module <https://docs.python.org/3/tutorial/modules.html>`_,
which is a file of Python code that can be imported.</p></div>
```
from torch import nn
class Mnist_Logistic(nn.Module):
def __init__(self):
super().__init__()
self.weights = nn.Parameter(torch.randn(784, 10) / math.sqrt(784))
self.bias = nn.Parameter(torch.zeros(10))
def forward(self, xb):
return xb @ self.weights + self.bias
```
Since we're now using an object instead of just using a function, we
first have to instantiate our model:
```
model = Mnist_Logistic()
```
Now we can calculate the loss in the same way as before. Note that
``nn.Module`` objects are used as if they are functions (i.e they are
*callable*), but behind the scenes Pytorch will call our ``forward``
method automatically.
```
print(loss_func(model(xb), yb))
```
Previously for our training loop we had to update the values for each parameter
by name, and manually zero out the grads for each parameter separately, like this:
::
with torch.no_grad():
weights -= weights.grad * lr
bias -= bias.grad * lr
weights.grad.zero_()
bias.grad.zero_()
Now we can take advantage of model.parameters() and model.zero_grad() (which
are both defined by PyTorch for ``nn.Module``) to make those steps more concise
and less prone to the error of forgetting some of our parameters, particularly
if we had a more complicated model:
::
with torch.no_grad():
for p in model.parameters(): p -= p.grad * lr
model.zero_grad()
We'll wrap our little training loop in a ``fit`` function so we can run it
again later.
```
def fit():
for epoch in range(epochs):
for i in range((n - 1) // bs + 1):
start_i = i * bs
end_i = start_i + bs
xb = x_train[start_i:end_i]
yb = y_train[start_i:end_i]
pred = model(xb)
loss = loss_func(pred, yb)
loss.backward()
with torch.no_grad():
for p in model.parameters():
p -= p.grad * lr
model.zero_grad()
fit()
```
Let's double-check that our loss has gone down:
```
print(loss_func(model(xb), yb))
```
Refactor using nn.Linear
-------------------------
We continue to refactor our code. Instead of manually defining and
initializing ``self.weights`` and ``self.bias``, and calculating ``xb @
self.weights + self.bias``, we will instead use the Pytorch class
`nn.Linear <https://pytorch.org/docs/stable/nn.html#linear-layers>`_ for a
linear layer, which does all that for us. Pytorch has many types of
predefined layers that can greatly simplify our code, and often makes it
faster too.
```
class Mnist_Logistic(nn.Module):
def __init__(self):
super().__init__()
self.lin = nn.Linear(784, 10)
def forward(self, xb):
return self.lin(xb)
```
We instantiate our model and calculate the loss in the same way as before:
```
model = Mnist_Logistic()
print(loss_func(model(xb), yb))
```
We are still able to use our same ``fit`` method as before.
```
fit()
print(loss_func(model(xb), yb))
```
Refactor using optim
------------------------------
Pytorch also has a package with various optimization algorithms, ``torch.optim``.
We can use the ``step`` method from our optimizer to take a forward step, instead
of manually updating each parameter.
This will let us replace our previous manually coded optimization step:
::
with torch.no_grad():
for p in model.parameters(): p -= p.grad * lr
model.zero_grad()
and instead use just:
::
opt.step()
opt.zero_grad()
(``optim.zero_grad()`` resets the gradient to 0 and we need to call it before
computing the gradient for the next minibatch.)
```
from torch import optim
```
We'll define a little function to create our model and optimizer so we
can reuse it in the future.
```
def get_model():
model = Mnist_Logistic()
return model, optim.SGD(model.parameters(), lr=lr)
model, opt = get_model()
print(loss_func(model(xb), yb))
for epoch in range(epochs):
for i in range((n - 1) // bs + 1):
start_i = i * bs
end_i = start_i + bs
xb = x_train[start_i:end_i]
yb = y_train[start_i:end_i]
pred = model(xb)
loss = loss_func(pred, yb)
loss.backward()
opt.step()
opt.zero_grad()
print(loss_func(model(xb), yb))
```
Refactor using Dataset
------------------------------
PyTorch has an abstract Dataset class. A Dataset can be anything that has
a ``__len__`` function (called by Python's standard ``len`` function) and
a ``__getitem__`` function as a way of indexing into it.
`This tutorial <https://pytorch.org/tutorials/beginner/data_loading_tutorial.html>`_
walks through a nice example of creating a custom ``FacialLandmarkDataset`` class
as a subclass of ``Dataset``.
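To make that contract concrete, here is a tiny sketch (not part of the original tutorial) of a custom dataset wrapping our training tensors; all it needs is ``__len__`` and ``__getitem__``:
```
class SimpleDataset:
    # a minimal sketch of the Dataset contract
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __len__(self):
        return len(self.x)

    def __getitem__(self, i):
        return self.x[i], self.y[i]

simple_ds = SimpleDataset(x_train, y_train)
print(len(simple_ds), simple_ds[0][0].shape)
```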
PyTorch's `TensorDataset <https://pytorch.org/docs/stable/_modules/torch/utils/data/dataset.html#TensorDataset>`_
is a Dataset wrapping tensors. By defining a length and way of indexing,
this also gives us a way to iterate, index, and slice along the first
dimension of a tensor. This will make it easier to access both the
independent and dependent variables in the same line as we train.
```
from torch.utils.data import TensorDataset
```
Both ``x_train`` and ``y_train`` can be combined in a single ``TensorDataset``,
which will be easier to iterate over and slice.
```
train_ds = TensorDataset(x_train, y_train)
```
Previously, we had to iterate through minibatches of x and y values separately:
::
xb = x_train[start_i:end_i]
yb = y_train[start_i:end_i]
Now, we can do these two steps together:
::
xb,yb = train_ds[i*bs : i*bs+bs]
```
model, opt = get_model()
for epoch in range(epochs):
for i in range((n - 1) // bs + 1):
xb, yb = train_ds[i * bs: i * bs + bs]
pred = model(xb)
loss = loss_func(pred, yb)
loss.backward()
opt.step()
opt.zero_grad()
print(loss_func(model(xb), yb))
```
Refactor using DataLoader
------------------------------
Pytorch's ``DataLoader`` is responsible for managing batches. You can
create a ``DataLoader`` from any ``Dataset``. ``DataLoader`` makes it easier
to iterate over batches. Rather than having to use ``train_ds[i*bs : i*bs+bs]``,
the DataLoader gives us each minibatch automatically.
```
from torch.utils.data import DataLoader
train_ds = TensorDataset(x_train, y_train)
train_dl = DataLoader(train_ds, batch_size=bs)
```
Previously, our loop iterated over batches (xb, yb) like this:
::
for i in range((n-1)//bs + 1):
xb,yb = train_ds[i*bs : i*bs+bs]
pred = model(xb)
Now, our loop is much cleaner, as (xb, yb) are loaded automatically from the data loader:
::
for xb,yb in train_dl:
pred = model(xb)
```
model, opt = get_model()
for epoch in range(epochs):
for xb, yb in train_dl:
pred = model(xb)
loss = loss_func(pred, yb)
loss.backward()
opt.step()
opt.zero_grad()
print(loss_func(model(xb), yb))
```
Thanks to Pytorch's ``nn.Module``, ``nn.Parameter``, ``Dataset``, and ``DataLoader``,
our training loop is now dramatically smaller and easier to understand. Let's
now try to add the basic features necessary to create effective models in practice.
Add validation
-----------------------
In section 1, we were just trying to get a reasonable training loop set up for
use on our training data. In reality, you **always** should also have
a `validation set <https://www.fast.ai/2017/11/13/validation-sets/>`_, in order
to identify if you are overfitting.
Shuffling the training data is
`important <https://www.quora.com/Does-the-order-of-training-data-matter-when-training-neural-networks>`_
to prevent correlation between batches and overfitting. On the other hand, the
validation loss will be identical whether we shuffle the validation set or not.
Since shuffling takes extra time, it makes no sense to shuffle the validation data.
We'll use a batch size for the validation set that is twice as large as
that for the training set. This is because the validation set does not
need backpropagation and thus takes less memory (it doesn't need to
store the gradients). We take advantage of this to use a larger batch
size and compute the loss more quickly.
```
train_ds = TensorDataset(x_train, y_train)
train_dl = DataLoader(train_ds, batch_size=bs, shuffle=True)
valid_ds = TensorDataset(x_valid, y_valid)
valid_dl = DataLoader(valid_ds, batch_size=bs * 2)
```
We will calculate and print the validation loss at the end of each epoch.
(Note that we always call ``model.train()`` before training, and ``model.eval()``
before inference, because these are used by layers such as ``nn.BatchNorm2d``
and ``nn.Dropout`` to ensure appropriate behaviour for these different phases.)
```
model, opt = get_model()
for epoch in range(epochs):
model.train()
for xb, yb in train_dl:
pred = model(xb)
loss = loss_func(pred, yb)
loss.backward()
opt.step()
opt.zero_grad()
model.eval()
with torch.no_grad():
valid_loss = sum(loss_func(model(xb), yb) for xb, yb in valid_dl)
print(epoch, valid_loss / len(valid_dl))
```
Create fit() and get_data()
----------------------------------
We'll now do a little refactoring of our own. Since we go through a similar
process twice of calculating the loss for both the training set and the
validation set, let's make that into its own function, ``loss_batch``, which
computes the loss for one batch.
We pass an optimizer in for the training set, and use it to perform
backprop. For the validation set, we don't pass an optimizer, so the
method doesn't perform backprop.
```
def loss_batch(model, loss_func, xb, yb, opt=None):
loss = loss_func(model(xb), yb)
if opt is not None:
loss.backward()
opt.step()
opt.zero_grad()
return loss.item(), len(xb)
```
``fit`` runs the necessary operations to train our model and compute the
training and validation losses for each epoch.
```
import numpy as np
def fit(epochs, model, loss_func, opt, train_dl, valid_dl):
for epoch in range(epochs):
model.train()
for xb, yb in train_dl:
loss_batch(model, loss_func, xb, yb, opt)
model.eval()
with torch.no_grad():
losses, nums = zip(
*[loss_batch(model, loss_func, xb, yb) for xb, yb in valid_dl]
)
val_loss = np.sum(np.multiply(losses, nums)) / np.sum(nums)
print(epoch, val_loss)
```
``get_data`` returns dataloaders for the training and validation sets.
```
def get_data(train_ds, valid_ds, bs):
return (
DataLoader(train_ds, batch_size=bs, shuffle=True),
DataLoader(valid_ds, batch_size=bs * 2),
)
```
Now, our whole process of obtaining the data loaders and fitting the
model can be run in 3 lines of code:
```
train_dl, valid_dl = get_data(train_ds, valid_ds, bs)
model, opt = get_model()
fit(epochs, model, loss_func, opt, train_dl, valid_dl)
```
You can use these basic 3 lines of code to train a wide variety of models.
Let's see if we can use them to train a convolutional neural network (CNN)!
Switch to CNN
-------------
We are now going to build our neural network with three convolutional layers.
Because none of the functions in the previous section assume anything about
the model form, we'll be able to use them to train a CNN without any modification.
We will use Pytorch's predefined
`Conv2d <https://pytorch.org/docs/stable/nn.html#torch.nn.Conv2d>`_ class
as our convolutional layer. We define a CNN with 3 convolutional layers.
Each convolution is followed by a ReLU. At the end, we perform an
average pooling. (Note that ``view`` is PyTorch's version of numpy's
``reshape``)
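As a quick aside (a tiny standalone sketch added here, not part of the original tutorial), ``view`` gives the same underlying data a new shape, much like numpy's ``reshape``, and a ``-1`` asks PyTorch to infer that dimension:
```
import torch

t = torch.arange(12)
print(t.view(3, 4).shape)   # torch.Size([3, 4]) -- same data, new shape
print(t.view(-1, 6).shape)  # torch.Size([2, 6]) -- the -1 dimension is inferred
```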
```
class Mnist_CNN(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1)
self.conv2 = nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1)
self.conv3 = nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1)
def forward(self, xb):
xb = xb.view(-1, 1, 28, 28)
xb = F.relu(self.conv1(xb))
xb = F.relu(self.conv2(xb))
xb = F.relu(self.conv3(xb))
xb = F.avg_pool2d(xb, 4)
return xb.view(-1, xb.size(1))
lr = 0.1
```
`Momentum <https://cs231n.github.io/neural-networks-3/#sgd>`_ is a variation on
stochastic gradient descent that takes previous updates into account as well
and generally leads to faster training.
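As a rough standalone sketch of the idea (this mirrors the classic heavy-ball formulation, not PyTorch's internal code), momentum keeps a running velocity of past gradients and steps along that velocity instead of the raw gradient:
```
# Toy 1-D example: minimize f(w) = w**2 with SGD + momentum.
lr, mom = 0.1, 0.9
w, v = 5.0, 0.0
for _ in range(100):
    grad = 2 * w        # derivative of w**2
    v = mom * v + grad  # velocity accumulates past gradients
    w = w - lr * v      # step along the velocity
print(round(w, 3))      # ends up close to the minimum at w = 0
```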
```
model = Mnist_CNN()
opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)
fit(epochs, model, loss_func, opt, train_dl, valid_dl)
```
nn.Sequential
------------------------
``torch.nn`` has another handy class we can use to simplify our code:
`Sequential <https://pytorch.org/docs/stable/nn.html#torch.nn.Sequential>`_ .
A ``Sequential`` object runs each of the modules contained within it, in a
sequential manner. This is a simpler way of writing our neural network.
To take advantage of this, we need to be able to easily define a
**custom layer** from a given function. For instance, PyTorch doesn't
have a `view` layer, and we need to create one for our network. ``Lambda``
will create a layer that we can then use when defining a network with
``Sequential``.
```
class Lambda(nn.Module):
def __init__(self, func):
super().__init__()
self.func = func
def forward(self, x):
return self.func(x)
def preprocess(x):
return x.view(-1, 1, 28, 28)
```
The model created with ``Sequential`` is simply:
```
model = nn.Sequential(
Lambda(preprocess),
nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),
nn.ReLU(),
nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1),
nn.ReLU(),
nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1),
nn.ReLU(),
nn.AvgPool2d(4),
Lambda(lambda x: x.view(x.size(0), -1)),
)
opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)
fit(epochs, model, loss_func, opt, train_dl, valid_dl)
```
Wrapping DataLoader
-----------------------------
Our CNN is fairly concise, but it only works with MNIST, because:
- It assumes the input is a 28\*28 long vector
- It assumes that the final CNN grid size is 4\*4 (since that's the average
pooling kernel size we used)
Let's get rid of these two assumptions, so our model works with any 2d
single channel image. First, we can remove the initial Lambda layer by
moving the data preprocessing into a generator:
```
def preprocess(x, y):
return x.view(-1, 1, 28, 28), y
class WrappedDataLoader:
def __init__(self, dl, func):
self.dl = dl
self.func = func
def __len__(self):
return len(self.dl)
def __iter__(self):
batches = iter(self.dl)
for b in batches:
yield (self.func(*b))
train_dl, valid_dl = get_data(train_ds, valid_ds, bs)
train_dl = WrappedDataLoader(train_dl, preprocess)
valid_dl = WrappedDataLoader(valid_dl, preprocess)
```
Next, we can replace ``nn.AvgPool2d`` with ``nn.AdaptiveAvgPool2d``, which
allows us to define the size of the *output* tensor we want, rather than
the *input* tensor we have. As a result, our model will work with any
size input.
```
model = nn.Sequential(
nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),
nn.ReLU(),
nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1),
nn.ReLU(),
nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1),
nn.ReLU(),
nn.AdaptiveAvgPool2d(1),
Lambda(lambda x: x.view(x.size(0), -1)),
)
opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)
```
Let's try it out:
```
fit(epochs, model, loss_func, opt, train_dl, valid_dl)
```
Using your GPU
---------------
If you're lucky enough to have access to a CUDA-capable GPU (you can
rent one for about $0.50/hour from most cloud providers) you can
use it to speed up your code. First check that your GPU is working in
Pytorch:
```
print(torch.cuda.is_available())
```
And then create a device object for it:
```
dev = torch.device(
"cuda") if torch.cuda.is_available() else torch.device("cpu")
```
Let's update ``preprocess`` to move batches to the GPU:
```
def preprocess(x, y):
return x.view(-1, 1, 28, 28).to(dev), y.to(dev)
train_dl, valid_dl = get_data(train_ds, valid_ds, bs)
train_dl = WrappedDataLoader(train_dl, preprocess)
valid_dl = WrappedDataLoader(valid_dl, preprocess)
```
Finally, we can move our model to the GPU.
```
model.to(dev)
opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)
```
You should find it runs faster now:
```
fit(epochs, model, loss_func, opt, train_dl, valid_dl)
```
Closing thoughts
-----------------
We now have a general data pipeline and training loop which you can use for
training many types of models using Pytorch. To see how simple training a model
can now be, take a look at the `mnist_sample` sample notebook.
Of course, there are many things you'll want to add, such as data augmentation,
hyperparameter tuning, monitoring training, transfer learning, and so forth.
These features are available in the fastai library, which has been developed
using the same design approach shown in this tutorial, providing a natural
next step for practitioners looking to take their models further.
We promised at the start of this tutorial we'd explain through example each of
``torch.nn``, ``torch.optim``, ``Dataset``, and ``DataLoader``. So let's summarize
what we've seen:
- **torch.nn**
  + ``Module``: creates a callable which behaves like a function, but can also contain state (such as neural net layer weights). It knows what ``Parameter`` (s) it contains and can zero all their gradients, loop through them for weight updates, etc.
+ ``Parameter``: a wrapper for a tensor that tells a ``Module`` that it has weights
that need updating during backprop. Only tensors with the `requires_grad` attribute set are updated
  + ``functional``: a module (usually imported into the ``F`` namespace by convention) which contains activation functions, loss functions, etc, as well as non-stateful versions of layers such as convolutional and linear layers.
- ``torch.optim``: Contains optimizers such as ``SGD``, which update the weights
of ``Parameter`` during the backward step
- ``Dataset``: An abstract interface of objects with a ``__len__`` and a ``__getitem__``,
including classes provided with Pytorch such as ``TensorDataset``
- ``DataLoader``: Takes any ``Dataset`` and creates an iterator which returns batches of data.
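To make the ``Dataset``/``DataLoader`` contract concrete, here is a minimal standalone sketch (added for reference, not code from the tutorial): any object exposing ``__len__`` and ``__getitem__`` can be batched by a ``DataLoader``.
```
import torch
from torch.utils.data import Dataset, DataLoader

class SquaresDataset(Dataset):
    """Toy dataset: item i is the pair (i, i**2)."""
    def __init__(self, n):
        self.x = torch.arange(n, dtype=torch.float32)
        self.y = self.x ** 2

    def __len__(self):
        return len(self.x)

    def __getitem__(self, i):
        return self.x[i], self.y[i]

dl = DataLoader(SquaresDataset(10), batch_size=4, shuffle=True)
for xb, yb in dl:  # each iteration yields one shuffled minibatch
    print(xb.shape, yb.shape)
```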
# ART for TensorFlow v2 - Keras API
This notebook demonstrates applying ART to TensorFlow v2 models built with the Keras API. The code follows and extends the examples on www.tensorflow.org.
```
import warnings
warnings.filterwarnings('ignore')
import tensorflow as tf
tf.compat.v1.disable_eager_execution()
import numpy as np
from matplotlib import pyplot as plt
from art.estimators.classification import KerasClassifier
from art.attacks.evasion import FastGradientMethod, CarliniLInfMethod
if tf.__version__[0] != '2':
raise ImportError('This notebook requires TensorFlow v2.')
```
# Load MNIST dataset
```
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
x_test = x_test[0:100]
y_test = y_test[0:100]
```
# TensorFlow with Keras API
Create a model using the Keras API. Here we use the Keras Sequential model and add a sequence of layers. Afterwards, the model is compiled with an optimizer, a loss function, and metrics.
```
model = tf.keras.models.Sequential([
tf.keras.layers.InputLayer(input_shape=(28, 28)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy']);
```
Fit the model on training data.
```
model.fit(x_train, y_train, epochs=3);
```
Evaluate model accuracy on test data.
```
loss_test, accuracy_test = model.evaluate(x_test, y_test)
print('Accuracy on test data: {:4.2f}%'.format(accuracy_test * 100))
```
Create an ART Keras classifier for the TensorFlow Keras model.
```
classifier = KerasClassifier(model=model, clip_values=(0, 1))
```
## Fast Gradient Sign Method attack
Create an ART Fast Gradient Sign Method attack.
```
attack_fgsm = FastGradientMethod(estimator=classifier, eps=0.3)
```
Generate adversarial test data.
```
x_test_adv = attack_fgsm.generate(x_test)
```
Evaluate accuracy on adversarial test data and calculate average perturbation.
```
loss_test, accuracy_test = model.evaluate(x_test_adv, y_test)
perturbation = np.mean(np.abs((x_test_adv - x_test)))
print('Accuracy on adversarial test data: {:4.2f}%'.format(accuracy_test * 100))
print('Average perturbation: {:4.2f}'.format(perturbation))
```
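As a quick sanity check (a small addition, not part of the original notebook), the largest per-pixel change produced by FGSM should not exceed the `eps` budget of 0.3, up to clipping against the `clip_values` range of `(0, 1)`:
```
# FGSM with eps=0.3 changes each pixel by at most 0.3 (before clipping to [0, 1]).
max_perturbation = np.max(np.abs(x_test_adv - x_test))
print('Max perturbation: {:4.2f}'.format(max_perturbation))
```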
Visualise the first adversarial test sample.
```
plt.matshow(x_test_adv[0])
plt.show()
```
## Carlini&Wagner Infinity-norm attack
Create an ART Carlini&Wagner Infinity-norm attack.
```
attack_cw = CarliniLInfMethod(classifier=classifier, eps=0.3, max_iter=100, learning_rate=0.01)
```
Generate adversarial test data.
```
x_test_adv = attack_cw.generate(x_test)
```
Evaluate accuracy on adversarial test data and calculate average perturbation.
```
loss_test, accuracy_test = model.evaluate(x_test_adv, y_test)
perturbation = np.mean(np.abs((x_test_adv - x_test)))
print('Accuracy on adversarial test data: {:4.2f}%'.format(accuracy_test * 100))
print('Average perturbation: {:4.2f}'.format(perturbation))
```
Visualise the first adversarial test sample.
```
plt.matshow(x_test_adv[0, :, :])
plt.show()
```
# Prophet
Time series forecasting using Prophet
Official documentation: https://facebook.github.io/prophet/docs/quick_start.html
Prophet is a procedure for forecasting time series data based on an additive model in which non-linear trends are fit with yearly, weekly, and daily seasonality, plus holiday effects. It is released by Facebook's Core Data Science team.
An additive model has the form:
$Data = seasonal\space effect + trend + residual$
while a multiplicative model has the form:
$Data = seasonal\space effect * trend * residual$
The algorithm also produces useful outputs that help visualize the fit, e.g. the overall trend, the weekly and yearly seasonal components, and their upper and lower error bounds.
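To make that distinction concrete, here is a small synthetic sketch (an illustration added here, not part of the original notebook); in the multiplicative case the seasonal swing scales with the trend rather than staying constant:
```
import numpy as np

t = np.arange(365)
trend = 0.05 * t                                     # slow upward drift
seasonal = np.sin(2 * np.pi * t / 7)                 # weekly cycle
residual = np.random.normal(scale=0.1, size=t.size)  # noise
additive = trend + seasonal + residual               # constant-size seasonal swing
# One common multiplicative parameterization: components scale each other.
multiplicative = trend * (1 + seasonal) * (1 + residual)
```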
### Data
The data on which the algorithms will be trained and tested comes from the Kaggle Hourly Energy Consumption database. It is collected by PJM Interconnection, a company coordinating the continuous buying, selling, and delivery of wholesale electricity through the Energy Market from suppliers to customers in the region of South Carolina, USA. All .csv files contain rows with a timestamp and a value. The name of the value column corresponds to the name of the contractor. The timestamp represents a single hour and the value represents the total energy consumed during that hour.
The data we will be using is hourly power consumption data from PJM. Energy consumption has some unique characteristics. It will be interesting to see how Prophet picks them up.
https://www.kaggle.com/robikscube/hourly-energy-consumption
Pulling the PJM East which has data from 2002-2018 for the entire east region.
```
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from fbprophet import Prophet
from sklearn.metrics import mean_squared_error, mean_absolute_error
plt.style.use('fivethirtyeight') # For plots
dataset_path = './data/hourly-energy-consumption/PJME_hourly.csv'
df = pd.read_csv(dataset_path, index_col=[0], parse_dates=[0])
print("Dataset path:",df.shape)
df.head(10)
# VISUALIZE DATA
# Color pallete for plotting
color_pal = ["#F8766D", "#D39200", "#93AA00",
"#00BA38", "#00C19F", "#00B9E3",
"#619CFF", "#DB72FB"]
df.plot(style='.', figsize=(20,10), color=color_pal[0], title='PJM East Dataset TS')
plt.show()
# Create calendar/time features from the datetime index
def create_features(df, label=None):
"""
Creates time series features from datetime index.
"""
df = df.copy()
df['date'] = df.index
df['hour'] = df['date'].dt.hour
df['dayofweek'] = df['date'].dt.dayofweek
df['quarter'] = df['date'].dt.quarter
df['month'] = df['date'].dt.month
df['year'] = df['date'].dt.year
df['dayofyear'] = df['date'].dt.dayofyear
df['dayofmonth'] = df['date'].dt.day
df['weekofyear'] = df['date'].dt.weekofyear
X = df[['hour','dayofweek','quarter','month','year',
'dayofyear','dayofmonth','weekofyear']]
if label:
y = df[label]
return X, y
return X
df.columns
X, y = create_features(df, label='PJME_MW')
features_and_target = pd.concat([X, y], axis=1)
print("Shape",features_and_target.shape)
features_and_target.head(10)
sns.pairplot(features_and_target.dropna(),
hue='hour',
x_vars=['hour','dayofweek',
'year','weekofyear'],
y_vars='PJME_MW',
height=5,
plot_kws={'alpha':0.15, 'linewidth':0}
)
plt.suptitle('Power Use MW by Hour, Day of Week, Year and Week of Year')
plt.show()
```
## Train and Test Split
We use a temporal split: older data is used for training and only the most recent period is held out for prediction.
```
split_date = '01-Jan-2015'
pjme_train = df.loc[df.index <= split_date].copy()
pjme_test = df.loc[df.index > split_date].copy()
# Plot train and test so you can see where we have split
pjme_test \
.rename(columns={'PJME_MW': 'TEST SET'}) \
.join(pjme_train.rename(columns={'PJME_MW': 'TRAINING SET'}),
how='outer') \
.plot(figsize=(15,5), title='PJM East', style='.')
plt.show()
```
To use Prophet, the datetime and target columns need to be renamed to ds and y before being passed to the model.
```
# Format data for prophet model using ds and y
# Note: rename() returns a new DataFrame; pjme_train itself keeps its original
# column names, and the same rename is applied inline when fitting below.
pjme_train.reset_index() \
    .rename(columns={'Datetime':'ds',
                     'PJME_MW':'y'})
print(pjme_train.columns)
pjme_train.head(5)
```
### Create and train the model
```
# Setup and train model and fit
model = Prophet()
model.fit(pjme_train.reset_index() \
.rename(columns={'Datetime':'ds',
'PJME_MW':'y'}))
# Predict on the test set with the trained model
pjme_test_fcst = model.predict(df=pjme_test.reset_index() \
.rename(columns={'Datetime':'ds'}))
pjme_test_fcst.head()
```
### Plot the results and forecast
```
# Plot the forecast
f, ax = plt.subplots(1)
f.set_figheight(5)
f.set_figwidth(15)
fig = model.plot(pjme_test_fcst,
ax=ax)
plt.show()
# Plot the components of the model
fig = model.plot_components(pjme_test_fcst)
```
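The `mean_squared_error` and `mean_absolute_error` functions imported at the top of the notebook are not used above; a natural follow-up (a sketch, assuming the forecast rows line up with `pjme_test`, which they do because `predict` was called on it) is to score the forecast on the held-out period:
```
# Compare the Prophet forecast (yhat) against the actual held-out values.
mse = mean_squared_error(y_true=pjme_test['PJME_MW'], y_pred=pjme_test_fcst['yhat'])
mae = mean_absolute_error(y_true=pjme_test['PJME_MW'], y_pred=pjme_test_fcst['yhat'])
print('Test MSE: {:,.2f}'.format(mse))
print('Test MAE: {:,.2f}'.format(mae))
```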
# Scalable GP Classification in 1D (w/ KISS-GP)
This example shows how to use grid interpolation based variational classification with an `ApproximateGP` using a `GridInterpolationVariationalStrategy` module. This classification module is designed for when the inputs of the function you're modeling are one-dimensional.
The use of inducing points allows for scaling up the training data by making computational complexity linear instead of cubic.
In this example, we're modeling a function whose label cycles periodically, switching every 1/8 of the input range (think of a square wave with period 1/4).
This notebook doesn't use CUDA; in general we recommend using a GPU if possible, and most of our other notebooks do make use of CUDA.
Kernel interpolation for scalable structured Gaussian processes (KISS-GP) was introduced in this paper:
http://proceedings.mlr.press/v37/wilson15.pdf
KISS-GP with SVI for classification was introduced in this paper:
https://papers.nips.cc/paper/6426-stochastic-variational-deep-kernel-learning.pdf
```
import math
import torch
import gpytorch
from matplotlib import pyplot as plt
from math import exp
%matplotlib inline
%load_ext autoreload
%autoreload 2
train_x = torch.linspace(0, 1, 26)
train_y = torch.sign(torch.cos(train_x * (2 * math.pi))).add(1).div(2)
from gpytorch.models import ApproximateGP
from gpytorch.variational import CholeskyVariationalDistribution
from gpytorch.variational import GridInterpolationVariationalStrategy
class GPClassificationModel(ApproximateGP):
def __init__(self, grid_size=128, grid_bounds=[(0, 1)]):
variational_distribution = CholeskyVariationalDistribution(grid_size)
variational_strategy = GridInterpolationVariationalStrategy(self, grid_size, grid_bounds, variational_distribution)
super(GPClassificationModel, self).__init__(variational_strategy)
self.mean_module = gpytorch.means.ConstantMean()
self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())
def forward(self,x):
mean_x = self.mean_module(x)
covar_x = self.covar_module(x)
latent_pred = gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
return latent_pred
model = GPClassificationModel()
likelihood = gpytorch.likelihoods.BernoulliLikelihood()
from gpytorch.mlls.variational_elbo import VariationalELBO
# Find optimal model hyperparameters
model.train()
likelihood.train()
# Use the adam optimizer
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
# "Loss" for GPs - the marginal log likelihood
# n_data refers to the number of training datapoints
mll = VariationalELBO(likelihood, model, num_data=train_y.numel())
def train():
num_iter = 100
for i in range(num_iter):
optimizer.zero_grad()
output = model(train_x)
# Calc loss and backprop gradients
loss = -mll(output, train_y)
loss.backward()
print('Iter %d/%d - Loss: %.3f' % (i + 1, num_iter, loss.item()))
optimizer.step()
# Get clock time
%time train()
# Set model and likelihood into eval mode
model.eval()
likelihood.eval()
# Initialize axes
f, ax = plt.subplots(1, 1, figsize=(4, 3))
with torch.no_grad():
test_x = torch.linspace(0, 1, 101)
predictions = likelihood(model(test_x))
ax.plot(train_x.numpy(), train_y.numpy(), 'k*')
pred_labels = predictions.mean.ge(0.5).float()
ax.plot(test_x.data.numpy(), pred_labels.numpy(), 'b')
ax.set_ylim([-1, 2])
ax.legend(['Observed Data', 'Mean', 'Confidence'])
```
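As a small extra check (an addition, not part of the original notebook), we can threshold the predicted Bernoulli means on the training inputs and compare them with the labels:
```
# Training-set accuracy of the thresholded class probabilities.
with torch.no_grad():
    train_pred = likelihood(model(train_x)).mean.ge(0.5).float()
accuracy = (train_pred == train_y).float().mean().item()
print('Training accuracy: {:.2f}'.format(accuracy))
```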
# Showing uncertainty
> Uncertainty occurs everywhere in data science, but it's frequently left out of visualizations where it should be included. Here, we review what confidence intervals are and how to visualize them for both single estimates and continuous functions. Additionally, we discuss the bootstrap resampling technique for assessing uncertainty and how to visualize it properly. This is the Summary of lecture "Improving Your Data Visualizations in Python", via datacamp.
- toc: true
- badges: true
- comments: true
- author: Chanseok Kang
- categories: [Python, Datacamp, Visualization]
- image: images/so2_compare.png
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
plt.rcParams['figure.figsize'] = (10, 5)
```
### Point estimate intervals
- When is uncertainty important?
  - Estimates from a sample
  - Average of a subset
  - Linear model coefficients
- Why is uncertainty important?
  - Helps inform confidence in the estimate
  - Necessary for decision making
  - Acknowledges limitations of the data
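For reference (a note added here), the intervals used throughout this post follow the usual normal-approximation form, where the z-values 1.96 and 2.58 seen below correspond to 95% and 99% coverage:
$CI_{1-\alpha} = \hat{\theta} \pm z_{1-\alpha/2} \cdot \widehat{SE}(\hat{\theta})$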
### Basic confidence intervals
You are a data scientist for a fireworks manufacturer in Des Moines, Iowa. You need to make a case to the city that your company's large fireworks show has not caused any harm to the city's air. To do this, you look at the average levels for pollutants in the week after the fourth of July and how they compare to readings taken after your last show. By showing confidence intervals around the averages, you can make a case that the recent readings were well within the normal range.
```
average_ests = pd.read_csv('./dataset/average_ests.csv', index_col=0)
average_ests
# Construct CI bounds for averages
average_ests['lower'] = average_ests['mean'] - 1.96 * average_ests['std_err']
average_ests['upper'] = average_ests['mean'] + 1.96 * average_ests['std_err']
# Setup a grid of plots, with non-shared x axes limits
g = sns.FacetGrid(average_ests, row='pollutant', sharex=False, aspect=2);
# Plot CI for average estimate
g.map(plt.hlines, 'y', 'lower', 'upper');
# Plot observed values for comparison and remove axes labels
g.map(plt.scatter, 'seen', 'y', color='orangered').set_ylabels('').set_xlabels('');
```
This simple visualization shows that all the observed values fall well within the confidence intervals for all the pollutants except for $O_3$.
### Annotating confidence intervals
Your data science work with pollution data is legendary, and you are now weighing job offers in both Cincinnati, Ohio and Indianapolis, Indiana. You want to see if the SO2 levels are significantly different in the two cities, and more specifically, which city has lower levels. To test this, you decide to look at the differences in the cities' SO2 values (Indianapolis' - Cincinnati's) over multiple years.
Instead of just displaying a p-value for a significant difference between the cities, you decide to look at the 95% confidence intervals (columns `lower` and `upper`) of the differences. This allows you to see the magnitude of the differences along with any trends over the years.
```
diffs_by_year = pd.read_csv('./dataset/diffs_by_year.csv', index_col=0)
diffs_by_year
# Set start and ends according to intervals
# Make intervals thicker
plt.hlines(y='year', xmin='lower', xmax='upper',
linewidth=5, color='steelblue', alpha=0.7,
data=diffs_by_year);
# Point estimates
plt.plot('mean', 'year', 'k|', data=diffs_by_year);
# Add a 'null' reference line at 0 and color orangered
plt.axvline(x=0, color='orangered', linestyle='--');
# Set descriptive axis labels and title
plt.xlabel('95% CI');
plt.title('Avg SO2 differences between Cincinnati and Indianapolis');
```
By looking at the confidence intervals you can see that the difference flipped from generally positive (more pollution in Cincinnati) in 2013 to negative (more pollution in Indianapolis) in 2014 and 2015. Given that every year's confidence interval contains the null value of zero, no p-value would be significant, and a plot that only showed significance would have entirely hidden this trend.
## Confidence bands
### Making a confidence band
Vandenberg Air Force Base is often used as a location to launch rockets into space. You have a theory that a recent increase in the pace of rocket launches could be harming the air quality in the surrounding region. To explore this, you plotted a 25-day rolling average line of the measurements of atmospheric $NO_2$. To help decide if any pattern observed is random-noise or not, you decide to add a 99% confidence band around your rolling mean. Adding a confidence band to a trend line can help shed light on the stability of the trend seen. This can either increase or decrease the confidence in the discovered trend.
```
vandenberg_NO2 = pd.read_csv('./dataset/vandenberg_NO2.csv', index_col=0)
vandenberg_NO2.head()
# Draw 99% interval bands for average NO2
vandenberg_NO2['lower'] = vandenberg_NO2['mean'] - 2.58 * vandenberg_NO2['std_err']
vandenberg_NO2['upper'] = vandenberg_NO2['mean'] + 2.58 * vandenberg_NO2['std_err']
# Plot mean estimate as a white semi-transparent line
plt.plot('day', 'mean', data=vandenberg_NO2, color='white', alpha=0.4);
# Fill between the upper and lower confidence band values
plt.fill_between(x='day', y1='lower', y2='upper', data=vandenberg_NO2);
```
This plot shows that the middle of the year's $NO_2$ values are not only lower than the beginning and end of the year but also are less noisy. If just the moving average line were plotted, then this potentially interesting observation would be completely missed. (Can you think of what may cause reduced variance at the lower values of the pollutant?)
### Separating a lot of bands
It is relatively simple to plot a bunch of trend lines on top of each other for rapid and precise comparisons. Unfortunately, if you need to add uncertainty bands around those lines, the plot becomes very difficult to read. Figuring out whether a line corresponds to the top of one class' band or the bottom of another's can be hard due to band overlap. Luckily in Seaborn, it's not difficult to break up the overlapping bands into separate faceted plots.
To see this, explore trends in SO2 levels for a few cities in the eastern half of the US. If you plot the trends and their confidence bands on a single plot - it's a mess. To fix, use Seaborn's `FacetGrid()` function to spread out the confidence intervals to multiple panes to ease your inspection.
```
eastern_SO2 = pd.read_csv('./dataset/eastern_SO2.csv', index_col=0)
eastern_SO2.head()
# setup a grid of plots with columns divided by location
g = sns.FacetGrid(eastern_SO2, col='city', col_wrap=2);
# Map interval plots to each cities data with coral colored ribbons
g.map(plt.fill_between, 'day', 'lower', 'upper', color='coral');
# Map overlaid mean plots with white line
g.map(plt.plot, 'day', 'mean', color='white');
```
By separating each band into its own plot you can investigate each city with ease. Here, you see that Des Moines and Houston on average have lower SO2 values for the entire year than the two cities in the Midwest. Cincinnati has a high and variable peak near the beginning of the year but is generally more stable and lower than Indianapolis.
### Cleaning up bands for overlaps
You are working for the city of Denver, Colorado and want to run an ad campaign about how much cleaner Denver's air is than Long Beach, California's air. To investigate this claim, you will compare the SO2 levels of both cities for the year 2014. Since you are solely interested in how the cities compare, you want to keep the bands on the same plot. To make the bands easier to compare, decrease the opacity of the confidence bands and set a clear legend.
```
SO2_compare = pd.read_csv('./dataset/SO2_compare.csv', index_col=0)
SO2_compare.head()
for city, color in [('Denver', '#66c2a5'), ('Long Beach', '#fc8d62')]:
# Filter data to desired city
city_data = SO2_compare[SO2_compare.city == city]
# Set city interval color to desired and lower opacity
plt.fill_between(x='day', y1='lower', y2='upper', data=city_data, color=color, alpha=0.4);
# Draw a faint mean line for reference and give a label for legend
plt.plot('day', 'mean', data=city_data, label=city, color=color, alpha=0.25);
plt.legend();
```
From these two curves you can see that during the first half of the year Long Beach generally has a higher average SO2 value than Denver, in the middle of the year they are very close, and at the end of the year Denver seems to have higher averages. By showing the confidence intervals, however, you can see that almost none of the year shows a statistically meaningful difference in average values between the two cities.
## Beyond 95%
### 90, 95, and 99% intervals
You are a data scientist for an outdoor adventure company in Fairbanks, Alaska. Recently, customers have been having issues with SO2 pollution, leading to costly cancellations. The company has sensors for CO, NO2, and O3 but not SO2 levels.
You've built a model that predicts SO2 values based on the values of pollutants with sensors (loaded as `pollution_model`, a `statsmodels` object). You want to investigate which pollutant's value has the largest effect on your model's SO2 prediction. This will help you know which pollutant's values to pay most attention to when planning outdoor tours. To maximize the amount of information in your report, show multiple levels of uncertainty for the model estimates.
```
from statsmodels.formula.api import ols
pollution = pd.read_csv('./dataset/pollution_wide.csv')
pollution = pollution.query("city == 'Fairbanks' & year == 2014 & month == 11")
pollution_model = ols(formula='SO2 ~ CO + NO2 + O3 + day', data=pollution)
res = pollution_model.fit()
# Add interval percent widths
alphas = [ 0.01, 0.05, 0.1]
widths = [ '99% CI', '95%', '90%']
colors = ['#fee08b','#fc8d59','#d53e4f']
for alpha, color, width in zip(alphas, colors, widths):
# Grab confidence interval
conf_ints = res.conf_int(alpha)
# Pass current interval color and legend label to plot
plt.hlines(y = conf_ints.index, xmin = conf_ints[0], xmax = conf_ints[1],
colors = color, label = width, linewidth = 10)
# Draw point estimates
plt.plot(res.params, res.params.index, 'wo', label = 'Point Estimate')
plt.legend(loc = 'upper right')
```
### 90 and 99% bands
You are looking at a 40-day rolling average of the $NO_2$ pollution levels for the city of Cincinnati in 2013. To provide as detailed a picture of the uncertainty in the trend you want to look at both the 90 and 99% intervals around this rolling estimate.
To do this, set up your two interval sizes and an orange ordinal color palette. Additionally, to enable precise readings of the bands, make them semi-transparent, so the Seaborn background grids show through.
```
cinci_13_no2 = pd.read_csv('./dataset/cinci_13_no2.csv', index_col=0);
cinci_13_no2.head()
int_widths = ['90%', '99%']
z_scores = [1.67, 2.58]
colors = ['#fc8d59', '#fee08b']
for percent, Z, color in zip(int_widths, z_scores, colors):
# Pass lower and upper confidence bounds and lower opacity
plt.fill_between(
x = cinci_13_no2.day, alpha = 0.4, color = color,
y1 = cinci_13_no2['mean'] - Z * cinci_13_no2['std_err'],
y2 = cinci_13_no2['mean'] + Z * cinci_13_no2['std_err'],
label = percent);
plt.legend();
```
This plot shows us that throughout 2013, the average NO2 values in Cincinnati followed a cyclical pattern with the seasons. However, the uncertainty bands show that for most of the year you can't be sure this pattern is not noise at both a 90 and 99% confidence level.
### Using band thickness instead of coloring
You are a researcher investigating the relationship between the elevation a rocket reaches before visual contact is lost and pollutant levels at Vandenberg Air Force Base. You've built a model to predict this relationship, and since you are working independently, you don't have the money to pay for color figures in your journal article. You need to make your model results plot work in black and white. To do this, you will plot the 90, 95, and 99% intervals of the effect of each pollutant as successively smaller bars.
```
rocket_model = pd.read_csv('./dataset/rocket_model.csv', index_col=0)
rocket_model
# Decrease interval thickness as the interval widens
sizes = [ 15, 10, 5]
int_widths = ['90% CI', '95%', '99%']
z_scores = [ 1.67, 1.96, 2.58]
for percent, Z, size in zip(int_widths, z_scores, sizes):
plt.hlines(y = rocket_model.pollutant,
xmin = rocket_model['est'] - Z * rocket_model['std_err'],
xmax = rocket_model['est'] + Z * rocket_model['std_err'],
label = percent,
# Resize lines and color them gray
linewidth = size,
color = 'gray');
# Add point estimate
plt.plot('est', 'pollutant', 'wo', data = rocket_model, label = 'Point Estimate');
plt.legend(loc = 'center left', bbox_to_anchor = (1, 0.5));
```
While less elegant than using color to differentiate interval sizes, this plot still clearly allows the reader to assess the effect each pollutant has on rocket visibility. You can see that of all the pollutants, O3 has the largest effect and also the tightest confidence bounds.
## Visualizing the bootstrap
### The bootstrap histogram
You are considering a vacation to Cincinnati in May, but you have a severe sensitivity to NO2. You pull a few years of pollution data from Cincinnati in May and look at a bootstrap estimate of the average $NO_2$ levels. Since you only have a single estimate to report, the best way to visualize the results of your bootstrap estimates is with a histogram.
While you like the intuition of the bootstrap histogram by itself, your partner, who will be going on the vacation with you, likes seeing percent intervals. To accommodate them, you decide to highlight the 95% interval by shading the region.
```
# Perform bootstrapped mean on a vector
def bootstrap(data, n_boots):
return [np.mean(np.random.choice(data,len(data))) for _ in range(n_boots) ]
pollution = pd.read_csv('./dataset/pollution_wide.csv')
cinci_may_NO2 = pollution.query("city == 'Cincinnati' & month == 5").NO2
# Generate bootstrap samples
boot_means = bootstrap(cinci_may_NO2, 1000)
# Get lower and upper 95% interval bounds
lower, upper = np.percentile(boot_means, [2.5, 97.5])
# Plot shaded area for interval
plt.axvspan(lower, upper, color = 'gray', alpha = 0.2);
# Draw histogram of bootstrap samples
sns.distplot(boot_means, bins = 100, kde = False);
```
Your bootstrap histogram looks stable and uniform. You're now confident that the average NO2 levels in Cincinnati during your vacation should be in the range of 16 to 23.
### Bootstrapped regressions
While working for the Long Beach parks and recreation department investigating the relationship between $NO_2$ and $SO_2$ you noticed a cluster of potential outliers that you suspect might be throwing off the correlations.
Investigate the uncertainty of your correlations through bootstrap resampling to see how stable your fits are. For convenience, the bootstrap sampling is complete and is provided as `no2_so2_boot` along with `no2_so2` for the non-resampled data.
```
no2_so2 = pd.read_csv('./dataset/no2_so2.csv', index_col=0)
no2_so2_boot = pd.read_csv('./dataset/no2_so2_boot.csv', index_col=0)
sns.lmplot('NO2', 'SO2', data = no2_so2_boot,
# Tell seaborn to a regression line for each sample
hue = 'sample',
# Make lines blue and transparent
line_kws = {'color': 'steelblue', 'alpha': 0.2},
# Disable built-in confidence intervals
ci = None, legend = False, scatter = False);
# Draw scatter of all points
plt.scatter('NO2', 'SO2', data = no2_so2);
```
The outliers appear to drag down the regression lines as evidenced by the cluster of lines with more severe slopes than average. In a single plot, you have not only gotten a good idea of the variability of your correlation estimate but also the potential effects of outliers.
### Lots of bootstraps with beeswarms
As a current resident of Cincinnati, you're curious to see how the average NO2 values compare to Des Moines, Indianapolis, and Houston: a few other cities you've lived in.
To look at this, you decide to use bootstrap estimation to look at the mean NO2 values for each city. Because the comparisons are of primary interest, you will use a swarm plot to compare the estimates.
```
pollution_may = pollution.query("month == 5")
pollution_may
# Initialize a holder DataFrame for bootstrap results
city_boots = pd.DataFrame()
for city in ['Cincinnati', 'Des Moines', 'Indianapolis', 'Houston']:
# Filter to city
city_NO2 = pollution_may[pollution_may.city == city].NO2
# Bootstrap city data & put in DataFrame
cur_boot = pd.DataFrame({'NO2_avg': bootstrap(city_NO2, 100), 'city': city})
# Append to other city's bootstraps
city_boots = pd.concat([city_boots,cur_boot])
# Beeswarm plot of averages with cities on the y axis
sns.swarmplot(y = "city", x = "NO2_avg", data = city_boots, color = 'coral');
```
The beeswarm plots show that Indianapolis and Houston both have the highest average NO2 values, with Cincinnati falling roughly in the middle. Interestingly, you can rather confidently say that Des Moines has the lowest as nearly all its sample estimates fall below those of the other cities.