Next, we need to set the hyperparameter tuning values used to train our model. Check HyperparameterSpec for more info. This config file sets several key things:

* maxTrials - How many training trials should be attempted to optimize the specified hyperparameters.
* maxParallelTrials - The number of training trials to run concurrently.
* params - The set of parameters to tune. These are the different parameters to pass into your model and the ranges of values you wish to try.
* parameterName - The parameter name; it must be unique amongst all ParameterConfigs.
* type - The type of the parameter [INTEGER, DOUBLE, ...].
* minValue & maxValue - The range of values that this parameter can take.
* scaleType - How the parameter should be scaled to the hypercube. Leave unset for categorical parameters. Some kind of scaling is strongly recommended for real or integral parameters (e.g., UNIT_LINEAR_SCALE).
%%writefile ./hptuning_config.yaml
#!/usr/bin/env python
# Copyright 2018 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# hyperparam.yaml
trainingInput:
  hyperparameters:
    goal: MAXIMIZE
    maxTrials: 30
    maxParallelTrials: 5
    hyperparameterMetricTag: my_metric_tag
    enableTrialEarlyStopping: TRUE
    params:
      - parameterName: max_depth
        type: INTEGER
        minValue: 3
        maxValue: 8
      - parameterName: n_estimators
        type: INTEGER
        minValue: 50
        maxValue: 200
      - parameterName: booster
        type: CATEGORICAL
        categoricalValues: ["gbtree", "gblinear", "dart"]
notebooks/xgboost/HyperparameterTuningWithXGBoostInCMLE.ipynb
GoogleCloudPlatform/cloudml-samples
apache-2.0
Lastly, we need to install the dependencies used in our model. Check adding_standard_pypi_dependencies for more info. To do this, AI Platform uses a setup.py file to install your dependencies.
%%writefile ./setup.py
#!/usr/bin/env python
# Copyright 2018 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from setuptools import find_packages
from setuptools import setup

REQUIRED_PACKAGES = ['cloudml-hypertune']

setup(
    name='auto_mpg_hp_tuning',
    version='0.1',
    install_requires=REQUIRED_PACKAGES,
    packages=find_packages(),
    include_package_data=True,
    description='Auto MPG XGBoost HP tuning training application'
)
notebooks/xgboost/HyperparameterTuningWithXGBoostInCMLE.ipynb
GoogleCloudPlatform/cloudml-samples
apache-2.0
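The hyperparameterMetricTag in the config above (my_metric_tag) has to be reported from inside the training code itself, which is why setup.py declares cloudml-hypertune. A minimal sketch of what that reporting typically looks like — the metric value and step here are placeholders, not values from this notebook:

```python
# Sketch of metric reporting inside the trainer; the tag must match
# hyperparameterMetricTag in hptuning_config.yaml. Values are placeholders.
import hypertune

hpt = hypertune.HyperTune()
hpt.report_hyperparameter_tuning_metric(
    hyperparameter_metric_tag='my_metric_tag',
    metric_value=0.93,   # e.g. a validation score computed by the trainer
    global_step=1000)
```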
Submit the training job.
! gcloud ml-engine jobs submit training auto_mpg_hp_tuning_$(date +"%Y%m%d_%H%M%S") \
    --job-dir $JOB_DIR \
    --package-path $TRAINER_PACKAGE_PATH \
    --module-name $MAIN_TRAINER_MODULE \
    --region $REGION \
    --runtime-version=$RUNTIME_VERSION \
    --python-version=$PYTHON_VERSION \
    --scale-tier basic \
    --config $HPTUNING_CONFIG
notebooks/xgboost/HyperparameterTuningWithXGBoostInCMLE.ipynb
GoogleCloudPlatform/cloudml-samples
apache-2.0
[Optional] Stackdriver Logging

You can view the logs for your training job:

1. Go to https://console.cloud.google.com/
2. Select "Logging" in the left-hand pane
3. In the left-hand pane, go to "AI Platform" and select Jobs
4. In "filter by prefix", use the value of $JOB_NAME to view the logs

On the logging page of your model, you can view the different results for each HP tuning trial. Example:

{
  "trialId": "15",
  "hyperparameters": {
    "booster": "dart",
    "max_depth": "7",
    "n_estimators": "102"
  },
  "finalMetric": {
    "trainingStep": "1000",
    "objectiveValue": 0.9259230441279733
  }
}

[Optional] Verify Model File in GCS

View the contents of the destination model folder to verify that all 30 model files have indeed been uploaded to GCS.

Note: The model can take a few minutes to train and show up in GCS.
! gsutil ls $JOB_DIR/*
notebooks/xgboost/HyperparameterTuningWithXGBoostInCMLE.ipynb
GoogleCloudPlatform/cloudml-samples
apache-2.0
Going from a vector back to the metadata reference: by keeping an 'id_list', we can look up the identifier for any vector in the list from the database we've made for this clustering attempt. This lets us look up what the reference for that vector is, and where we can find it:
from clustering import ClusterDB

db = ClusterDB(DBFILE)
print(dict(db.vecidtoitem(id_list[-1])))
print(data.toarray()[-1])

from burney_data import BurneyDB

bdb = BurneyDB("burney.db")
bdb.get_title_row(titleAbbreviation="B0574REMEMBRA")
Clustering running notepad.ipynb
BL-Labs/poetryhunt
mit
Initial data woes

There was a considerable discrepancy between the x1 average indent and the column "box" left edge. Looking at the data, the presence of a few outliers can really affect this value. Omitting the 2 smallest and largest x values might be enough to keep outliers from biasing the sample too badly (see the sketch after the next cell). Also, the initial 'drift correction' (adjustments made to correct warped or curved columns) seemed to add more issues than it solved, so the dataset was remade without it.
from scipy import cluster
from matplotlib import pyplot as plt
import numpy as np

# Where is the K-means 'elbow'?
# Try k from 1 to 10, using only the x1 and x2 variances.
vset = [cluster.vq.kmeans(data.toarray()[:, [3, 6]], i) for i in range(1, 11)]
plt.plot([v for (c, v) in vset])
plt.show()
Clustering running notepad.ipynb
BL-Labs/poetryhunt
mit
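As a concrete version of the outlier-trimming idea above, here is a small helper (the name and interface are hypothetical, assuming a 1-D array of x1 coordinates per block) that drops the 2 smallest and 2 largest values before averaging:

```python
import numpy as np

def trimmed_mean_indent(x1_values, k=2):
    """Mean of x1 coordinates with the k smallest and k largest values
    dropped, so a few outliers can't skew the average indent."""
    xs = np.sort(np.asarray(x1_values))
    if len(xs) <= 2 * k:
        return xs.mean()  # too few lines to trim; fall back to the plain mean
    return xs[k:-k].mean()
```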
Seems the elbow is quite wide and not sharply defined, based on just the line variances. Let's see what it looks like in general.
# Mask off everything except the front and end variance columns
npdata = data.toarray()
mask = np.ones((8), dtype=bool)
mask[[0, 1, 2, 4, 5, 7]] = False
marray = npdata[:, mask]
Clustering running notepad.ipynb
BL-Labs/poetryhunt
mit
Attempting K-Means

What sort of clustering algorithm to employ is actually a good question. K-means can give fairly meaningless results on data that doesn't form compact, roughly spherical clusters. Generally, it can be useful but cannot be applied blindly. Given the data above, it might be a good start, however.
# Trying a different KMeans implementation
from sklearn.cluster import KMeans

estimators = {'k_means_3': KMeans(n_clusters=3),
              'k_means_5': KMeans(n_clusters=5),
              'k_means_8': KMeans(n_clusters=8)}

fignum = 1
for name, est in estimators.items():
    fig = plt.figure(fignum, figsize=(8, 8))
    plt.clf()
    plt.cla()
    est.fit(marray)
    labels = est.labels_
    plt.scatter(marray[:, 0], marray[:, 1], c=labels.astype(float))
    fignum = fignum + 1
plt.show()
Clustering running notepad.ipynb
BL-Labs/poetryhunt
mit
Interesting! The lack of really well defined clusters bolsters the "elbow" test above; K-means is likely not put to good use here, with just these two variables. The left edge of the scatterplot is a region that contains those blocks of text whose lines are aligned to the left edge of the paper's column, but have considerable variation in line length. For example, I'd expect text looking like the following:

Qui quis at ex voluptatibus cupiditate quod quia. Quas fuga quasi sit mollitia quos atque. Saepe atque officia sed dolorem. Numquam quas aperiam eaque nam sunt itaque est. Sed expedita maxime fugiat mollitia error necessitatibus quam soluta. Amet laborum eius sequi quae sit sit.

This is promising (as long as the data is realistic and there isn't a bug in generating it...). Now, I wonder if including the "margin" (x1ave-ledge: average x1 coordinate minus the leftmost edge) might help find or distinguish these further?
mpld3.disable_notebook()  # switch off the interactive graph functionality, which doesn't work well with the 3D library
from mpl_toolkits.mplot3d import Axes3D

X = npdata[:, [3, 5, 6]]
fignum = 1
for name, est in estimators.items():
    fig = plt.figure(fignum, figsize=(8, 8))
    plt.clf()
    ax = Axes3D(fig, rect=[0, 0, .95, 1], elev=5, azim=30)
    plt.cla()
    est.fit(X)
    labels = est.labels_
    ax.scatter(X[:, 0], X[:, 2], X[:, 1], c=labels.astype(float))
    ax.set_xlabel('x1 variance')
    ax.set_ylabel('x2 variance')
    ax.set_zlabel('Average indent')
    fignum = fignum + 1
plt.show()
Clustering running notepad.ipynb
BL-Labs/poetryhunt
mit
How about the area density? In other words, what does it look like if the total area of the block is compared to the area taken up by just the words themselves?
X = npdata[:, [3, 0, 6]]
fignum = 1
for name, est in estimators.items():
    fig = plt.figure(fignum, figsize=(8, 8))
    plt.clf()
    ax = Axes3D(fig, rect=[0, 0, .95, 1], elev=25, azim=40)
    plt.cla()
    est.fit(X)
    labels = est.labels_
    ax.scatter(X[:, 0], X[:, 2], X[:, 1], c=labels.astype(float))
    ax.set_xlabel('x1 variance')
    ax.set_ylabel('x2 variance')
    ax.set_zlabel('Density')
    fignum = fignum + 1
plt.show()
Clustering running notepad.ipynb
BL-Labs/poetryhunt
mit
More outliers skewing the results. This time it's blocks with nearly zero variance at either end and a huge amount of letter area attributed to them by the OCR, yet sweeping out a very small overall area. Perhaps mask out the columns which aren't actually columns but dividers mistaken for text? I.e., skip all blocks narrower than, say, 100px. Another way might be to ignore blocks which contain fewer than approximately 40 words (40 words * 5 characters).
mask = npdata[:, 1] > 40 * 5  # mask based on the ltcount value (40 words * 5 characters = 200)
print(mask)
print("Number of vectors: {0}, vectors with ltcount <= 200: {1}".format(
    len(npdata), sum([1 for item in mask if item == False])))

m_npdata = npdata[mask, :]
X = m_npdata[:, [3, 0, 6]]

# Let's just plot one graph to see:
est = estimators['k_means_8']
fig = plt.figure(fignum, figsize=(8, 8))
plt.clf()
ax = Axes3D(fig, rect=[0, 0, .95, 1], elev=25, azim=40)
plt.cla()
est.fit(X)
labels = est.labels_
ax.scatter(X[:, 0], X[:, 2], X[:, 1], c=labels.astype(float))
ax.set_xlabel('x1 variance')
ax.set_ylabel('x2 variance')
ax.set_zlabel('Density')
plt.show()
Clustering running notepad.ipynb
BL-Labs/poetryhunt
mit
What country are most billionaires from? For the top ones, how many billionaires per billion people?
df['citizenship'].value_counts().head()
df.groupby('citizenship')['networthusbillion'].sum().sort_values(ascending=False)

# Populations in billions of people; "billionaires per billion people"
# is the billionaire count divided by the population in billions.
us_pop = 0.3189  # (2014)
us_bill = df[df['citizenship'] == 'United States']
print("There are", len(us_bill)/us_pop, "billionaires per billion people in the United States.")

germ_pop = 0.08062  # (2013)
germ_bill = df[df['citizenship'] == 'Germany']
print("There are", len(germ_bill)/germ_pop, "billionaires per billion people in Germany.")

china_pop = 1.357  # (2013)
china_bill = df[df['citizenship'] == 'China']
print("There are", len(china_bill)/china_pop, "billionaires per billion people in China.")

russia_pop = 0.1435  # (2013)
russia_bill = df[df['citizenship'] == 'Russia']
print("There are", len(russia_bill)/russia_pop, "billionaires per billion people in Russia.")

japan_pop = 0.1273  # (2013)
japan_bill = df[df['citizenship'] == 'Japan']
print("There are", len(japan_bill)/japan_pop, "billionaires per billion people in Japan.")

print(df.columns)
07/billionaires.ipynb
M0nica/python-foundations-hw
mit
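The per-country blocks above could also be collapsed into one vectorized calculation. A sketch (population figures in billions, copied from above; countries missing from the dict simply drop out as NaN):

```python
import pandas as pd

pops = pd.Series({
    'United States': 0.3189, 'Germany': 0.08062, 'China': 1.357,
    'Russia': 0.1435, 'Japan': 0.1273,
})  # population in billions

# Series division aligns on the country index
per_billion = (df['citizenship'].value_counts() / pops).dropna()
print(per_billion.sort_values(ascending=False))
```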
Who are the top 10 richest billionaires?
recent = df[df['year'] == 2014]  # if it is not recent then there are duplicates for diff years
recent.sort_values('rank').head(10)
recent['networthusbillion'].describe()
07/billionaires.ipynb
M0nica/python-foundations-hw
mit
Maybe plot their net worth vs age (scatterplot). Make a bar graph of the top 10 or 20 richest.
recent.plot(kind='scatter', x='networthusbillion', y='age')
recent.plot(kind='scatter', x='age', y='networthusbillion', alpha=0.2)
07/billionaires.ipynb
M0nica/python-foundations-hw
mit
Designing a music-generating class

The rhythm-makers we studied yesterday help us think about rhythm in a formal way. Today we'll extend the rhythm-makers' pattern with pitches, articulations and dynamics. In this notebook we'll develop the code we need; in the next notebook we'll encapsulate our work in a class.

Making notes and rests

We start by re-implementing the basic note-making functionality of the talea rhythm-maker "by hand." Beginning with a talea object that models a cycle of durations:
pairs = [(4, 4), (3, 4), (7, 16), (6, 8)]
time_signatures = [abjad.TimeSignature(_) for _ in pairs]
durations = [_.duration for _ in time_signatures]
time_signature_total = sum(durations)
counts = [1, 2, -3, 4]
denominator = 16
talea = rmakers.Talea(counts, denominator)
talea_index = 0
day-3/1-making-music.ipynb
Abjad/intensive
mit
We can ask our talea for as many durations as we want. (Taleas output nonreduced fractions instead of durations. This is to allow talea output to model either durations or time signatures, depending on the application.) We include some negative values, which we will later interpret as rests. We can ask our talea for ten durations like this:
talea[:10]
day-3/1-making-music.ipynb
Abjad/intensive
mit
Let's use our talea to make notes and rests, stopping when the duration of the accumulated notes and rests sums to that of the four time signatures defined above:
events = []
accumulated_duration = abjad.Duration(0)
while accumulated_duration < time_signature_total:
    duration = talea[talea_index]
    if 0 < duration:
        pitch = abjad.NamedPitch("c'")
    else:
        pitch = None
    duration = abs(duration)
    if time_signature_total < (duration + accumulated_duration):
        duration = time_signature_total - accumulated_duration
    events_ = abjad.LeafMaker()([pitch], [duration])
    events.extend(events_)
    accumulated_duration += duration
    talea_index += 1
staff = abjad.Staff(events)
abjad.show(staff)
day-3/1-making-music.ipynb
Abjad/intensive
mit
To attach the four time signatures defined above, we must split our notes and rests at measure boundaries. Then we can attach a time signature to the first note or rest in each of the four selections that result:
selections = abjad.mutate.split(staff[:], time_signatures, cyclic=True)
for time_signature, selection in zip(time_signatures, selections):
    first_leaf = abjad.get.leaf(selection, 0)
    abjad.attach(time_signature, first_leaf)
abjad.show(staff)
day-3/1-making-music.ipynb
Abjad/intensive
mit
Then we group our notes and rests by measure, and metrically respell each group:
measure_selections = abjad.select(staff).leaves().group_by_measure()
for time_signature, measure_selection in zip(time_signatures, measure_selections):
    abjad.Meter.rewrite_meter(measure_selection, time_signature)
abjad.show(staff)
day-3/1-making-music.ipynb
Abjad/intensive
mit
Pitching notes We can pitch our notes however we like. First we define a cycle of pitches:
string = "d' fs' a' d'' g' ef'" strings = string.split() pitches = abjad.CyclicTuple(strings)
day-3/1-making-music.ipynb
Abjad/intensive
mit
Then we loop through pitched logical ties, pitching notes as we go:
plts = abjad.select(staff).logical_ties(pitched=True)
for i, plt in enumerate(plts):
    pitch = pitches[i]
    for note in plt:
        note.written_pitch = pitch
abjad.show(staff)
day-3/1-making-music.ipynb
Abjad/intensive
mit
Attaching articulations and dynamics Abjad's run selector selects notes and chords, separated by rests:
for selection in abjad.select(staff).runs():
    print(selection)
day-3/1-making-music.ipynb
Abjad/intensive
mit
We can use Abjad's run selector to loop through the runs in our music, attaching articulations and dynamics along the way:
for selection in abjad.select(staff).runs():
    articulation = abjad.Articulation("tenuto")
    abjad.attach(articulation, selection[0])
    if 3 <= len(selection):
        abjad.hairpin("p < f", selection)
    else:
        dynamic = abjad.Dynamic("ppp")
        abjad.attach(dynamic, selection[0])
abjad.override(staff).dynamic_line_spanner.staff_padding = 4
abjad.show(staff)
day-3/1-making-music.ipynb
Abjad/intensive
mit
Read in the total SFRs from https://wwwmpa.mpa-garching.mpg.de/SDSS/DR7/sfrs.html . These SFRs are derived from spectra but later aperture-corrected using Salim et al. (2007)'s method.
# data with the galaxy information
data_gals = mrdfits(UT.dat_dir()+'gal_info_dr7_v5_2.fit.gz')
# data with the SFR information
data_sfrs = mrdfits(UT.dat_dir()+'gal_totsfr_dr7_v5_2.fits.gz')

if len(data_gals.ra) != len(data_sfrs.median):
    raise ValueError("the data should have the same number of galaxies")
centralms/notebooks/notes_SFRmpajhu_uncertainty.ipynb
changhoonhahn/centralMS
mit
Spherematch using a 3'' match radius on 10,000 galaxies (the catalogue matched against itself). Otherwise the laptop explodes.
# ngal = len(data_gals.ra)
ngal = 10000
# 0.000833333 deg = 3 arcsec (3/3600); match the first 10,000 galaxies against themselves
matches = spherematch(data_gals.ra[:10000], data_gals.dec[:10000],
                      data_gals.ra[:10000], data_gals.dec[:10000],
                      0.000833333, maxmatch=0)
m0, m1, d_m = matches

n_matches = np.zeros(ngal)
sfr_list = [[] for i in range(ngal)]
for i in range(ngal):
    ism = (i == m0)
    n_matches[i] = np.sum(ism)
    if n_matches[i] > 1:
        sfr_list[i] = data_sfrs.median[m1[np.where(ism)]]

for i in np.where(n_matches > 1)[0][:5]:
    print(sfr_list[i])
    print(np.mean(sfr_list[i]), np.std(sfr_list[i]))

fig = plt.figure()
sub = fig.add_subplot(111)
sigs = []
for i in np.where(n_matches > 1)[0]:
    if -99. in sfr_list[i]:
        continue
    sub.scatter([np.mean(sfr_list[i])], [np.std(sfr_list[i], ddof=1)], c='k', s=2)
    sigs.append(np.std(sfr_list[i], ddof=1))
sub.set_xlim([-3., 3.])
sub.set_xlabel('log SFR', fontsize=25)
sub.set_ylim([0., 0.6])
sub.set_ylabel('$\sigma_\mathrm{log\,SFR}$', fontsize=25)
plt.show()

plt.hist(np.array(sigs), bins=40, range=[0.0, 0.6], density=True, histtype='step')
plt.xlim([0., 0.6])
plt.xlabel('$\sigma_\mathrm{log\,SFR}$', fontsize=25)
centralms/notebooks/notes_SFRmpajhu_uncertainty.ipynb
changhoonhahn/centralMS
mit
The task: Identify patients with pulmonary embolism from radiology reports Step 1: how is the concept of pulmonary embolism represented in the reports - fill in the list below with literals you want to use.
mytargets = itemData.itemData()
mytargets.extend([["pulmonary embolism", "CRITICAL_FINDING", "", ""],
                  ["pneumonia", "CRITICAL_FINDING", "", ""]])
print(mytargets)

!pip install -U radnlp==0.2.0.8
IntroductionToPyConTextNLP.ipynb
chapmanbe/nlm_clinical_nlp
mit
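If you want to catch more surface forms of the same finding, the four-field pattern above (literal, category, regular expression, rule) extends naturally. The extra literals below are purely illustrative and should be tuned to the vocabulary of your own reports:

```python
# Illustrative extra target literals, following the same four-field pattern
mytargets.extend([["pulmonary emboli", "CRITICAL_FINDING", "", ""],
                  ["pe", "CRITICAL_FINDING", "", ""]])
```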
Sentence Splitting pyConTextNLP operates on a sentence level and so the first step we need to take is to split our document into individual sentences. pyConTextNLP comes with a simple sentence splitter class.
import pyConTextNLP.helpers as helpers

splitter = helpers.sentenceSplitter()
splitter.splitSentences("This is Dr. Chapman's first sentence. This is the 2.0 sentence.")
IntroductionToPyConTextNLP.ipynb
chapmanbe/nlm_clinical_nlp
mit
However, sentence splitting is a common NLP task and so most full-fledged NLP applications provide sentence splitters. We usually rely on the sentence splitter that is part of the TextBlob package, which in turn relies on the Natural Language Toolkit (NLTK). So before proceeding we need to download some NLTK resources with the following command.
!python -m textblob.download_corpora
IntroductionToPyConTextNLP.ipynb
chapmanbe/nlm_clinical_nlp
mit
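Once the corpora are downloaded, TextBlob can split the same tricky example. This snippet is illustrative (it isn't part of the original notebook), but it shows the splitter coping with the "Dr." abbreviation and the "2.0" decimal:

```python
from textblob import TextBlob

blob = TextBlob("This is Dr. Chapman's first sentence. This is the 2.0 sentence.")
for sentence in blob.sentences:
    print(sentence)
```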
Combining cross_fields and best_fields Based on previous tuning, we have the following optimal parameters for each multi_match query type.
cross_fields_params = {
    'operator': 'OR',
    'minimum_should_match': 50,
    'tie_breaker': 0.25,
    'url|boost': 1.0129720302556104,
    'title|boost': 5.818478716515356,
    'body|boost': 3.736613263685484,
}

best_fields_params = {
    'tie_breaker': 0.3936135232328522,
    'url|boost': 0.0,
    'title|boost': 8.63280262513067,
    'body|boost': 10.0,
}
Machine Learning/Query Optimization/notebooks/Appendix B - Combining queries.ipynb
elastic/examples
apache-2.0
We've seen the process to optimize field boosts on two different multi_match queries but it would be interesting to see if combining them in some way might actually result in even better MRR@100. Let's give it a shot and find out. Side note: Combining queries where each sub-query is always executed may improve relevance but it will hurt performance and the query times will be quite a lot higher than with a single, simpler query. Keep this in mind when building complex queries for production!
def prefix_keys(d, prefix):
    return {f'{prefix}{k}': v for k, v in d.items()}

# prefix the keys of each sub-query and add default boosts
all_params = {
    **prefix_keys(cross_fields_params, 'cross_fields|'),
    'cross_fields|boost': 1.0,
    **prefix_keys(best_fields_params, 'best_fields|'),
    'best_fields|boost': 1.0,
}
all_params
Machine Learning/Query Optimization/notebooks/Appendix B - Combining queries.ipynb
elastic/examples
apache-2.0
Baseline evaluation
%%time
_ = evaluate_mrr100_dev(es, max_concurrent_searches, index, template_id,
                        params=all_params)
Machine Learning/Query Optimization/notebooks/Appendix B - Combining queries.ipynb
elastic/examples
apache-2.0
Query tuning Here we'll just tune the boosts for each sub-query. Note that this takes twice as long as tuning individual queries because we have two queries combined.
%%time
_, _, final_params_boosts, metadata_boosts = optimize_query_mrr100(
    es, max_concurrent_searches, index, template_id,
    config_space=Config.parse({
        'num_iterations': 30,
        'num_initial_points': 15,
        'space': {
            'cross_fields|boost': {'low': 0.0, 'high': 5.0},
            'best_fields|boost': {'low': 0.0, 'high': 5.0},
        },
        'default': all_params,
    }))

_ = plot_objective(metadata_boosts, sample_source='result')
Machine Learning/Query Optimization/notebooks/Appendix B - Combining queries.ipynb
elastic/examples
apache-2.0
Seems that there's not much to tune here, but let's keep going.
%%time
_ = evaluate_mrr100_dev(es, max_concurrent_searches, index, template_id,
                        params=final_params_boosts)
Machine Learning/Query Optimization/notebooks/Appendix B - Combining queries.ipynb
elastic/examples
apache-2.0
So that's the same as without tuning. What's going on? Debugging Plot scores from each sub-query to determine why we don't really see an improvement over individual queries.
import os
from itertools import chain

import matplotlib.pyplot as plt

from qopt.notebooks import ROOT_DIR
from qopt.search import temporary_search_template, search_template
from qopt.trec import load_queries_as_tuple_list, load_qrels

def collect_scores():
    def _search(template_id, query_string, params, doc_id):
        res = search_template(es, index, template_id, query={
            'id': 0,
            'params': {
                'query_string': query_string,
                **params,
            },
        })
        return [hit['score'] for hit in res['hits'] if hit['id'] == doc_id]

    queries = load_queries_as_tuple_list(os.path.join(
        ROOT_DIR, 'data', 'msmarco-document-sampled-queries.1000.tsv'))
    qrels = load_qrels(os.path.join(
        ROOT_DIR, 'data', 'msmarco', 'document', 'msmarco-doctrain-qrels.tsv'))
    template_file = os.path.join(ROOT_DIR, 'config', 'msmarco-document-templates.json')
    size = 100

    cross_field_scores = []
    best_field_scores = []

    with temporary_search_template(es, template_file, 'cross_fields', size) as cross_fields_template_id:
        with temporary_search_template(es, template_file, 'best_fields', size) as best_fields_template_id:
            for query in queries:
                doc_id = list(qrels[query[0]].keys())[0]
                cfs = _search(cross_fields_template_id, query[1], cross_fields_params, doc_id)
                bfs = _search(best_fields_template_id, query[1], best_fields_params, doc_id)

                # keep just n scores to make sure the lists are the same length
                length = min(len(cfs), len(bfs))
                cross_field_scores.append(cfs[:length])
                best_field_scores.append(bfs[:length])

    return cross_field_scores, best_field_scores

cfs, bfs = collect_scores()

# plot scores
cfs_flat = list(chain(*cfs))
bfs_flat = list(chain(*bfs))
plt.scatter(cfs_flat, bfs_flat)
plt.show()
Machine Learning/Query Optimization/notebooks/Appendix B - Combining queries.ipynb
elastic/examples
apache-2.0
Check the dataframe to see which columns contain 0's. Based on the data type of each column, do these 0's all make sense? Which 0's are suspicious?
for name in names:
    print(name, ':', any(df.loc[:, name] == 0))
python-machine-learning/code/ch04exercises.ipynb
jeancochrane/learning
mit
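any() only says whether a zero occurs at all; counting zeros per column gives a better feel for how much data is affected. A one-line sketch over the same dataframe:

```python
# How many zeros appear in each column of the dataframe
print((df == 0).sum())
```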
Answer: Columns 2-6 (glucose, blood pressure, skin fold thickness, insulin, and BMI) all contain zeros, but none of these measurements should ever be 0 in a human.

Assume that 0s indicate missing values, and fix them in the dataset by eliminating samples with missing features. Then run a logistic regression, and measure the performance of the model.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Mark zeros in columns 2-6 (indices 1-5) as missing
for i in range(1, 6):
    df.loc[df.loc[:, names[i]] == 0, names[i]] = np.nan

df_no_nan = df.dropna(axis=0, how='any')

X = df_no_nan.iloc[:, :8].values
y = df_no_nan.iloc[:, 8].values

def fit_and_score_rlr(X, y, normalize=True):
    if normalize:
        scaler = StandardScaler().fit(X)
        X_std = scaler.transform(X)
    else:
        X_std = X
    X_train, X_test, y_train, y_test = train_test_split(
        X_std, y, test_size=0.33, random_state=42)
    rlr = LogisticRegression(C=1)
    rlr.fit(X_train, y_train)
    return rlr.score(X_test, y_test)

fit_and_score_rlr(X, y)
python-machine-learning/code/ch04exercises.ipynb
jeancochrane/learning
mit
Next, replace missing features through mean imputation. Run a regression and measure the performance of the model.
from sklearn.preprocessing import Imputer

# Impute column-wise (axis=0): replace each missing value with the mean of its feature
imputer = Imputer(missing_values='NaN', strategy='mean', axis=0)
X = imputer.fit_transform(df.iloc[:, :8].values)
y = df.iloc[:, 8].values

fit_and_score_rlr(X, y)
python-machine-learning/code/ch04exercises.ipynb
jeancochrane/learning
mit
Comment on your results.

Answer: Interestingly, there's not a huge performance improvement between the two approaches! In my run, using mean imputation corresponded to about a 3-point increase in model performance. Some ideas for why this might be:

- This is a small dataset to start out with, so removing ~half its samples doesn't change performance very much
- There's not much information contained in the features with missing data
- There are other effects underlying poor performance of the model (e.g. regularization parameters) that are having a greater impact

Preprocessing categorical variables

Load the TA evaluation dataset. As before, the data and header are split into two files, so you'll have to combine them yourself.
data_url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/tae/tae.data'
names = ['native_speaker', 'instructor', 'course', 'season', 'class_size', 'rating']
df = pandas.read_csv(data_url, header=None, index_col=False, names=names)
print(df)
python-machine-learning/code/ch04exercises.ipynb
jeancochrane/learning
mit
Which of the features are categorical? Are they ordinal, or nominal? Which features are numeric?

Answer: According to the documentation:

- Native speaker: categorical (nominal)
- Instructor: categorical (nominal)
- Course: categorical (nominal)
- Season: categorical (nominal)
- Class size: numeric
- Rating: categorical (ordinal)

Encode the categorical variables in a naive fashion, by leaving them in place as numerics. Run a classification and measure performance against a test set.
X = df.iloc[:, :-1].values
y = df.iloc[:, -1].values

fit_and_score_rlr(X, y, normalize=True)
python-machine-learning/code/ch04exercises.ipynb
jeancochrane/learning
mit
Now, encode the categorical variables with a one-hot encoder. Again, run a classification and measure performance.
from sklearn.preprocessing import OneHotEncoder enc = OneHotEncoder(categorical_features=range(5)) X_encoded = enc.fit_transform(X) fit_and_score_rlr(X_encoded, y, normalize=False)
python-machine-learning/code/ch04exercises.ipynb
jeancochrane/learning
mit
Comment on your results.

Feature scaling

Raschka mentions that decision trees and random forests do not require standardized features prior to classification, while the rest of the classifiers we've seen so far do. Why might that be? Explain the intuition behind this idea based on the differences between tree-based classifiers and the other classifiers we've seen.

Now, we'll test the two scaling algorithms on the wine dataset. Start by loading the wine dataset. Scale the features via "standardization" (as Raschka describes it). Classify and measure performance. Scale the features via "normalization" (as Raschka describes it). Again, classify and measure performance. Comment on your results.

Feature selection

Implement SBS below. Then, run the tests.
class SBS(object):
    """
    Class to select the k-best features in a dataset via
    sequential backwards selection.
    """
    def __init__(self):
        """Initialize the SBS model."""
        pass

    def fit(self):
        """Fit SBS to a dataset."""
        pass

    def transform(self):
        """Transform a dataset based on the model."""
        pass

    def fit_transform(self):
        """Fit SBS to a dataset and transform it, returning the k-best features."""
        pass
python-machine-learning/code/ch04exercises.ipynb
jeancochrane/learning
mit
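For reference, here is one possible answer to the skeleton above — a minimal sketch of sequential backward selection, loosely following the interface Raschka describes. The constructor arguments and defaults are illustrative: at each step it drops whichever feature's removal costs the least validation accuracy, until k_features remain.

```python
from itertools import combinations

import numpy as np
from sklearn.base import clone
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split


class SBS(object):
    def __init__(self, estimator, k_features, scoring=accuracy_score,
                 test_size=0.25, random_state=1):
        self.estimator = clone(estimator)
        self.k_features = k_features
        self.scoring = scoring
        self.test_size = test_size
        self.random_state = random_state

    def fit(self, X, y):
        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=self.test_size, random_state=self.random_state)
        # Start with all features and prune one at a time.
        self.indices_ = tuple(range(X.shape[1]))
        self.scores_ = [self._score(X_train, y_train, X_test, y_test,
                                    self.indices_)]
        while len(self.indices_) > self.k_features:
            # Evaluate every subset with one feature removed...
            subsets = list(combinations(self.indices_, len(self.indices_) - 1))
            scores = [self._score(X_train, y_train, X_test, y_test, s)
                      for s in subsets]
            # ...and keep the best-scoring subset.
            best = np.argmax(scores)
            self.indices_ = subsets[best]
            self.scores_.append(scores[best])
        return self

    def transform(self, X):
        return X[:, self.indices_]

    def fit_transform(self, X, y):
        return self.fit(X, y).transform(X)

    def _score(self, X_train, y_train, X_test, y_test, indices):
        self.estimator.fit(X_train[:, indices], y_train)
        y_pred = self.estimator.predict(X_test[:, indices])
        return self.scoring(y_test, y_pred)
```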
Train SVM on features

Using the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
# Use the validation set to tune the learning rate and regularization strength from cs231n.classifiers.linear_classifier import LinearSVM learning_rates = [1e-9, 1e-8, 1e-7] regularization_strengths = [5e4, 5e5, 5e6] #learning_rates = list(map(lambda x: x*1e-9, np.arange(0.9, 2, 0.1))) #regularization_strengths = list(map(lambda x: x*1e4, np.arange(1, 10))) results = {} best_val = -1 best_svm = None iters = 2000 ################################################################################ # TODO: # # Use the validation set to set the learning rate and regularization strength. # # This should be identical to the validation that you did for the SVM; save # # the best trained classifer in best_svm. You might also want to play # # with different numbers of bins in the color histogram. If you are careful # # you should be able to get accuracy of near 0.44 on the validation set. # ################################################################################ for lr in learning_rates: for reg in regularization_strengths: print('Training with lr={0}, reg={1}'.format(lr, reg)) svm = LinearSVM() loss_hist = svm.train(X_train_feats, y_train, learning_rate=lr, reg=reg, num_iters=iters) y_train_pred = svm.predict(X_train_feats) y_val_pred = svm.predict(X_val_feats) train_accuracy = np.mean(y_train == y_train_pred) validation_accuracy = np.mean(y_val == y_val_pred) if validation_accuracy > best_val: best_val = validation_accuracy best_svm = svm results[(lr, reg)] = (validation_accuracy, train_accuracy) ################################################################################ # END OF YOUR CODE # ################################################################################ # Print out results. for lr, reg in sorted(results): train_accuracy, val_accuracy = results[(lr, reg)] print('lr %e reg %e train accuracy: %f val accuracy: %f' % ( lr, reg, train_accuracy, val_accuracy)) print('best validation accuracy achieved during cross-validation: %f' % best_val) # Evaluate your trained SVM on the test set y_test_pred = best_svm.predict(X_test_feats) test_accuracy = np.mean(y_test == y_test_pred) print(test_accuracy) # An important way to gain intuition about how an algorithm works is to # visualize the mistakes that it makes. In this visualization, we show examples # of images that are misclassified by our current system. The first column # shows images that our system labeled as "plane" but whose true label is # something other than "plane". examples_per_class = 8 classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'] for cls, cls_name in enumerate(classes): idxs = np.where((y_test != cls) & (y_test_pred == cls))[0] idxs = np.random.choice(idxs, examples_per_class, replace=False) for i, idx in enumerate(idxs): plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1) plt.imshow(X_test[idx].astype('uint8')) plt.axis('off') if i == 0: plt.title(cls_name) plt.show()
assignment1/features.ipynb
miguelfrde/stanford-cs231n
mit
Inline question 1: Describe the misclassification results that you see. Do they make sense?

They make sense given that we are using color histogram features, so for some results the background seems to dominate. For example, a blue or flat background for a plane, trucks classified as cars (street + background) and vice versa, etc.

Neural Network on image features

Earlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
print(X_train_feats.shape)

from cs231n.classifiers.neural_net import TwoLayerNet

input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10

best_net = None

################################################################################
# TODO: Train a two-layer neural network on image features. You may want to   #
# cross-validate various parameters as in previous sections. Store your best  #
# model in the best_net variable.                                             #
################################################################################
learning_rates = np.arange(0.1, 1.6, 0.1)
regularization_params = [1e-5, 1e-4, 1e-3, 1e-2, 1e-1, 1]

results = {}
best_val_accuracy = 0
for lr in learning_rates:
    for reg in regularization_params:
        net = TwoLayerNet(input_dim, hidden_dim, num_classes)
        stats = net.train(X_train_feats, y_train, X_val_feats, y_val,
                          num_iters=2000, batch_size=200,
                          learning_rate=lr, learning_rate_decay=0.95,
                          reg=reg)
        val_accuracy = (net.predict(X_val_feats) == y_val).mean()
        if val_accuracy > best_val_accuracy:
            best_val_accuracy = val_accuracy
            best_net = net
        print('LR: {0} REG: {1} ACC: {2}'.format(lr, reg, val_accuracy))
print('best validation accuracy achieved during cross-validation: {0}'.format(best_val_accuracy))
net = best_net
################################################################################
#                              END OF YOUR CODE                                #
################################################################################

# Run your neural net classifier on the test set. You should be able to
# get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)
assignment1/features.ipynb
miguelfrde/stanford-cs231n
mit
First, I made a mistake naming the data set! It's 2015 data, not 2014 data. But yes, still use 311-2014.csv. You can rename it.

Importing and preparing your data

Import your data, but only the first 200,000 rows. You'll also want to change the index to be a datetime based on the Created Date column - you'll want to check if it's already a datetime, and parse it if not.
import pandas as pd
import dateutil.parser

df = pd.read_csv("311-2014.csv", nrows=200000)

# Check that the dates parse, then build a datetime index
dateutil.parser.parse(df['Created Date'][0])

def parse_date(str_date):
    return dateutil.parser.parse(str_date)

df['created_datetime'] = df['Created Date'].apply(parse_date)
df.index = df['created_datetime']
12/homework 12 - 311 time series homework.ipynb
spm2164/foundations-homework
artistic-2.0
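A more compact alternative (a sketch over the same file) lets pandas parse the dates and set the index in one call:

```python
# Equivalent in effect to the cell above: parse dates at read time
df = pd.read_csv("311-2014.csv", nrows=200000,
                 index_col='Created Date', parse_dates=True)
```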
What was the most popular type of complaint, and how many times was it filed?
df['Complaint Type'].describe()
12/homework 12 - 311 time series homework.ipynb
spm2164/foundations-homework
artistic-2.0
Make a horizontal bar graph of the top 5 most frequent complaint types.
df.groupby(by='Complaint Type')['Complaint Type'].count().sort_values(ascending=False).head(5).plot(kind='barh').invert_yaxis()
12/homework 12 - 311 time series homework.ipynb
spm2164/foundations-homework
artistic-2.0
Which borough has the most complaints per capita? Since it's only 5 boroughs, you can do the math manually.
df.groupby(by='Borough')['Borough'].count()

boro_pop = {
    'BRONX': 1438159,
    'BROOKLYN': 2621793,
    'MANHATTAN': 1636268,
    'QUEENS': 2321580,
    'STATEN ISLAND': 473279,
}

boro_df = pd.Series.to_frame(df.groupby(by='Borough')['Borough'].count())
boro_df['Population'] = pd.DataFrame.from_dict(boro_pop, orient='index')
boro_df['Complaints'] = boro_df['Borough']
boro_df.drop('Borough', axis=1, inplace=True)
boro_df['Per Capita'] = boro_df['Complaints'] / boro_df['Population']
boro_df['Per Capita'].plot(kind='bar')
12/homework 12 - 311 time series homework.ipynb
spm2164/foundations-homework
artistic-2.0
According to your selection of data, how many cases were filed in March? How about May?
df['2015-03']['Created Date'].count()
df['2015-05']['Created Date'].count()
12/homework 12 - 311 time series homework.ipynb
spm2164/foundations-homework
artistic-2.0
What was the most popular type of complaint on April 1st?
df['2015-04-01'].groupby(by='Complaint Type')['Complaint Type'].count().sort_values(ascending=False).head(1)
12/homework 12 - 311 time series homework.ipynb
spm2164/foundations-homework
artistic-2.0
What were the most popular three types of complaint on April 1st
df['2015-04-01'].groupby(by='Complaint Type')['Complaint Type'].count().sort_values(ascending=False).head(3)
12/homework 12 - 311 time series homework.ipynb
spm2164/foundations-homework
artistic-2.0
What month has the most reports filed? How many? Graph it.
df.resample('M')['Unique Key'].count().sort_values(ascending=False)
df.resample('M').count().plot(y='Unique Key')
12/homework 12 - 311 time series homework.ipynb
spm2164/foundations-homework
artistic-2.0
What week of the year has the most reports filed? How many? Graph the weekly complaints.
df.resample('W')['Unique Key'].count().sort_values(ascending=False).head(5)
df.resample('W').count().plot(y='Unique Key')
12/homework 12 - 311 time series homework.ipynb
spm2164/foundations-homework
artistic-2.0
Noise complaints are a big deal. Use .str.contains to select noise complaints, and make an chart of when they show up annually. Then make a chart about when they show up every day (cyclic).
noise_df = df[df['Complaint Type'].str.contains('Noise')]
noise_df.resample('M').count().plot(y='Unique Key')
noise_df.groupby(by=noise_df.index.hour).count().plot(y='Unique Key')
12/homework 12 - 311 time series homework.ipynb
spm2164/foundations-homework
artistic-2.0
What hour of the day are the most complaints? Graph a day of complaints.
df['Unique Key'].groupby(by=df.index.hour).count().sort_values(ascending=False)
df['Unique Key'].groupby(df.index.hour).count().plot()
12/homework 12 - 311 time series homework.ipynb
spm2164/foundations-homework
artistic-2.0
One of the hours has an odd number of complaints. What are the most common complaints at that hour, and what are the most common complaints the hour before and after?
# Most common complaints at midnight (hour 0), the hour after (1), and the hour before (23)
df[df.index.hour==0].groupby(by='Complaint Type')['Complaint Type'].count().sort_values(ascending=False).head(5)
df[df.index.hour==1].groupby(by='Complaint Type')['Complaint Type'].count().sort_values(ascending=False).head(5)
df[df.index.hour==23].groupby(by='Complaint Type')['Complaint Type'].count().sort_values(ascending=False).head(5)
12/homework 12 - 311 time series homework.ipynb
spm2164/foundations-homework
artistic-2.0
So odd. What's the per-minute breakdown of complaints between 12am and 1am? You don't need to include 1am.
midnight_df = df[df.index.hour==0]
midnight_df.groupby(midnight_df.index.minute)['Unique Key'].count().sort_values(ascending=False)
12/homework 12 - 311 time series homework.ipynb
spm2164/foundations-homework
artistic-2.0
Looks like midnight is a little bit of an outlier. Why might that be? Take the 5 most common agencies and graph the times they file reports at (all day, not just midnight).
df.groupby('Agency')['Unique Key'].count().sort_values(ascending=False).head(5)

ax = df[df['Agency']=='NYPD'].groupby(df[df['Agency']=='NYPD'].index.hour)['Unique Key'].count().plot(legend=True, label='NYPD')
df[df['Agency']=='HPD'].groupby(df[df['Agency']=='HPD'].index.hour)['Unique Key'].count().plot(ax=ax, legend=True, label='HPD')
df[df['Agency']=='DOT'].groupby(df[df['Agency']=='DOT'].index.hour)['Unique Key'].count().plot(ax=ax, legend=True, label='DOT')
df[df['Agency']=='DPR'].groupby(df[df['Agency']=='DPR'].index.hour)['Unique Key'].count().plot(ax=ax, legend=True, label='DPR')
df[df['Agency']=='DOHMH'].groupby(df[df['Agency']=='DOHMH'].index.hour)['Unique Key'].count().plot(ax=ax, legend=True, label='DOHMH')
12/homework 12 - 311 time series homework.ipynb
spm2164/foundations-homework
artistic-2.0
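The five near-identical lines above can be folded into a loop; a sketch with the same agencies:

```python
# Same hourly plot, one line per agency
ax = None
for agency in ['NYPD', 'HPD', 'DOT', 'DPR', 'DOHMH']:
    sub = df[df['Agency'] == agency]
    ax = sub.groupby(sub.index.hour)['Unique Key'].count().plot(
        ax=ax, legend=True, label=agency)
```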
Graph those same agencies on an annual basis - make it weekly. When do people like to complain? When does the NYPD have an odd number of complaints?
ax = df[df['Agency']=='NYPD'].groupby(df[df['Agency']=='NYPD'].index.week)['Unique Key'].count().plot(legend=True, label='NYPD')
df[df['Agency']=='HPD'].groupby(df[df['Agency']=='HPD'].index.week)['Unique Key'].count().plot(ax=ax, legend=True, label='HPD')
df[df['Agency']=='DOT'].groupby(df[df['Agency']=='DOT'].index.week)['Unique Key'].count().plot(ax=ax, legend=True, label='DOT')
df[df['Agency']=='DPR'].groupby(df[df['Agency']=='DPR'].index.week)['Unique Key'].count().plot(ax=ax, legend=True, label='DPR')
df[df['Agency']=='DOHMH'].groupby(df[df['Agency']=='DOHMH'].index.week)['Unique Key'].count().plot(ax=ax, legend=True, label='DOHMH')
12/homework 12 - 311 time series homework.ipynb
spm2164/foundations-homework
artistic-2.0
Maybe the NYPD deals with different issues at different times? Check the most popular complaints in July and August vs the month of May. Also check the most common complaints for the Housing Preservation Bureau (HPD) in winter vs. summer.
nypd = df[df['Agency']=='NYPD']
nypd[(nypd.index.month==7) | (nypd.index.month==8)].groupby('Complaint Type')['Complaint Type'].count().sort_values(ascending=False).head(5)
nypd[nypd.index.month==5].groupby('Complaint Type')['Complaint Type'].count().sort_values(ascending=False).head(5)
# seems like mostly noise complaints and bad parking to me

hpd = df[df['Agency']=='HPD']
# I would consider summer to be June to August
hpd[(hpd.index.month>=6) & (hpd.index.month<=8)].groupby('Complaint Type')['Complaint Type'].count().sort_values(ascending=False).head(5)
hpd[(hpd.index.month==12) | (hpd.index.month<=2)].groupby('Complaint Type')['Complaint Type'].count().sort_values(ascending=False).head(5)
# pretty similar list, but people probably notice a draft from their bad window or door more easily in the winter than in the summer
12/homework 12 - 311 time series homework.ipynb
spm2164/foundations-homework
artistic-2.0
You should see the output "Hello World!". Once you've verified this, interrupt the running cell above by hitting the stop button.

Create and build a Docker image

Now we will create a docker image called hello_node.docker that will do the following:

- Start from the node image found on Docker Hub by inheriting from node:6.9.2
- Expose port 8000
- Copy the ./src/server.js file to the image
- Start the node server as we previously did manually

Save your Dockerfile in the folder labeled dockerfiles. Your finished Dockerfile should look something like this:

```bash
FROM node:6.9.2
EXPOSE 8000
COPY ./src/server.js .
CMD node server.js
```

Next, build the image in your project using docker build.
import os

PROJECT_ID = "your-gcp-project-here"  # REPLACE WITH YOUR PROJECT NAME
os.environ["PROJECT_ID"] = PROJECT_ID

%%bash
docker build -f dockerfiles/hello_node.docker -t gcr.io/${PROJECT_ID}/hello-node:v1 .
notebooks/docker_and_kubernetes/solutions/3_k8s_hello_node.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
A declarative approach is being used here. Rather than starting or stopping new instances, you declare how many instances should be running at all times. Kubernetes' reconciliation loop makes sure that reality matches what you requested, and takes action if needed. Here's a diagram summarizing the state of your Kubernetes cluster:

<img src='../assets/k8s_cluster.png' width='60%'>

Roll out an upgrade to your service

At some point the application that you've deployed to production will require bug fixes or additional features. Kubernetes helps you deploy a new version to production without impacting your users. First, modify the application by opening server.js so that the response is:

```bash
response.end("Hello Kubernetes World!");
```

Now you can build and publish a new container image to the registry with an incremented tag (v2 in this case).

Note: Building and pushing this updated image should be quicker since caching is being taken advantage of.
%%bash
docker build -f dockerfiles/hello_node.docker -t gcr.io/${PROJECT_ID}/hello-node:v2 .
docker push gcr.io/${PROJECT_ID}/hello-node:v2
notebooks/docker_and_kubernetes/solutions/3_k8s_hello_node.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
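After pushing v2, the Deployment presumably needs to be pointed at the new image before users see the change. The exact command isn't shown in this excerpt, but assuming the Deployment created earlier is named hello-node, it would look like:

```bash
%%bash
# Assumes a Deployment named hello-node; Kubernetes then rolls pods over gradually
kubectl set image deployment/hello-node hello-node=gcr.io/${PROJECT_ID}/hello-node:v2
kubectl rollout status deployment/hello-node
```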
OpenSpiel

This Colab gets you started with the basics of OpenSpiel. OpenSpiel is a framework for reinforcement learning in games. The code is hosted on GitHub. There is an accompanying video tutorial that works through this colab; it will be linked here once it is live. There is also an OpenSpiel paper with more detail.

Install

The following command will install OpenSpiel via pip. Only the required dependencies are installed. You may need other dependencies if you use some of the algorithms. There is a complete list of packages and versions we install for the CI tests, which can be installed as necessary.
!pip install --upgrade open_spiel
open_spiel/colabs/OpenSpielTutorial.ipynb
deepmind/open_spiel
apache-2.0
Part 1. OpenSpiel API Basics.
# Importing pyspiel and showing the list of supported games. import pyspiel print(pyspiel.registered_names()) # Loading a game (with no/default parameters). game = pyspiel.load_game("tic_tac_toe") print(game) # Some properties of the games. print(game.num_players()) print(game.max_utility()) print(game.min_utility()) print(game.num_distinct_actions()) # Creating initial states. state = game.new_initial_state() print(state) # Basic information about states. print(state.current_player()) print(state.is_terminal()) print(state.returns()) print(state.legal_actions()) # Playing the game: applying actions. state = game.new_initial_state() state.apply_action(1) print(state) print(state.current_player()) state.apply_action(2) state.apply_action(4) state.apply_action(0) state.apply_action(7) print(state) print(state.is_terminal()) print(state.player_return(0)) # win for x (player 0) print(state.current_player()) # Different game: Breakthrough with default parameters (number of rows and columns are both 8) game = pyspiel.load_game("breakthrough") state = game.new_initial_state() print(state) # Parameterized games: loading a 6x6 Breakthrough. game = pyspiel.load_game("breakthrough(rows=6,columns=6)") state = game.new_initial_state() print(state) print(state.legal_actions()) print(game.num_distinct_actions()) for action in state.legal_actions(): print(f"{action} {state.action_to_string(action)}")
open_spiel/colabs/OpenSpielTutorial.ipynb
deepmind/open_spiel
apache-2.0
Part 2. Normal-form Games and Evolutionary Dynamics in OpenSpiel.
import pyspiel game = pyspiel.create_matrix_game([[1, -1], [-1, 1]], [[-1, 1], [1, -1]]) print(game) # name not provided: uses a default state = game.new_initial_state() print(state) # action names also not provided; defaults used # Normal-form games are 1-step simultaneous-move games. print(state.current_player()) # special player id print(state.legal_actions(0)) # query legal actions for each player print(state.legal_actions(1)) print(state.is_terminal()) # Applying a joint action (one action per player) state.apply_actions([0, 0]) print(state.is_terminal()) print(state.returns()) # Evolutionary dynamics in Rock, Paper, Scissors from open_spiel.python.egt import dynamics from open_spiel.python.egt.utils import game_payoffs_array import numpy as np game = pyspiel.load_matrix_game("matrix_rps") # load the Rock, Paper, Scissors matrix game payoff_matrix = game_payoffs_array(game) # convert any normal-form game to a numpy payoff matrix dyn = dynamics.SinglePopulationDynamics(payoff_matrix, dynamics.replicator) x = np.array([0.2, 0.2, 0.6]) # population heavily-weighted toward scissors dyn(x) # Choose a step size and apply the dynamic alpha = 0.01 x += alpha * dyn(x) print(x) x += alpha * dyn(x) print(x) x += alpha * dyn(x) x += alpha * dyn(x) x += alpha * dyn(x) x += alpha * dyn(x) print(x)
open_spiel/colabs/OpenSpielTutorial.ipynb
deepmind/open_spiel
apache-2.0
Part 3. Chance Nodes and Partially-Observable Games.
# Kuhn poker: simplified poker with a 3-card deck (https://en.wikipedia.org/wiki/Kuhn_poker) import pyspiel game = pyspiel.load_game("kuhn_poker") print(game.num_distinct_actions()) # bet and fold # Chance nodes. state = game.new_initial_state() print(state.current_player()) # special chance player id print(state.is_chance_node()) print(state.chance_outcomes()) # distibution over outcomes as a list of (outcome, probability) pairs # Applying chance node outcomes: same function as applying actions. state.apply_action(0) # let's choose the first card (jack) print(state.is_chance_node()) # still at a chance node (player 2's card). print(state.chance_outcomes()) # jack no longer a possible outcome state.apply_action(1) # second player gets the queen print(state.current_player()) # no longer chance node, time to play! # States vs. information states print(state) # ground/world state (all information open) print(state.legal_actions()) for action in state.legal_actions(): print(state.action_to_string(action)) print(state.information_state_string()) # only current player's information! # Take an action (pass / check), second player's turn. # Information state tensor is vector of floats (often bits) representing the information state. state.apply_action(0) print(state.current_player()) print(state.information_state_string()) # now contains second player's card and the public action sequence print(state.information_state_tensor()) # Leduc poker is a larger game (6 cards, two suits), 3 actions: fold, check/call, raise. game = pyspiel.load_game("leduc_poker") print(game.num_distinct_actions()) state = game.new_initial_state() print(state) state.apply_action(0) # first player gets first jack state.apply_action(1) # second player gets second jack print(state.current_player()) print(state.information_state_string()) print(state.information_state_tensor()) # Let's check until the second round. print(state.legal_actions_mask()) # Helper function for neural networks. state.apply_action(1) # check state.apply_action(1) # check print(state) print(state.chance_outcomes()) # public card (4 left in the deck) state.apply_action(2) print(state.information_state_string()) # player 0's turn again.
open_spiel/colabs/OpenSpielTutorial.ipynb
deepmind/open_spiel
apache-2.0
Part 4. Basic RL: Self-play Q-Learning in Tic-Tac-Toe.
# Let's do independent Q-learning in Tic-Tac-Toe, and play it against random.
# RL is based on python/examples/independent_tabular_qlearning.py
from open_spiel.python import rl_environment
from open_spiel.python import rl_tools
from open_spiel.python.algorithms import tabular_qlearner

# Create the environment
env = rl_environment.Environment("tic_tac_toe")
num_players = env.num_players
num_actions = env.action_spec()["num_actions"]

# Create the agents
agents = [
    tabular_qlearner.QLearner(player_id=idx, num_actions=num_actions)
    for idx in range(num_players)
]

# Train the Q-learning agents in self-play.
for cur_episode in range(25000):
    if cur_episode % 1000 == 0:
        print(f"Episodes: {cur_episode}")
    time_step = env.reset()
    while not time_step.last():
        player_id = time_step.observations["current_player"]
        agent_output = agents[player_id].step(time_step)
        time_step = env.step([agent_output.action])
    # Episode is over, step all agents with final info state.
    for agent in agents:
        agent.step(time_step)
print("Done!")

# Evaluate the Q-learning agent against a random agent.
from open_spiel.python.algorithms import random_agent

eval_agents = [agents[0],
               random_agent.RandomAgent(1, num_actions, "Entropy Master 2000")]

time_step = env.reset()
while not time_step.last():
    print("")
    print(env.get_state)
    player_id = time_step.observations["current_player"]
    # Note the evaluation flag. A Q-learner will set epsilon=0 here.
    agent_output = eval_agents[player_id].step(time_step, is_evaluation=True)
    print(f"Agent {player_id} chooses {env.get_state.action_to_string(agent_output.action)}")
    time_step = env.step([agent_output.action])

print("")
print(env.get_state)
print(time_step.rewards)
open_spiel/colabs/OpenSpielTutorial.ipynb
deepmind/open_spiel
apache-2.0
<a id = "section1">Data Generation and Visualization</a> Transformation of features to Shogun format using <a href="http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1DenseFeatures.html">RealFeatures</a> and <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1BinaryLabels.html">BinaryLables</a> classes.
shogun_feats_linear = sg.create_features(sg.read_csv(os.path.join(
    SHOGUN_DATA_DIR, 'toy/classifier_binary_2d_linear_features_train.dat')))
shogun_labels_linear = sg.create_labels(sg.read_csv(os.path.join(
    SHOGUN_DATA_DIR, 'toy/classifier_binary_2d_linear_labels_train.dat')))

shogun_feats_non_linear = sg.create_features(sg.read_csv(os.path.join(
    SHOGUN_DATA_DIR, 'toy/classifier_binary_2d_nonlinear_features_train.dat')))
shogun_labels_non_linear = sg.create_labels(sg.read_csv(os.path.join(
    SHOGUN_DATA_DIR, 'toy/classifier_binary_2d_nonlinear_labels_train.dat')))

feats_linear = shogun_feats_linear.get('feature_matrix')
labels_linear = shogun_labels_linear.get('labels')

feats_non_linear = shogun_feats_non_linear.get('feature_matrix')
labels_non_linear = shogun_labels_non_linear.get('labels')
doc/ipython-notebooks/classification/Classification.ipynb
geektoni/shogun
bsd-3-clause
Data visualization methods.
def plot_binary_data(plot, X_train, y_train):
    """Plot 2D binary data with different colors for different labels."""
    plot.xlabel(r"$x$")
    plot.ylabel(r"$y$")
    plot.plot(X_train[0, np.argwhere(y_train == 1)],
              X_train[1, np.argwhere(y_train == 1)], 'ro')
    plot.plot(X_train[0, np.argwhere(y_train == -1)],
              X_train[1, np.argwhere(y_train == -1)], 'bo')

def compute_plot_isolines(classifier, feats, size=200, fading=True):
    """Classify points on a grid to get the decision boundaries used in plotting."""
    x1 = np.linspace(1.2*min(feats[0]), 1.2*max(feats[0]), size)
    x2 = np.linspace(1.2*min(feats[1]), 1.2*max(feats[1]), size)
    x, y = np.meshgrid(x1, x2)

    plot_features = sg.create_features(np.array((np.ravel(x), np.ravel(y))))

    if fading == True:
        plot_labels = classifier.apply_binary(plot_features).get('current_values')
    else:
        plot_labels = classifier.apply(plot_features).get('labels')

    z = plot_labels.reshape((size, size))
    return x, y, z

def plot_model(plot, classifier, features, labels, fading=True):
    """Plot an input classification model."""
    x, y, z = compute_plot_isolines(classifier, features, fading=fading)
    plot.pcolor(x, y, z, cmap='RdBu_r')
    plot.contour(x, y, z, linewidths=1, colors='black')
    plot_binary_data(plot, features, labels)

plt.figure(figsize=(15,5))
plt.subplot(121)
plt.title("Linear Features")
plot_binary_data(plt, feats_linear, labels_linear)
plt.subplot(122)
plt.title("Non Linear Features")
plot_binary_data(plt, feats_non_linear, labels_non_linear)
doc/ipython-notebooks/classification/Classification.ipynb
geektoni/shogun
bsd-3-clause
<a id="section2" href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1SVM.html">Support Vector Machine</a> <a id="section2a" href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1LibLinear.html">Linear SVM</a> Shogun provide <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1LibLinear.html">Liblinear</a> which is a library for large-scale linear learning focusing on SVM used for classification
plt.figure(figsize=(15,5))

c = 0.5
epsilon = 1e-3

svm_linear = sg.create_machine("LibLinear", C1=c, C2=c,
                               labels=shogun_labels_linear,
                               epsilon=epsilon,
                               liblinear_solver_type="L2R_L2LOSS_SVC")
svm_linear.train(shogun_feats_linear)
classifiers_linear.append(svm_linear)
classifiers_names.append("SVM Linear")
fadings.append(True)

plt.subplot(121)
plt.title("Linear SVM - Linear Features")
plot_model(plt, svm_linear, feats_linear, labels_linear)

svm_non_linear = sg.create_machine("LibLinear", C1=c, C2=c,
                                   labels=shogun_labels_non_linear,
                                   epsilon=epsilon,
                                   liblinear_solver_type="L2R_L2LOSS_SVC")
svm_non_linear.train(shogun_feats_non_linear)
classifiers_non_linear.append(svm_non_linear)

plt.subplot(122)
plt.title("Linear SVM - Non Linear Features")
plot_model(plt, svm_non_linear, feats_non_linear, labels_non_linear)
doc/ipython-notebooks/classification/Classification.ipynb
geektoni/shogun
bsd-3-clause
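As a quick sanity check that is not in the original notebook, you can score the trained linear SVM on its own training data, reusing the apply(...).get('labels') pattern from plot_model together with scikit-learn's accuracy metric; a minimal sketch:

from sklearn.metrics import accuracy_score

# Training-set accuracy of the linear SVM (optimistic, but a useful smoke test).
predicted = svm_linear.apply(shogun_feats_linear).get('labels')
print("Training accuracy: {0:.3f}".format(accuracy_score(labels_linear, predicted)))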
SVM - Kernels Shogun provides many options for using kernel functions. Kernels in Shogun are built on two base classes: <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1Kernel.html">Kernel</a> and <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1KernelMachine.html">KernelMachine</a>. <a id ="section2b" href = "http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1GaussianKernel.html">Gaussian Kernel</a>
gaussian_c = 0.7 gaussian_kernel_linear = sg.create_kernel("GaussianKernel", width=20) gaussian_svm_linear = sg.create_machine('LibSVM', C1=gaussian_c, C2=gaussian_c, kernel=gaussian_kernel_linear, labels=shogun_labels_linear) gaussian_svm_linear.train(shogun_feats_linear) classifiers_linear.append(gaussian_svm_linear) fadings.append(True) gaussian_kernel_non_linear = sg.create_kernel("GaussianKernel", width=10) gaussian_svm_non_linear=sg.create_machine('LibSVM', C1=gaussian_c, C2=gaussian_c, kernel=gaussian_kernel_non_linear, labels=shogun_labels_non_linear) gaussian_svm_non_linear.train(shogun_feats_non_linear) classifiers_non_linear.append(gaussian_svm_non_linear) classifiers_names.append("SVM Gaussian Kernel") plt.figure(figsize=(15,5)) plt.subplot(121) plt.title("SVM Gaussian Kernel - Linear Features") plot_model(plt,gaussian_svm_linear,feats_linear,labels_linear) plt.subplot(122) plt.title("SVM Gaussian Kernel - Non Linear Features") plot_model(plt,gaussian_svm_non_linear,feats_non_linear,labels_non_linear)
doc/ipython-notebooks/classification/Classification.ipynb
geektoni/shogun
bsd-3-clause
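Since kernels are objects in their own right, you can inspect the Gram matrix a kernel computes before handing it to a machine. A minimal sketch; it assumes get_kernel_matrix is exposed on the created kernel object, as in Shogun's Python interface:

# Inspect the Gaussian kernel's Gram matrix on the linear toy data.
inspect_kernel = sg.create_kernel("GaussianKernel", width=20)
inspect_kernel.init(shogun_feats_linear, shogun_feats_linear)
K = inspect_kernel.get_kernel_matrix()  # numpy array of pairwise similarities
print(K.shape, K.min(), K.max())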
<a id ="section2c" href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CSigmoidKernel.html">Sigmoid Kernel</a>
sigmoid_c = 0.9 sigmoid_kernel_linear = sg.create_kernel("SigmoidKernel", cache_size=200, gamma=1, coef0=0.5) sigmoid_kernel_linear.init(shogun_feats_linear, shogun_feats_linear) sigmoid_svm_linear = sg.create_machine('LibSVM', C1=sigmoid_c, C2=sigmoid_c, kernel=sigmoid_kernel_linear, labels=shogun_labels_linear) sigmoid_svm_linear.train() classifiers_linear.append(sigmoid_svm_linear) classifiers_names.append("SVM Sigmoid Kernel") fadings.append(True) plt.figure(figsize=(15,5)) plt.subplot(121) plt.title("SVM Sigmoid Kernel - Linear Features") plot_model(plt,sigmoid_svm_linear,feats_linear,labels_linear) sigmoid_kernel_non_linear = sg.create_kernel("SigmoidKernel", cache_size=400, gamma=2.5, coef0=2) sigmoid_kernel_non_linear.init(shogun_feats_non_linear, shogun_feats_non_linear) sigmoid_svm_non_linear = sg.create_machine('LibSVM', C1=sigmoid_c, C2=sigmoid_c, kernel=sigmoid_kernel_non_linear, labels=shogun_labels_non_linear) sigmoid_svm_non_linear.train() classifiers_non_linear.append(sigmoid_svm_non_linear) plt.subplot(122) plt.title("SVM Sigmoid Kernel - Non Linear Features") plot_model(plt,sigmoid_svm_non_linear,feats_non_linear,labels_non_linear)
doc/ipython-notebooks/classification/Classification.ipynb
geektoni/shogun
bsd-3-clause
<a id ="section2d" href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CPolyKernel.html">Polynomial Kernel</a>
poly_c = 0.5 degree = 4 poly_kernel_linear = sg.create_kernel('PolyKernel', degree=degree, c=1.0) poly_kernel_linear.init(shogun_feats_linear, shogun_feats_linear) poly_svm_linear = sg.create_machine('LibSVM', C1=poly_c, C2=poly_c, kernel=poly_kernel_linear, labels=shogun_labels_linear) poly_svm_linear.train() classifiers_linear.append(poly_svm_linear) classifiers_names.append("SVM Polynomial kernel") fadings.append(True) plt.figure(figsize=(15,5)) plt.subplot(121) plt.title("SVM Polynomial Kernel - Linear Features") plot_model(plt,poly_svm_linear,feats_linear,labels_linear) poly_kernel_non_linear = sg.create_kernel('PolyKernel', degree=degree, c=1.0) poly_kernel_non_linear.init(shogun_feats_non_linear, shogun_feats_non_linear) poly_svm_non_linear = sg.create_machine('LibSVM', C1=poly_c, C2=poly_c, kernel=poly_kernel_non_linear, labels=shogun_labels_non_linear) poly_svm_non_linear.train() classifiers_non_linear.append(poly_svm_non_linear) plt.subplot(122) plt.title("SVM Polynomial Kernel - Non Linear Features") plot_model(plt,poly_svm_non_linear,feats_non_linear,labels_non_linear)
doc/ipython-notebooks/classification/Classification.ipynb
geektoni/shogun
bsd-3-clause
<a id ="section3" href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1GaussianNaiveBayes.html">Naive Bayes</a>
multiclass_labels_linear = shogun_labels_linear.get('labels') for i in range(0,len(multiclass_labels_linear)): if multiclass_labels_linear[i] == -1: multiclass_labels_linear[i] = 0 multiclass_labels_non_linear = shogun_labels_non_linear.get('labels') for i in range(0,len(multiclass_labels_non_linear)): if multiclass_labels_non_linear[i] == -1: multiclass_labels_non_linear[i] = 0 shogun_multiclass_labels_linear = sg.MulticlassLabels(multiclass_labels_linear) shogun_multiclass_labels_non_linear = sg.MulticlassLabels(multiclass_labels_non_linear) naive_bayes_linear = sg.create_machine("GaussianNaiveBayes") naive_bayes_linear.put('features', shogun_feats_linear) naive_bayes_linear.put('labels', shogun_multiclass_labels_linear) naive_bayes_linear.train() classifiers_linear.append(naive_bayes_linear) classifiers_names.append("Naive Bayes") fadings.append(False) plt.figure(figsize=(15,5)) plt.subplot(121) plt.title("Naive Bayes - Linear Features") plot_model(plt,naive_bayes_linear,feats_linear,labels_linear,fading=False) naive_bayes_non_linear = sg.create_machine("GaussianNaiveBayes") naive_bayes_non_linear.put('features', shogun_feats_non_linear) naive_bayes_non_linear.put('labels', shogun_multiclass_labels_non_linear) naive_bayes_non_linear.train() classifiers_non_linear.append(naive_bayes_non_linear) plt.subplot(122) plt.title("Naive Bayes - Non Linear Features") plot_model(plt,naive_bayes_non_linear,feats_non_linear,labels_non_linear,fading=False)
doc/ipython-notebooks/classification/Classification.ipynb
geektoni/shogun
bsd-3-clause
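A side note on the label conversion above: Shogun's MulticlassLabels expect class indices starting at 0, hence the -1 to 0 remapping. The element-wise loops can be replaced by a single vectorized step; a minimal numpy sketch producing the same result:

import numpy as np

# Map -1 to 0 and leave +1 untouched in one vectorized operation.
binary = shogun_labels_linear.get('labels')
multiclass = np.where(binary == -1, 0, binary)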
<a id ="section4" href="http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1KNN.html">Nearest Neighbors</a>
number_of_neighbors = 10 distances_linear = sg.create_distance('EuclideanDistance') distances_linear.init(shogun_feats_linear, shogun_feats_linear) knn_linear = sg.create_machine("KNN", k=number_of_neighbors, distance=distances_linear, labels=shogun_labels_linear) knn_linear.train() classifiers_linear.append(knn_linear) classifiers_names.append("Nearest Neighbors") fadings.append(False) plt.figure(figsize=(15,5)) plt.subplot(121) plt.title("Nearest Neighbors - Linear Features") plot_model(plt,knn_linear,feats_linear,labels_linear,fading=False) distances_non_linear = sg.create_distance('EuclideanDistance') distances_non_linear.init(shogun_feats_non_linear, shogun_feats_non_linear) knn_non_linear = sg.create_machine("KNN", k=number_of_neighbors, distance=distances_non_linear, labels=shogun_labels_non_linear) knn_non_linear.train() classifiers_non_linear.append(knn_non_linear) plt.subplot(122) plt.title("Nearest Neighbors - Non Linear Features") plot_model(plt,knn_non_linear,feats_non_linear,labels_non_linear,fading=False)
doc/ipython-notebooks/classification/Classification.ipynb
geektoni/shogun
bsd-3-clause
<a id ="section5" href="http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1CLDA.html">Linear Discriminant Analysis</a>
gamma = 0.1 lda_linear = sg.create_machine('LDA', gamma=gamma, labels=shogun_labels_linear) lda_linear.train(shogun_feats_linear) classifiers_linear.append(lda_linear) classifiers_names.append("LDA") fadings.append(True) plt.figure(figsize=(15,5)) plt.subplot(121) plt.title("LDA - Linear Features") plot_model(plt,lda_linear,feats_linear,labels_linear) lda_non_linear = sg.create_machine('LDA', gamma=gamma, labels=shogun_labels_non_linear) lda_non_linear.train(shogun_feats_non_linear) classifiers_non_linear.append(lda_non_linear) plt.subplot(122) plt.title("LDA - Non Linear Features") plot_model(plt,lda_non_linear,feats_non_linear,labels_non_linear)
doc/ipython-notebooks/classification/Classification.ipynb
geektoni/shogun
bsd-3-clause
<a id ="section6" href="http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1QDA.html">Quadratic Discriminant Analysis</a>
qda_linear = sg.create_machine("QDA", labels=shogun_multiclass_labels_linear) qda_linear.train(shogun_feats_linear) classifiers_linear.append(qda_linear) classifiers_names.append("QDA") fadings.append(False) plt.figure(figsize=(15,5)) plt.subplot(121) plt.title("QDA - Linear Features") plot_model(plt,qda_linear,feats_linear,labels_linear,fading=False) qda_non_linear = sg.create_machine("QDA", labels=shogun_multiclass_labels_non_linear) qda_non_linear.train(shogun_feats_non_linear) classifiers_non_linear.append(qda_non_linear) plt.subplot(122) plt.title("QDA - Non Linear Features") plot_model(plt,qda_non_linear,feats_non_linear,labels_non_linear,fading=False)
doc/ipython-notebooks/classification/Classification.ipynb
geektoni/shogun
bsd-3-clause
<a id ="section7" href="http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1GaussianProcessBinaryClassification.html">Gaussian Process</a> <a id ="section7a">Logit Likelihood model</a> Shogun's <a href= "http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1LogitLikelihood.html">LogitLikelihood</a> and <a href="http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1EPInferenceMethod.html">EPInferenceMethod</a> classes are used.
# create Gaussian kernel with width = 5.0
kernel = sg.create_kernel("GaussianKernel", width=5.0)
# create zero mean function
zero_mean = sg.create_gp_mean("ZeroMean")
# create logit likelihood model
likelihood = sg.create_gp_likelihood("LogitLikelihood")
# specify EP approximation inference method
inference_model_linear = sg.create_gp_inference("EPInferenceMethod", kernel=kernel,
                                                features=shogun_feats_linear,
                                                mean_function=zero_mean,
                                                labels=shogun_labels_linear,
                                                likelihood_model=likelihood)
# create and train GP classifier, which uses the EP approximation specified above
gaussian_logit_linear = sg.create_gaussian_process("GaussianProcessClassification",
                                                   inference_method=inference_model_linear)
gaussian_logit_linear.train()
classifiers_linear.append(gaussian_logit_linear)
classifiers_names.append("Gaussian Process Logit")
fadings.append(True)
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.title("Gaussian Process - Logit - Linear Features")
plot_model(plt,gaussian_logit_linear,feats_linear,labels_linear)
inference_model_non_linear = sg.create_gp_inference("EPInferenceMethod", kernel=kernel,
                                                    features=shogun_feats_non_linear,
                                                    mean_function=zero_mean,
                                                    labels=shogun_labels_non_linear,
                                                    likelihood_model=likelihood)
gaussian_logit_non_linear = sg.create_gaussian_process("GaussianProcessClassification",
                                                       inference_method=inference_model_non_linear)
gaussian_logit_non_linear.train()
classifiers_non_linear.append(gaussian_logit_non_linear)
plt.subplot(122)
plt.title("Gaussian Process - Logit - Non Linear Features")
plot_model(plt,gaussian_logit_non_linear,feats_non_linear,labels_non_linear)
doc/ipython-notebooks/classification/Classification.ipynb
geektoni/shogun
bsd-3-clause
<a id ="section7b">Probit Likelihood model</a> Shogun's <a href="http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1ProbitLikelihood.html">ProbitLikelihood</a> class is used.
likelihood = sg.create_gp_likelihood("ProbitLikelihood") inference_model_linear = sg.create_gp_inference("EPInferenceMethod", kernel=kernel, features=shogun_feats_linear, mean_function=zero_mean, labels=shogun_labels_linear, likelihood_model=likelihood) gaussian_probit_linear = sg.create_gaussian_process("GaussianProcessClassification", inference_method=inference_model_linear) gaussian_probit_linear.train() classifiers_linear.append(gaussian_probit_linear) classifiers_names.append("Gaussian Process Probit") fadings.append(True) plt.figure(figsize=(15,5)) plt.subplot(121) plt.title("Gaussian Process - Probit - Linear Features") plot_model(plt,gaussian_probit_linear,feats_linear,labels_linear) inference_model_non_linear = sg.create_gp_inference("EPInferenceMethod", kernel=kernel, features=shogun_feats_non_linear, mean_function=zero_mean, labels=shogun_labels_non_linear, likelihood_model=likelihood) gaussian_probit_non_linear = sg.create_gaussian_process("GaussianProcessClassification", inference_method=inference_model_non_linear) gaussian_probit_non_linear.train() classifiers_non_linear.append(gaussian_probit_non_linear) plt.subplot(122) plt.title("Gaussian Process - Probit - Non Linear Features") plot_model(plt,gaussian_probit_non_linear,feats_non_linear,labels_non_linear)
doc/ipython-notebooks/classification/Classification.ipynb
geektoni/shogun
bsd-3-clause
Run a modified version of check_test_score.py so that it works with the superclass representation.
import numpy as np import pylearn2.utils import pylearn2.config import theano import neukrill_net.dense_dataset import neukrill_net.utils import sklearn.metrics import argparse import os import pylearn2.config.yaml_parse
notebooks/model_run_and_result_analyses/Analyse hierarchy-based model with no augmentation or 8 augmentation.ipynb
Neuroglycerin/neukrill-net-work
mit
Check which GPU device is free and set the Theano flags accordingly.
%env THEANO_FLAGS = 'device=gpu3,floatX=float32,base_compiledir=~/.theano/stonesoup3' verbose = False augment = 1 settings = neukrill_net.utils.Settings("settings.json")
notebooks/model_run_and_result_analyses/Analyse hierarchy-based model with no augmentation or 8 augmentation.ipynb
Neuroglycerin/neukrill-net-work
mit
Give the path to the run settings .json, then load the model and score it on the test set.
run_settings = neukrill_net.utils.load_run_settings('run_settings/alexnet_based_extra_convlayer_with_superclasses.json', settings, force=True) model = pylearn2.utils.serial.load(run_settings['pickle abspath']) # format the YAML yaml_string = neukrill_net.utils.format_yaml(run_settings, settings) # load proxied objects proxied = pylearn2.config.yaml_parse.load(yaml_string, instantiate=False) # pull out proxied dataset proxdata = proxied.keywords['dataset'] # force loading of dataset and switch to test dataset proxdata.keywords['force'] = True proxdata.keywords['training_set_mode'] = 'test' proxdata.keywords['verbose'] = False # then instantiate the dataset dataset = pylearn2.config.yaml_parse._instantiate(proxdata) if hasattr(dataset.X, 'shape'): N_examples = dataset.X.shape[0] else: N_examples = len(dataset.X) batch_size = 500 while N_examples%batch_size != 0: batch_size += 1 n_batches = int(N_examples/batch_size) model.set_batch_size(batch_size) X = model.get_input_space().make_batch_theano() Y = model.fprop(X) f = theano.function([X],Y) import neukrill_net.encoding as enc hier = enc.get_hierarchy() lengths = sum([len(array) for array in hier]) y = np.zeros((N_examples*augment,lengths)) # get the data specs from the cost function using the model pcost = proxied.keywords['algorithm'].keywords['cost'] cost = pylearn2.config.yaml_parse._instantiate(pcost) data_specs = cost.get_data_specs(model) i = 0 for _ in range(augment): # make sequential iterator iterator = dataset.iterator(batch_size=batch_size,num_batches=n_batches, mode='even_sequential', data_specs=data_specs) for batch in iterator: if verbose: print(" Batch {0} of {1}".format(i+1,n_batches*augment)) y[i*batch_size:(i+1)*batch_size,:] = f(batch[0]) i += 1
notebooks/model_run_and_result_analyses/Analyse hierarchy-based model with no augmentation or 8 augmentation.ipynb
Neuroglycerin/neukrill-net-work
mit
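One non-obvious detail in the scoring code above: the while loop grows batch_size from 500 until it evenly divides N_examples, because the even_sequential iterator needs equal-sized batches. A small self-contained illustration (the example count of 3030 is made up):

# Find the smallest batch size >= 500 that divides the number of examples.
N_examples, batch_size = 3030, 500
while N_examples % batch_size != 0:
    batch_size += 1
print(batch_size, N_examples // batch_size)  # prints: 505 6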
The best .pkl scores as follows:
logloss = sklearn.metrics.log_loss(dataset.y[:, :len(settings.classes)], y[:, :len(settings.classes)]) print("Log loss: {0}".format(logloss))
notebooks/model_run_and_result_analyses/Analyse hierarchy-based model with no augmentation or 8 augmentation.ipynb
Neuroglycerin/neukrill-net-work
mit
The most recent .pkl scores as follows (to reproduce, rerun the relevant cells above with a different path):
logloss = sklearn.metrics.log_loss(dataset.y[:, :len(settings.classes)], y[:, :len(settings.classes)]) print("Log loss: {0}".format(logloss)) %env THEANO_FLAGS = device=gpu2,floatX=float32,base_compiledir=~/.theano/stonesoup2 %env
notebooks/model_run_and_result_analyses/Analyse hierarchy-based model with no augmentation or 8 augmentation.ipynb
Neuroglycerin/neukrill-net-work
mit
Check the same model trained with 8-fold augmentation.
import numpy as np import pylearn2.utils import pylearn2.config import theano import neukrill_net.dense_dataset import neukrill_net.utils import sklearn.metrics import argparse import os import pylearn2.config.yaml_parse verbose = False augment = 1 settings = neukrill_net.utils.Settings("settings.json") run_settings = neukrill_net.utils.load_run_settings('run_settings/alexnet_based_extra_convlayer_with_superclasses_aug.json', settings, force=True) model = pylearn2.utils.serial.load(run_settings['pickle abspath']) # format the YAML yaml_string = neukrill_net.utils.format_yaml(run_settings, settings) # load proxied objects proxied = pylearn2.config.yaml_parse.load(yaml_string, instantiate=False) # pull out proxied dataset proxdata = proxied.keywords['dataset'] # force loading of dataset and switch to test dataset proxdata.keywords['force'] = True proxdata.keywords['training_set_mode'] = 'test' proxdata.keywords['verbose'] = False # then instantiate the dataset dataset = pylearn2.config.yaml_parse._instantiate(proxdata) if hasattr(dataset.X, 'shape'): N_examples = dataset.X.shape[0] else: N_examples = len(dataset.X) batch_size = 500 while N_examples%batch_size != 0: batch_size += 1 n_batches = int(N_examples/batch_size) model.set_batch_size(batch_size) X = model.get_input_space().make_batch_theano() Y = model.fprop(X) f = theano.function([X],Y) import neukrill_net.encoding as enc hier = enc.get_hierarchy() lengths = sum([len(array) for array in hier]) y = np.zeros((N_examples*augment,lengths)) # get the data specs from the cost function using the model pcost = proxied.keywords['algorithm'].keywords['cost'] cost = pylearn2.config.yaml_parse._instantiate(pcost) data_specs = cost.get_data_specs(model) i = 0 for _ in range(augment): # make sequential iterator iterator = dataset.iterator(batch_size=batch_size,num_batches=n_batches, mode='even_sequential', data_specs=data_specs) for batch in iterator: if verbose: print(" Batch {0} of {1}".format(i+1,n_batches*augment)) y[i*batch_size:(i+1)*batch_size,:] = f(batch[0]) i += 1
notebooks/model_run_and_result_analyses/Analyse hierarchy-based model with no augmentation or 8 augmentation.ipynb
Neuroglycerin/neukrill-net-work
mit
The best .pkl scored as follows:
logloss = sklearn.metrics.log_loss(dataset.y[:, :len(settings.classes)], y[:, :len(settings.classes)]) print("Log loss: {0}".format(logloss))
notebooks/model_run_and_result_analyses/Analyse hierarchy-based model with no augmentation or 8 augmentation.ipynb
Neuroglycerin/neukrill-net-work
mit
Strange. Not as good as we hoped. Is there a problem with augmentation? Let's plot the nll.
import pylearn2.utils import pylearn2.config import theano import neukrill_net.dense_dataset import neukrill_net.utils import numpy as np %matplotlib inline import matplotlib.pyplot as plt #import holoviews as hl #load_ext holoviews.ipython import sklearn.metrics m = pylearn2.utils.serial.load( "/disk/scratch/neuroglycerin/models/alexnet_based_extra_convlayer_with_superclasses_aug_recent.pkl") channel = m.monitor.channels["valid_y_y_1_nll"] plt.plot(channel.example_record,channel.val_record)
notebooks/model_run_and_result_analyses/Analyse hierarchy-based model with no augmentation or 8 augmentation.ipynb
Neuroglycerin/neukrill-net-work
mit
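If you are unsure which monitoring channel to plot, the monitor's channels attribute is an ordinary dict-like mapping, so you can list its keys; a short sketch against the model loaded above:

# List every channel pylearn2's monitor recorded for this model.
for name in sorted(m.monitor.channels.keys()):
    print(name)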
Looks like it's pretty stable around 4, with one strange random glitch that happened to give the best result. Let's look at the best .pkl of the non-augmented model again, just to confirm that it was indeed good:
import numpy as np import pylearn2.utils import pylearn2.config import theano import neukrill_net.dense_dataset import neukrill_net.utils import sklearn.metrics import argparse import os import pylearn2.config.yaml_parse verbose = False augment = 1 settings = neukrill_net.utils.Settings("settings.json") run_settings = neukrill_net.utils.load_run_settings('run_settings/alexnet_based_extra_convlayer_with_superclasses.json', settings, force=True) model = pylearn2.utils.serial.load(run_settings['pickle abspath']) # format the YAML yaml_string = neukrill_net.utils.format_yaml(run_settings, settings) # load proxied objects proxied = pylearn2.config.yaml_parse.load(yaml_string, instantiate=False) # pull out proxied dataset proxdata = proxied.keywords['dataset'] # force loading of dataset and switch to test dataset proxdata.keywords['force'] = True proxdata.keywords['training_set_mode'] = 'test' proxdata.keywords['verbose'] = False # then instantiate the dataset dataset = pylearn2.config.yaml_parse._instantiate(proxdata) if hasattr(dataset.X, 'shape'): N_examples = dataset.X.shape[0] else: N_examples = len(dataset.X) batch_size = 500 while N_examples%batch_size != 0: batch_size += 1 n_batches = int(N_examples/batch_size) model.set_batch_size(batch_size) X = model.get_input_space().make_batch_theano() Y = model.fprop(X) f = theano.function([X],Y) import neukrill_net.encoding as enc hier = enc.get_hierarchy() lengths = sum([len(array) for array in hier]) y = np.zeros((N_examples*augment,lengths)) # get the data specs from the cost function using the model pcost = proxied.keywords['algorithm'].keywords['cost'] cost = pylearn2.config.yaml_parse._instantiate(pcost) data_specs = cost.get_data_specs(model) i = 0 for _ in range(augment): # make sequential iterator iterator = dataset.iterator(batch_size=batch_size,num_batches=n_batches, mode='even_sequential', data_specs=data_specs) for batch in iterator: if verbose: print(" Batch {0} of {1}".format(i+1,n_batches*augment)) y[i*batch_size:(i+1)*batch_size,:] = f(batch[0]) i += 1 logloss = sklearn.metrics.log_loss(dataset.y[:, :len(settings.classes)], y[:, :len(settings.classes)]) print("Log loss: {0}".format(logloss))
notebooks/model_run_and_result_analyses/Analyse hierarchy-based model with no augmentation or 8 augmentation.ipynb
Neuroglycerin/neukrill-net-work
mit
It was. Annoying. Let's plot the nll too:
m = pylearn2.utils.serial.load( "/disk/scratch/neuroglycerin/models/alexnet_based_extra_convlayer_with_superclasses.pkl") channel = m.monitor.channels["valid_y_y_1_nll"] plt.plot(channel.example_record,channel.val_record)
notebooks/model_run_and_result_analyses/Analyse hierarchy-based model with no augmentation or 8 augmentation.ipynb
Neuroglycerin/neukrill-net-work
mit
The SeqRecord Object The SeqRecord (Sequence Record) class is defined in the Bio.SeqRecord module. This class allows higher-level information such as identifiers and features to be associated with a sequence, and is the basic data type for the Bio.SeqIO sequence input/output interface. The SeqRecord class itself is quite simple, and offers the following information as attributes:

* .seq - The sequence itself, typically a Seq object.
* .id - The primary ID used to identify the sequence - a string. In most cases this is something like an accession number.
* .name - A 'common' name/id for the sequence - a string. In some cases this will be the same as the accession number, but it could also be a clone name. I think of this as being analogous to the LOCUS id in a GenBank record.
* .description - A human-readable description or expressive name for the sequence - a string.
* .letter_annotations - Holds per-letter annotations using a (restricted) dictionary of additional information about the letters in the sequence. The keys are the name of the information, and the information is contained in the value as a Python sequence (i.e. a list, tuple or string) with the same length as the sequence itself. This is often used for quality scores or secondary structure information (e.g. from Stockholm/PFAM alignment files).
* .annotations - A dictionary of additional information about the sequence. The keys are the name of the information, and the information is contained in the value. This allows the addition of more 'unstructured' information to the sequence.
* .features - A list of SeqFeature objects with more structured information about the features on a sequence (e.g. position of genes on a genome, or domains on a protein sequence).
* .dbxrefs - A list of database cross-references as strings.

Creating a SeqRecord Using a SeqRecord object is not very complicated, since all of the information is presented as attributes of the class. Usually you won't create a SeqRecord 'by hand', but instead use Bio.SeqIO to read in a sequence file for you (see the examples below). However, creating a SeqRecord by hand is also quite simple. SeqRecord objects from scratch To create a SeqRecord, at a minimum you just need a Seq object:
from Bio.Seq import Seq
from Bio.SeqRecord import SeqRecord  # needed for the SeqRecord class used below

simple_seq = Seq("GATC")
simple_seq_r = SeqRecord(simple_seq)
notebooks/04 - Sequence Annotation objects.ipynb
tiagoantao/biopython-notebook
mit
You can also pass the id, name and description to the initialization function; if you don't, they will be set to placeholder strings indicating they are unknown, and can be modified subsequently:
simple_seq_r.id simple_seq_r.id = "AC12345" simple_seq_r.description = "Made up sequence I wish I could write a paper about" print(simple_seq_r.description) simple_seq_r.seq print(simple_seq_r.seq)
notebooks/04 - Sequence Annotation objects.ipynb
tiagoantao/biopython-notebook
mit
Including an identifier is very important if you want to output your SeqRecord to a file. You would normally include this when creating the object:
simple_seq = Seq("GATC") simple_seq_r = SeqRecord(simple_seq, id="AC12345")
notebooks/04 - Sequence Annotation objects.ipynb
tiagoantao/biopython-notebook
mit
As mentioned above, the SeqRecord has a dictionary attribute annotations. This is used for any miscellaneous annotations that don't fit under one of the other more specific attributes. Adding annotations is easy, and just involves dealing directly with the annotation dictionary:
simple_seq_r.annotations["evidence"] = "None. I just made it up." print(simple_seq_r.annotations) print(simple_seq_r.annotations["evidence"])
notebooks/04 - Sequence Annotation objects.ipynb
tiagoantao/biopython-notebook
mit
Working with per-letter annotations is similar: letter_annotations is a dictionary-like attribute which will let you assign any Python sequence (i.e. a string, list or tuple) of the same length as the sequence:
simple_seq_r.letter_annotations["phred_quality"] = [40, 40, 38, 30] print(simple_seq_r.letter_annotations) print(simple_seq_r.letter_annotations["phred_quality"])
notebooks/04 - Sequence Annotation objects.ipynb
tiagoantao/biopython-notebook
mit
The dbxrefs and features attributes are just Python lists, and should be used to store strings and SeqFeature objects (discussed later) respectively. SeqRecord objects from FASTA files This example uses a fairly large FASTA file containing the whole sequence for *Yersinia pestis* biovar Microtus str. 91001 plasmid pPCP1, originally downloaded from the NCBI. This file is included with the Biopython unit tests under the GenBank folder, or online (http://biopython.org/SRC/biopython/Tests/GenBank/NC_005816.fna) from our website. The file starts like this - and you can check there is only one record present (i.e. only one line starting with a greater-than symbol):

>gi|45478711|ref|NC_005816.1| Yersinia pestis biovar Microtus ... pPCP1, complete sequence
TGTAACGAACGGTGCAATAGTGATCCACACCCAACGCCTGAAATCAGATCCAGGGGGTAATCTGCTCTCC
...

In a previous notebook you will have seen the function Bio.SeqIO.parse used to loop over all the records in a file as SeqRecord objects. The Bio.SeqIO module has a sister function for use on files which contain just one record, which we'll use here:
from Bio import SeqIO record = SeqIO.read("data/NC_005816.fna", "fasta") record
notebooks/04 - Sequence Annotation objects.ipynb
tiagoantao/biopython-notebook
mit
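For contrast with the read call above, here is the sibling Bio.SeqIO.parse loop mentioned earlier; on this single-record file it prints exactly one line:

from Bio import SeqIO

# parse returns an iterator of SeqRecord objects, one per record in the file.
for rec in SeqIO.parse("data/NC_005816.fna", "fasta"):
    print(rec.id, len(rec.seq))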
Now, let's have a look at the key attributes of this SeqRecord individually - starting with the seq attribute which gives you a Seq object:
record.seq
notebooks/04 - Sequence Annotation objects.ipynb
tiagoantao/biopython-notebook
mit
Next, the identifiers and description:
record.id record.name record.description
notebooks/04 - Sequence Annotation objects.ipynb
tiagoantao/biopython-notebook
mit
As you can see above, the first word of the FASTA record's title line (after removing the greater-than symbol) is used for both the id and name attributes. The whole title line (after removing the greater-than symbol) is used for the record description. This is deliberate, partly for backwards-compatibility reasons, but it also makes sense if you have a FASTA file like this:

>Yersinia pestis biovar Microtus str. 91001 plasmid pPCP1
TGTAACGAACGGTGCAATAGTGATCCACACCCAACGCCTGAAATCAGATCCAGGGGGTAATCTGCTCTCC
...

Note that none of the other annotation attributes get populated when reading a FASTA file:
record.dbxrefs record.annotations record.letter_annotations record.features
notebooks/04 - Sequence Annotation objects.ipynb
tiagoantao/biopython-notebook
mit
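The record id above follows NCBI's old pipe-delimited convention (gi|number|ref|accession|), so the GI number and accession can be pulled out by splitting on the pipe. A minimal sketch; the field positions assumed here only hold for this NCBI-style layout:

# record.id is 'gi|45478711|ref|NC_005816.1|' for this file.
fields = record.id.split("|")
gi_number = fields[1]   # '45478711'
accession = fields[3]   # 'NC_005816.1'
print(gi_number, accession)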
In this case our example FASTA file was from the NCBI, and they have a fairly well-defined set of conventions for formatting their FASTA lines. This means it is possible to parse this information and extract the GI number and accession, as the sketch above shows. However, FASTA files from other sources vary, so this isn't possible in general. SeqRecord objects from GenBank files As in the previous example, we're going to look at the whole sequence for Yersinia pestis biovar Microtus str. 91001 plasmid pPCP1, originally downloaded from the NCBI, but this time as a GenBank file. This file contains a single record (i.e. only one LOCUS line) and starts:

LOCUS       NC_005816               9609 bp    DNA     circular BCT 21-JUL-2008
DEFINITION  Yersinia pestis biovar Microtus str. 91001 plasmid pPCP1, complete
            sequence.
ACCESSION   NC_005816
VERSION     NC_005816.1  GI:45478711
PROJECT     GenomeProject:10638
...

Again, we'll use Bio.SeqIO to read this file in, and the code is almost identical to that used above for the FASTA file:
record = SeqIO.read("data/NC_005816.gb", "genbank") record
notebooks/04 - Sequence Annotation objects.ipynb
tiagoantao/biopython-notebook
mit
You should be able to spot some differences already! Taking the attributes individually, the sequence string is the same as before, but this time Bio.SeqIO has been able to automatically assign a more specific alphabet:
record.seq
notebooks/04 - Sequence Annotation objects.ipynb
tiagoantao/biopython-notebook
mit
The name comes from the LOCUS line, while the id includes the version suffix. The description comes from the DEFINITION line:
record.id record.name record.description
notebooks/04 - Sequence Annotation objects.ipynb
tiagoantao/biopython-notebook
mit